1
Bennett MA, Petro LS, Abbatecola C, Muckli LF. Retinotopic biases in contextual feedback signals to V1 for object and scene processing. Curr Res Neurobiol 2025; 8:100143. [PMID: 39810940 PMCID: PMC11731975 DOI: 10.1016/j.crneur.2024.100143]
Abstract
Identifying the objects embedded in natural scenes relies on recurrent processing between lower and higher visual areas. How is cortical feedback information related to objects and scenes organised in lower visual areas? The spatial organisation of cortical feedback converging in early visual cortex during object and scene processing could be retinotopically specific, as it is coded in V1, or object centred, as coded in higher areas, or both. Here, we characterise object- and scene-related feedback information to V1. Participants identified foreground objects or background scenes in images with occluded central and peripheral subsections, allowing us to isolate feedback activity to foveal and peripheral regions of V1. Using fMRI and multivoxel pattern classification, we found that background scene information is projected to both foveal and peripheral V1 but can be disrupted in the fovea by a sufficiently demanding object discrimination task, during which we found evidence of foveal object decoding when using naturalistic stimuli. We suggest that during scene perception, feedback connections project an automatic sketch of occluded information back to the predicted retinotopic locations in earlier visual areas. In the case of a cognitive task, however, feedback pathways project content to foveal retinotopic space, potentially for introspection, functioning as a cognitive active blackboard and not necessarily predicting the object's location. This feedback architecture could reflect the internal mapping in V1 of the brain's endogenous models of the visual environment, which are used to predict perceptual inputs.
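For readers unfamiliar with the multivoxel pattern classification used here, the sketch below shows the general recipe in Python with scikit-learn; the data, shapes, and labels are simulated placeholders, not the study's stimuli or parameters.

```python
# Minimal MVPA sketch: classify stimulus condition from V1 voxel patterns.
# All data here are simulated; shapes and labels are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200                # hypothetical single-trial patterns
X = rng.normal(size=(n_trials, n_voxels))    # stand-in for voxel responses
y = rng.integers(0, 2, size=n_trials)        # two conditions, e.g. object vs scene

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)    # chance level is ~0.5 here
print(f"decoding accuracy: {scores.mean():.2f}")
```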
Affiliation(s)
- Matthew A. Bennett
- Institute of Neuroscience, Université Catholique de Louvain, Place Cardinal Mercier 10/L3.05.01, 1348, Louvain-la-Neuve, Belgium
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom
- Lucy S. Petro
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom
- Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Clement Abbatecola
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom
- Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Lars F. Muckli
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom
- Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, United Kingdom
2
Matuszewski J, Bola Ł, Collignon O, Marchewka A. Similar Computational Hierarchies for Reading and Speech in the Occipital Cortex of Sighted and Blind: Converging Evidence from fMRI and Chronometric TMS. J Neurosci 2025; 45:e1153242024. [PMID: 40032525 PMCID: PMC12079739 DOI: 10.1523/jneurosci.1153-24.2024]
Abstract
High-level perception results from interactions between hierarchical brain systems responsive to gradually increasing feature complexities. During reading, the initial evaluation of simple visual features in the early visual cortex (EVC) is followed by orthographic and lexical computations in the ventral occipitotemporal cortex (vOTC). While similar visual regions are engaged in tactile Braille reading in congenitally blind people, it is unclear whether the visual network maintains or reorganizes its hierarchy for reading in this population. Combining fMRI and chronometric transcranial magnetic stimulation (TMS), our study revealed a clear correspondence between sighted and blind individuals (both male and female) in how their occipital cortices functionally support reading and speech processing. Using fMRI, we first observed that vOTC, but not EVC, showed an enhanced response to lexical vs nonlexical information in both groups and sensory modalities. Using TMS, we further found that, in both groups, the processing of written words and pseudowords was disrupted by EVC stimulation at both early and late time windows. In contrast, vOTC stimulation disrupted the processing of these written stimuli only when applied at late time windows, again in both groups. In the speech domain, we observed TMS effects only for meaningful words and only in the blind participants. Overall, our results suggest that, while deprived visual areas might extend their functional responses to other sensory modalities, the computational gradients between early and higher-order occipital regions are retained, at least for reading.
Affiliation(s)
- Jacek Matuszewski
- Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland
- Łukasz Bola
- Institute of Psychology, Polish Academy of Sciences, Warsaw 00-378, Poland
- Olivier Collignon
- Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne 1011, Switzerland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland
3
Thunell E, Peter M, Iravani B, Porada DK, Prenner K, Darki F, Lundström JN. Unisensory visual and auditory objects are processed in olfactory cortex, independently of odor association. Cortex 2025; 186:74-85. [PMID: 40250310 DOI: 10.1016/j.cortex.2025.04.002]
Abstract
Primary sensory cortices have been demonstrated to process sensory input from non-preferred sensory modalities, e.g., primary visual cortex reacting to auditory stimulation, bringing their presumed sensory specificity into question. Whether this reflects processing of the non-preferred stimulus per se or originates from cross-modal associations is debated. Visual/auditory objects typically have strong reciprocal associations; hence, it is difficult to address this question in these modalities. Here, we dissociate between the two competing hypotheses of whether this form of activation in primary cortices is caused by unisensory processing or cross-modal associations by turning to the olfactory system, where cross-modal associations are generally weaker. Using unisensory visual and auditory objects with odor associations ranging from none to strong, we show that the posterior piriform cortex, an area known to process odor objects, is activated by both sounds and pictures of objects. Critically, this activation is independent of the objects' odor associations, thereby demonstrating that the activity is not due to cross-modal associations. Using the Floyd-Warshall algorithm, we further show that the amygdala mediates condition-relevant information between the posterior piriform cortex and both the auditory and visual object-oriented cortices. Importantly, we replicate past findings of clear cross-modal processing in the visual and auditory systems. Our study demonstrates processing of non-olfactory input in olfactory cortices that is independent of cross-modal associations and contributes to a more nuanced view of modality specificity in olfactory, auditory, and visual cortices.
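For reference, the Floyd-Warshall algorithm named above computes all-pairs shortest paths on a weighted graph; a minimal sketch on a toy graph (not the study's connectivity data) follows.

```python
# Floyd-Warshall all-pairs shortest paths on a toy weighted graph.
# The adjacency matrix is a made-up example, not the study's fMRI network.
import numpy as np

INF = np.inf
D = np.array([[0.0, 2.0, INF, 5.0],
              [2.0, 0.0, 1.0, INF],
              [INF, 1.0, 0.0, 2.0],
              [5.0, INF, 2.0, 0.0]])

n = D.shape[0]
for k in range(n):
    # relax every pair (i, j) through intermediate node k
    D = np.minimum(D, D[:, [k]] + D[[k], :])

print(D)  # D[i, j] is now the shortest-path cost from node i to node j
```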
Affiliation(s)
- Evelina Thunell
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA.
- Moa Peter
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Behzad Iravani
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Neurology and Neurological Sciences, Stanford University, CA, USA.
- Danja K Porada
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Katharina Prenner
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Fahimeh Darki
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, PA, USA; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden.
4
Zhu M, Chen Y, Zheng J, Zhao P, Xia M, Tang Y, Wang F. Over-integration of visual network in major depressive disorder and its association with gene expression profiles. Transl Psychiatry 2025; 15:86. [PMID: 40097427 PMCID: PMC11914485 DOI: 10.1038/s41398-025-03265-y]
Abstract
Major depressive disorder (MDD) is a common psychiatric condition associated with aberrant functional connectivity in large-scale brain networks. However, it is unclear how network dysfunction is characterized by imbalance or derangement of modular interactions in MDD patients, and whether this disruption is associated with gene expression profiles. We included 262 MDD patients and 297 healthy controls, embarking on a comprehensive analysis of intrinsic brain activity using resting-state functional magnetic resonance imaging (R-fMRI). We assessed brain network integration by calculating the Participation Coefficient (PC) and conducted an analysis of intra- and inter-modular connections to reveal the dysconnectivity patterns underlying abnormal PC manifestations. We also explored the potential relationship between these graph theory measures and clinical symptom severity in MDD. Finally, we sought to uncover the association between aberrant graph theory measures and postmortem gene expression data sourced from the Allen Human Brain Atlas (AHBA). Relative to controls, MDD patients showed alterations in systemic functional connectivity. Specifically, increased PC within the bilateral visual network (VIS) was found, accompanied by elevated functional connectivities (FCs) between VIS and both higher-order networks and the Limbic network, contrasted by diminished FCs within the VIS and between the VIS and the sensorimotor network (SMN). The clinical correlations indicated positive associations between inter-VIS FCs and depressive symptoms, whereas negative correlations were noted between intra-VIS FCs and both depressive symptoms and cognitive dysfunction. The transcriptional profiles explained 21-23.5% of the variance in the altered brain network dysconnectivity pattern, with the most correlated genes enriched in trans-synaptic signaling and ion transport regulation. These results highlight the modular connectome dysfunctions characteristic of MDD and their linkage with gene expression profiles and clinical symptomatology, providing insight into the neurobiological underpinnings and holding potential implications for clinical management and therapeutic interventions in MDD.
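The Participation Coefficient used here is conventionally computed as PC_i = 1 − Σ_m (k_im / k_i)², where k_im is node i's connection strength to module m and k_i its total strength (Guimerà & Amaral's formulation). A small sketch under those assumptions, with random placeholder data rather than the study's connectomes, is below.

```python
# Participation coefficient sketch (Guimera & Amaral formulation).
# The connectivity matrix and module labels below are random placeholders.
import numpy as np

def participation_coefficient(W, modules):
    """W: (n, n) weighted undirected connectivity; modules: (n,) integer labels."""
    k = W.sum(axis=1)                          # total strength of each node
    pc = np.ones_like(k)
    for m in np.unique(modules):
        k_m = W[:, modules == m].sum(axis=1)   # strength into module m
        pc -= (k_m / k) ** 2
    return pc

rng = np.random.default_rng(1)
W = np.abs(rng.normal(size=(10, 10)))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
modules = rng.integers(0, 3, size=10)          # e.g., VIS / SMN / Limbic labels
print(participation_coefficient(W, modules))   # higher = more cross-module links
```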
Affiliation(s)
- Mingrui Zhu
- Department of Neurology, Liaoning Provincial People's Hospital, Shenyang, Liaoning, China
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
- Yifan Chen
- School of Public Health, Southeast University, Nanjing, China
- Early Intervention Unit, Department of Psychiatry, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Junjie Zheng
- Early Intervention Unit, Department of Psychiatry, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Functional Brain Imaging Institute of Nanjing Medical University, Nanjing, China
- Pengfei Zhao
- Early Intervention Unit, Department of Psychiatry, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Functional Brain Imaging Institute of Nanjing Medical University, Nanjing, China
- Mingrui Xia
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China.
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, P. R. China.
- Yanqing Tang
- Department of Psychiatry, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China.
- Fei Wang
- Department of Psychiatry, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning, China.
- Early Intervention Unit, Department of Psychiatry, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China.
- Functional Brain Imaging Institute of Nanjing Medical University, Nanjing, China.
- Department of Mental Health, School of Public Health, Nanjing Medical University, Nanjing, China.
5
Chang S, Zhang X, Cao Y, Pearson J, Meng M. Imageless imagery in aphantasia revealed by early visual cortex decoding. Curr Biol 2025; 35:591-599.e4. [PMID: 39798565 DOI: 10.1016/j.cub.2024.12.012]
Abstract
Activity in the early visual cortex is thought to tightly couple with conscious experience, including feedback-driven mental imagery. However, in aphantasia (a complete lack of visual imagery), the state of mental imagery, what takes its place, or how any activity relates to qualia remains unknown. This study analyzed univariate (amplitude) and multivariate (decoding) blood-oxygen-level-dependent (BOLD) signals in primary visual cortex during imagery attempts. "Imagery" content could be decoded equally well in both groups; however, unlike in those with imagery, neural signatures in those with validated aphantasia were ipsilateral and could not be cross-decoded with perceptual representations. Further, the perception-induced BOLD response was lower in those with aphantasia compared with controls. Together, these data suggest that an imagery-related representation, but with less or transformed sensory information, exists in the primary visual cortex of those with aphantasia. Our data challenge the classic view that activity in primary visual cortex should result in sensory qualia.
Affiliation(s)
- Shuai Chang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, South China Normal University, Guangzhou 510631, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xinyu Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, South China Normal University, Guangzhou 510631, China
- Yangjianyi Cao
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, South China Normal University, Guangzhou 510631, China
- Joel Pearson
- School of Psychology, University of New South Wales, Sydney, NSW 2052, Australia.
- Ming Meng
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, South China Normal University, Guangzhou 510631, China; Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou 510631, China.
6
Bidelman GM, York A, Pearson C. Neural correlates of phonetic categorization under auditory (phoneme) and visual (grapheme) modalities. Neuroscience 2025; 565:182-191. [PMID: 39631659 DOI: 10.1016/j.neuroscience.2024.11.079]
Abstract
This study assessed the neural mechanisms and relative saliency of categorization for speech sounds and comparable graphemes (i.e., visual letters) of the same phonetic label. Given that linguistic experience shapes categorical processing, and letter-speech sound matching plays a crucial role during early reading acquisition, we hypothesized that sound phoneme and visual grapheme tokens representing the same linguistic identity might recruit common neural substrates, despite originating from different sensory modalities. Behavioral and neuroelectric brain responses (ERPs) were acquired as participants categorized stimuli from sound (phoneme) and homologous letter (grapheme) continua, each spanning a /da/-/ga/ gradient. Behaviorally, listeners were faster and showed stronger categorization for phonemes compared to graphemes. At the neural level, multidimensional scaling of the EEG revealed responses self-organized in a categorical fashion such that tokens clustered within their respective modality beginning ∼150-250 ms after stimulus onset. Source-resolved ERPs further revealed modality-specific and overlapping brain regions supporting phonetic categorization. Left inferior frontal gyrus and auditory cortex showed stronger responses for sound category members compared to phonetically ambiguous tokens, whereas early visual cortices paralleled this categorical organization for graphemes. Auditory and visual categorization also recruited common visual association areas in extrastriate cortex but in opposite hemispheres (auditory = left; visual = right). Our findings reveal that both auditory and visual sensory cortices support categorical organization for phonetic labels within their respective modalities. However, a partial overlap in phoneme and grapheme processing among occipital brain areas implies the presence of an isomorphic, domain-general mapping for phonetic categories in the dorsal visual system.
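As a rough illustration of the multidimensional scaling step described above, the sketch below embeds a simulated token-by-token EEG dissimilarity matrix in two dimensions; all array shapes and numbers are placeholders, not the study's recordings.

```python
# MDS sketch: embed pairwise EEG response dissimilarities in 2-D.
# Responses are simulated; 14 tokens x 64 channels is an assumed shape.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
responses = rng.normal(size=(14, 64))
# Euclidean dissimilarity between every pair of token-evoked responses
dist = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)   # tokens from one category should cluster
print(coords.shape)                # (14, 2)
```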
Affiliation(s)
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA.
- Ashleigh York
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Mississippi Medical Center, Jackson, MS, USA
- Claire Pearson
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
7
Hu Y, Mohsenzadeh Y. Neural processing of naturalistic audiovisual events in space and time. Commun Biol 2025; 8:110. [PMID: 39843939 PMCID: PMC11754444 DOI: 10.1038/s42003-024-07434-5]
Abstract
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, when real-world audiovisual events are perceived, the specific brain regions and timings for processing different levels of information remain less investigated. To address this, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data while participants viewed videos with accompanying sounds. Our findings reveal early asymmetrical cross-modal interaction, with acoustic information represented in both early visual and auditory regions, while visual information was identified only in visual cortices. The visual and auditory features were processed with similar onset but different temporal dynamics. High-level categorical and semantic information emerged in multisensory association areas later in time, indicating late cross-modal integration and its distinct role in converging conceptual information. Comparing neural representations to a two-branch deep neural network model highlighted the necessity of early cross-modal connections to build a biologically plausible model of audiovisual perception. With EEG-fMRI fusion, we provided a spatiotemporally resolved account of neural activity during the processing of naturalistic audiovisual stimuli.
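The EEG-fMRI fusion mentioned above is commonly done with representational similarity analysis: correlating a time-resolved EEG dissimilarity structure with a region's fMRI dissimilarity structure. A hedged sketch on simulated data (condition counts, sensor counts, and voxel counts are all assumptions):

```python
# RSA-style EEG-fMRI fusion sketch; every array below is simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_time, n_cond = 100, 20
eeg = rng.normal(size=(n_time, n_cond, 64))   # time x conditions x sensors
fmri_roi = rng.normal(size=(n_cond, 500))     # conditions x voxels, one region

rdm_fmri = pdist(fmri_roi, metric="correlation")
fusion = np.array([
    spearmanr(pdist(eeg[t], metric="correlation"), rdm_fmri)[0]
    for t in range(n_time)
])  # one EEG-fMRI representational similarity value per time point
print(int(fusion.argmax()))  # time point where the region's geometry matches best
```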
Affiliation(s)
- Yu Hu
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Yalda Mohsenzadeh
- Western Institute for Neuroscience, Western University, London, ON, Canada.
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada.
- Department of Computer Science, Western University, London, ON, Canada.
8
Phillips WA, Bachmann T, Spratling MW, Muckli L, Petro LS, Zolnik T. Cellular psychology: relating cognition to context-sensitive pyramidal cells. Trends Cogn Sci 2025; 29:28-40. [PMID: 39353837 DOI: 10.1016/j.tics.2024.09.002]
Abstract
'Cellular psychology' is a new field of inquiry that studies dendritic mechanisms for adapting mental events to the current context, thus increasing their coherence, flexibility, effectiveness, and comprehensibility. Apical dendrites of neocortical pyramidal cells have a crucial role in cognition - those dendrites receive input from diverse sources, including feedback, and can amplify the cell's feedforward transmission if relevant in that context. Specialized subsets of inhibitory interneurons regulate this cooperative context-sensitive processing by increasing or decreasing amplification. Apical input has different effects on cellular output depending on whether we are awake, deeply asleep, or dreaming. Furthermore, wakeful thought and imagery may depend on apical input. High-resolution neuroimaging in humans supports and complements evidence on these cellular mechanisms from other mammals.
Affiliation(s)
- William A Phillips
- Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, FK9 4LA, UK.
- Talis Bachmann
- Institute of Psychology, University of Tartu, Tartu, Estonia.
- Michael W Spratling
- Department of Behavioral and Cognitive Sciences, University of Luxembourg, L-4366 Esch-Belval, Luxembourg
- Lars Muckli
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, G12 8QB, UK; Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, UK
- Lucy S Petro
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, G12 8QB, UK; Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, UK
- Timothy Zolnik
- Department of Biochemistry, Charité Universitätsmedizin Berlin, Berlin 10117, Germany; Department of Biology, Humboldt Universität zu Berlin, Berlin 10117, Germany
9
Paasonen J, Valjakka JS, Salo RA, Paasonen E, Tanila H, Michaeli S, Mangia S, Gröhn O. Whisker stimulation with different frequencies reveals non-uniform modulation of functional magnetic resonance imaging signal across sensory systems in awake rats. bioRxiv 2024:2024.11.13.623361. [PMID: 39605361 PMCID: PMC11601494 DOI: 10.1101/2024.11.13.623361]
Abstract
Primary sensory systems are classically considered to be separate units; however, there is now evidence of notable interactions between them. We examined the cross-sensory interplay by applying a quiet and motion-tolerant zero echo time functional magnetic resonance imaging (fMRI) technique to elucidate the evoked brain-wide responses to whisker pad stimulation in awake and anesthetized rats. Specifically, we characterized the brain-wide responses of core and non-core regions to whisker pad stimulation by varying the stimulation frequency, and determined whether isoflurane-medetomidine anesthesia, traditionally used in preclinical imaging, confounds investigations related to sensory integration. We demonstrated that unilateral whisker pad stimulation elicited robust activity not only along the whisker-mediated tactile system, but also in auditory, visual, high-order, and cerebellar regions, indicative of brain-wide cross-sensory and associative activity. By inspecting the response profiles to different stimulation frequencies and temporal signal characteristics, we observed that the non-core regions responded to stimulation in a very different way from the primary sensory system, likely reflecting different encoding modes between primary sensory, cross-sensory, and integrative processing. Lastly, while the activity evoked in low-order sensory structures could be reliably detected under anesthesia, the activity in high-order processing regions and the complex differences between primary, cross-sensory, and associative systems were visible only in the awake state. We conclude that our study reveals novel aspects of the cross-sensory interplay of the whisker-mediated tactile system and, importantly, that these would be difficult to observe in anesthetized rats.
Affiliation(s)
- Jaakko Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Juha S. Valjakka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Raimo A. Salo
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Ekaterina Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- NeuroCenter, Kuopio University Hospital, Kuopio, Finland
- Heikki Tanila
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Shalom Michaeli
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Silvia Mangia
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Olli Gröhn
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
10
Montabes de la Cruz BM, Abbatecola C, Luciani RS, Paton AT, Bergmann J, Vetter P, Petro LS, Muckli LF. Decoding sound content in the early visual cortex of aphantasic participants. Curr Biol 2024; 34:5083-5089.e3. [PMID: 39419030 DOI: 10.1016/j.cub.2024.09.008]
Abstract
Listening to natural auditory scenes leads to distinct neuronal activity patterns in the early visual cortex (EVC) of blindfolded sighted and congenitally blind participants.1,2 This pattern of sound decoding is organized by eccentricity, with the accuracy of auditory information increasing from foveal to far peripheral retinotopic regions in the EVC (V1, V2, and V3). This functional organization by eccentricity is predicted by primate anatomical connectivity,3,4 where cortical feedback projections from auditory and other non-visual areas preferentially target the periphery of early visual areas. In congenitally blind participants, top-down feedback projections to the visual cortex proliferate,5 which might account for even higher sound-decoding accuracy in the EVC compared with blindfolded sighted participants.2 In contrast, studies in participants with aphantasia suggest an impairment of feedback projections to early visual areas, leading to a loss of visual imagery experience.6,7 This raises the question of whether impaired visual feedback pathways in aphantasia also reduce the transmission of auditory information to early visual areas. We presented auditory scenes to 23 blindfolded aphantasic participants. We found overall decreased sound decoding in early visual areas compared to blindfolded sighted ("control") and blind participants. We further explored this difference by modeling eccentricity effects across the blindfolded control, blind, and aphantasia datasets, and with a whole-brain searchlight analysis. Our findings suggest that the feedback of auditory content to the EVC is reduced in aphantasic participants. Reduced top-down projections might lead to both less sound decoding and reduced subjective experience of visual imagery.
Affiliation(s)
- Belén M Montabes de la Cruz
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK
- Clement Abbatecola
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Roberto S Luciani
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; School of Computing Science, College of Science and Engineering, University of Glasgow, Glasgow G12 8QQ, UK
- Angus T Paton
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Johanna Bergmann
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1, Leipzig 04303, Germany
- Petra Vetter
- Visual & Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg 1700, Switzerland
- Lucy S Petro
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Lars F Muckli
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK.
11
Cabbai G, Racey C, Simner J, Dance C, Ward J, Forster S. Sensory representations in primary visual cortex are not sufficient for subjective imagery. Curr Biol 2024; 34:5073-5082.e5. [PMID: 39419033 DOI: 10.1016/j.cub.2024.09.062]
Abstract
The contemporary definition of mental imagery is characterized by two aspects: a sensory representation that resembles, but does not result from, perception, and an associated subjective experience. Neuroimaging demonstrated imagery-related sensory representations in primary visual cortex (V1) that show striking parallels to perception. However, it remains unclear whether these representations always reflect subjective experience or if they can be dissociated from it. We addressed this question by comparing sensory representations and subjective imagery among visualizers and aphantasics, the latter with an impaired ability to experience imagery. Importantly, to test for the presence of sensory representations independently of the ability to generate imagery on demand, we examined both spontaneous and voluntary imagery forms. Using multivariate fMRI, we tested for decodable sensory representations in V1 and subjective visual imagery reports that occurred either spontaneously (during passive listening of evocative sounds) or in response to the instruction to voluntarily generate imagery of the sound content (always while blindfolded inside the scanner). Among aphantasics, V1 decoding of sound content was at chance during voluntary imagery, and lower than in visualizers, but it succeeded during passive listening, despite them reporting no imagery. In contrast, in visualizers, decoding accuracy in V1 was greater in voluntary than spontaneous imagery (while being positively associated with the reported vividness of both imagery types). Finally, for both conditions, decoding in precuneus was successful in visualizers but at chance for aphantasics. Together, our findings show that V1 representations can be dissociated from subjective imagery, while implicating a key role of precuneus in the latter.
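The comparison between conditions above relies on training a decoder in one condition and testing it in another; a minimal cross-decoding sketch (simulated data, assumed shapes, not the study's scans) looks like this:

```python
# Cross-decoding sketch: train on voluntary-imagery trials, test on
# spontaneous ones. Patterns and labels are simulated placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X_voluntary = rng.normal(size=(80, 150))       # V1 patterns, one per trial
y_voluntary = rng.integers(0, 2, size=80)      # sound-content labels
X_spontaneous = rng.normal(size=(80, 150))
y_spontaneous = rng.integers(0, 2, size=80)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X_voluntary, y_voluntary)              # train in one condition...
print(clf.score(X_spontaneous, y_spontaneous)) # ...test generalization in the other
```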
Affiliation(s)
- Giulia Cabbai
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK.
- Chris Racey
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK
- Julia Simner
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK
- Carla Dance
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK
- Jamie Ward
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK
- Sophie Forster
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9RH, UK
12
Tsushima Y, Nakayama K, Okuya T, Koiwa H, Ando H, Watanabe Y. Brain activities in the auditory area and insula represent stimuli evoking emotional response. Sci Rep 2024; 14:21335. [PMID: 39266687 PMCID: PMC11393461 DOI: 10.1038/s41598-024-72112-9]
Abstract
Cinema, a modern titan of entertainment, holds the power to move people through the artful manipulation of auditory and visual stimuli. Despite this, the mechanisms by which sensory stimuli elicit emotional responses are unknown. Thus, this study evaluated which brain regions were involved when sensory stimuli evoke auditory- or visual-driven emotions during film viewing. Using functional magnetic resonance imaging (fMRI) decoding techniques, we found that brain activities in the auditory area and insula represent the stimuli that evoke emotional responses. The observation of brain activities in these regions could provide further insights into these mechanisms for the improvement of film-making, as well as the development of novel neural techniques in neuroscience. In the near future, such "neuro-designed" products and applications might gain in popularity.
Affiliation(s)
- Yoshiaki Tsushima
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Koharu Nakayama
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Tataramiyakodani, Kyotanabe, Kyoto, 610-0321, Japan
- Teruhisa Okuya
- Panasonic Holdings Corporation, 3-1-1 Yagumo-Naka-Machi, Moriguchi City, Osaka, 570-8501, Japan
- Hiroko Koiwa
- Electric Works Company, Panasonic Corporation, Kadoma, Osaka, 571-8686, Japan
- Hiroshi Ando
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Universal Communication Research Institute, National Institute of Information and Communications Technology (NICT), 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan
- Yoshiaki Watanabe
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Tataramiyakodani, Kyotanabe, Kyoto, 610-0321, Japan
13
Hu J, Badde S, Vetter P. Auditory guidance of eye movements toward threat-related images in the absence of visual awareness. Front Hum Neurosci 2024; 18:1441915. [PMID: 39175660 PMCID: PMC11338778 DOI: 10.3389/fnhum.2024.1441915]
Abstract
The human brain is sensitive to threat-related information even when we are not aware of this information. For example, fearful faces attract gaze in the absence of visual awareness. Moreover, information in different sensory modalities interacts in the absence of awareness; for example, the detection of suppressed visual stimuli is facilitated by simultaneously presented congruent sounds or tactile stimuli. Here, we combined these two lines of research and investigated whether threat-related sounds could facilitate visual processing of threat-related images suppressed from awareness such that they attract eye gaze. We suppressed threat-related images of cars and neutral images of human hands from visual awareness using continuous flash suppression and tracked observers' eye movements while presenting congruent or incongruent sounds (finger snapping and car engine sounds). Indeed, threat-related car sounds guided the eyes toward suppressed car images: participants looked longer at the hidden car images than at any other part of the display. In contrast, neither congruent nor incongruent sounds had a significant effect on eye responses to suppressed finger images. Overall, our results suggest that only in a danger-related context do semantically congruent sounds modulate eye movements to images suppressed from awareness, highlighting the prioritisation of eye responses to threat-related stimuli in the absence of visual awareness.
Affiliation(s)
- Junchao Hu
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA, United States
- Petra Vetter
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
14
Chen Y, Beech P, Yin Z, Jia S, Zhang J, Yu Z, Liu JK. Decoding dynamic visual scenes across the brain hierarchy. PLoS Comput Biol 2024; 20:e1012297. [PMID: 39093861 PMCID: PMC11324145 DOI: 10.1371/journal.pcbi.1012297]
Abstract
Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a crucial investigation in neuroscience. Central to this pursuit is the exploration of how the brain represents visual information across its hierarchical architecture. A prominent challenge resides in discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research efforts have been made to characterize individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli, as they traverse this hierarchical landscape, remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and utilize deep neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our decoding model adeptly deciphers visual scenes from the neural spiking patterns of each distinct brain area. Comparing decoding performance across regions revealed notably strong encoding in the visual cortex and subcortical nuclei, in contrast to relatively weak encoding in hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge in visual coding related to artificial visual stimuli and illuminate the functional role of these deeper brain regions using dynamic stimuli. Consequently, our results suggest a novel perspective on the utility of decoding neural network models as a metric for quantifying the encoding quality of dynamic natural visual scenes represented by neural responses, thereby advancing our comprehension of visual coding within the complex hierarchy of the brain.
Affiliation(s)
- Ye Chen
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Peter Beech
- School of Computing, University of Leeds, Leeds, United Kingdom
- Ziwei Yin
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Shanshan Jia
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jiayi Zhang
- Institutes of Brain Science, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai, China
- Zhaofei Yu
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jian K. Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
15
Bidelman GM, York A, Pearson C. Neural correlates of phonetic categorization under auditory (phoneme) and visual (grapheme) modalities. bioRxiv 2024:2024.07.24.604940. [PMID: 39211275 PMCID: PMC11361091 DOI: 10.1101/2024.07.24.604940]
Abstract
We tested whether the neural mechanisms of phonetic categorization are specific to speech sounds or generalize to graphemes (i.e., visual letters) of the same phonetic label. Given that linguistic experience shapes categorical processing, and letter-speech sound matching plays a crucial role during early reading acquisition, we hypothesized that sound phoneme and visual grapheme tokens representing the same linguistic identity might recruit common neural substrates, despite originating from different sensory modalities. Behavioral and neuroelectric brain responses (ERPs) were acquired as participants categorized stimuli from sound (phoneme) and homologous letter (grapheme) continua, each spanning a /da/-/ga/ gradient. Behaviorally, listeners were faster and showed stronger categorization for phonemes compared to graphemes. At the neural level, multidimensional scaling of the EEG revealed responses self-organized in a categorical fashion such that tokens clustered within their respective modality beginning ∼150-250 ms after stimulus onset. Source-resolved ERPs further revealed modality-specific and overlapping brain regions supporting phonetic categorization. Left inferior frontal gyrus and auditory cortex showed stronger responses for sound category members compared to phonetically ambiguous tokens, whereas early visual cortices paralleled this categorical organization for graphemes. Auditory and visual categorization also recruited common visual association areas in extrastriate cortex but in opposite hemispheres (auditory = left; visual = right). Our findings reveal that both auditory and visual sensory cortices support categorical organization for phonetic labels within their respective modalities. However, a partial overlap in phoneme and grapheme processing among occipital brain areas implies the presence of an isomorphic, domain-general mapping for phonetic categories in the dorsal visual system.
16
Bravo F, Glogowski J, Stamatakis EA, Herfert K. Dissonant music engages early visual processing. Proc Natl Acad Sci U S A 2024; 121:e2320378121. [PMID: 39008675 PMCID: PMC11287129 DOI: 10.1073/pnas.2320378121]
Abstract
The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked a response in the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributive crossmodal effects on early visual encoding during naturalistic film viewing.
Affiliation(s)
- Fernando Bravo
- Department of Preclinical Imaging and Radiopharmacy, University of Tübingen, Tübingen 72076, Germany
- Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Addenbrooke’s Hospital, Cambridge CB2 0SP, United Kingdom
- Department of Clinical Neurosciences, University of Cambridge, Addenbrooke’s Hospital, Cambridge CB2 0SP, United Kingdom
- Institut für Kunst- und Musikwissenschaft, Division of Musicology, Technische Universität Dresden, Dresden 01219, Germany
- Jana Glogowski
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin 12489, Germany
- Emmanuel Andreas Stamatakis
- Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Addenbrooke’s Hospital, Cambridge CB2 0SP, United Kingdom
- Department of Clinical Neurosciences, University of Cambridge, Addenbrooke’s Hospital, Cambridge CB2 0SP, United Kingdom
- Kristina Herfert
- Department of Preclinical Imaging and Radiopharmacy, University of Tübingen, Tübingen 72076, Germany
17
Williams JR, Störmer VS. Cutting Through the Noise: Auditory Scenes and Their Effects on Visual Object Processing. Psychol Sci 2024; 35:814-824. [PMID: 38889285 DOI: 10.1177/09567976241237737]
Abstract
Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego
- Department of Psychological and Brain Sciences, Dartmouth College
18
Aguado-López B, Palenciano AF, Peñalver JMG, Díaz-Gutiérrez P, López-García D, Avancini C, Ciria LF, Ruz M. Proactive selective attention across competition contexts. Cortex 2024; 176:113-128. [PMID: 38772050 DOI: 10.1016/j.cortex.2024.04.009]
Abstract
Selective attention is a cognitive function that helps filter out unwanted information. Theories such as the biased competition model (Desimone & Duncan, 1995) explain how attentional templates bias processing towards targets in contexts where multiple stimuli compete for resources. However, it is unclear how the anticipation of different levels of competition proactively influences the nature of attentional templates. In this study, we used electroencephalography (EEG) to investigate how the anticipated demands of attentional selection (either high or low stimulus competition contexts) modulate target-specific preparatory brain activity and its relationship with task performance. To do so, participants performed a sex/gender judgment task in a cue-target paradigm where, depending on the block, target and distractor stimuli appeared simultaneously (high competition) or sequentially (low competition). Multivariate Pattern Analysis (MVPA) showed that, in both competition contexts, there was a preactivation of the target category to select, with a ramping-up profile at the end of the preparatory interval. However, cross-classification showed no generalization across competition conditions, suggesting different preparatory formats. Notably, time-frequency analyses showed differences between anticipated competition demands, with higher theta band power for high than for low competition; this theta activity mediated the impact of subsequent stimulus competition on behavioral performance. Overall, our results show that, whereas preactivation of the internal templates associated with the category to select is engaged in advance in both high and low competition contexts, their underlying neural patterns differ. In addition, these codes could not be associated with theta power, suggesting that they reflect different preparatory processes. These findings are crucial for increasing our understanding of the nature of top-down processes across different contexts.
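For the time-frequency result above, theta-band power is often estimated per epoch with a spectral estimate such as Welch's method; a small sketch on a synthetic signal follows (the sampling rate and band edges are assumptions, not the study's parameters).

```python
# Theta-band (4-8 Hz) power sketch for one EEG epoch, using Welch's method.
# The signal is synthetic and the sampling rate is an assumed 500 Hz.
import numpy as np
from scipy.signal import welch

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(5)
epoch = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.normal(size=t.size)  # 6 Hz + noise

freqs, psd = welch(epoch, fs=fs, nperseg=512)
theta = (freqs >= 4.0) & (freqs <= 8.0)
print(psd[theta].mean())   # mean theta power; compare across conditions
```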
Affiliation(s)
- Blanca Aguado-López
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain
- Ana F Palenciano
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain
- José M G Peñalver
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain
- Paloma Díaz-Gutiérrez
- Department of Management, Faculty of Business and Economics, University of Antwerp, 2000, Belgium
- David López-García
- Data Science & Computational Intelligence Institute, University of Granada, CP 18071, Spain
- Chiara Avancini
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain
- Luis F Ciria
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain
- María Ruz
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada 18071, Spain.
19
Ueda R, Sakakura K, Mitsuhashi T, Sonoda M, Firestone E, Kuroda N, Kitazawa Y, Uda H, Luat AF, Johnson EL, Ofen N, Asano E. Cortical and white matter substrates supporting visuospatial working memory. Clin Neurophysiol 2024; 162:9-27. [PMID: 38552414 PMCID: PMC11102300 DOI: 10.1016/j.clinph.2024.03.008]
Abstract
OBJECTIVE In tasks involving new visuospatial information, we rely on working memory, supported by a distributed brain network. We investigated the dynamic interplay between brain regions, including cortical and white matter structures, to understand how neural interactions change with different memory loads and across trials, and how they affect working memory performance. METHODS Patients performed an immediate spatial recall task during intracranial EEG monitoring. We charted the dynamics of cortical high-gamma activity and associated functional connectivity modulations in white matter tracts. RESULTS Elevated memory loads were linked to enhanced functional connectivity through occipital longitudinal tracts but decreased connectivity through the arcuate, uncinate, and superior longitudinal fasciculi. As task familiarity grew, high-gamma activity increased in the posterior inferior frontal gyrus (pIFG) and functional connectivity diminished across a network spanning the frontal, parietal, and temporal lobes. Early pIFG high-gamma activity was predictive of successful recall; including this metric in a logistic regression model yielded an accuracy of 0.76. CONCLUSIONS Optimizing visuospatial working memory through practice is tied to early pIFG activation and decreased reliance on task-irrelevant neural pathways. SIGNIFICANCE This study expands our knowledge of human adaptation for visuospatial working memory by showing the spatiotemporal dynamics of cortical network modulations through white matter tracts.
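As a concrete illustration of the reported classification result (an accuracy of 0.76 from a logistic regression including early pIFG high-gamma), the following sketch shows a cross-validated logistic regression on simulated trials; the feature construction and all numbers are assumptions for illustration only.

```python
# Minimal sketch: cross-validated logistic regression predicting recall
# success from early pIFG high-gamma amplitude (simulated values).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# One mean high-gamma amplitude per trial; label = recall success.
hg_pifg = rng.normal(size=(300, 1))
recalled = (hg_pifg[:, 0] + rng.normal(scale=1.5, size=300) > 0).astype(int)

model = LogisticRegression()
acc = cross_val_score(model, hg_pifg, recalled, cv=10, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")  # cf. 0.76 in the study
```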
Affiliation(s)
- Riyo Ueda: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; National Center Hospital, National Center of Neurology and Psychiatry, Tokyo 1878551, Japan
- Kazuki Sakakura: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurosurgery, Rush University Medical Center, Chicago, Illinois 60612, USA; Department of Neurosurgery, University of Tsukuba, Tsukuba 3058575, Japan
- Takumi Mitsuhashi: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurosurgery, Juntendo University, School of Medicine, Tokyo 1138421, Japan
- Masaki Sonoda: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurosurgery, Yokohama City University, Yokohama 2360004, Japan
- Ethan Firestone: Department of Physiology, Wayne State University, Detroit, Michigan 48202, USA
- Naoto Kuroda: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai 9808575, Japan
- Yu Kitazawa: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurology and Stroke Medicine, Yokohama City University, Yokohama 2360004, Japan
- Hiroshi Uda: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurosurgery, Osaka Metropolitan University Graduate School of Medicine, Osaka 5458585, Japan
- Aimee F Luat: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurology, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Pediatrics, Central Michigan University, Mt. Pleasant, Michigan 48858, USA
- Elizabeth L Johnson: Departments of Medical Social Sciences, Pediatrics, and Psychology, Northwestern University, Chicago, Illinois 60611, USA
- Noa Ofen: Life-Span Cognitive Neuroscience Program, Institute of Gerontology and Merrill Palmer Skillman Institute, Wayne State University, Detroit, Michigan 48202, USA; Department of Psychology, Wayne State University, Detroit, Michigan 48202, USA
- Eishi Asano: Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Department of Neurology, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan 48201, USA; Translational Neuroscience Program, Wayne State University, Detroit, Michigan 48201, USA

20
Kóbor A, Janacsek K, Hermann P, Zavecz Z, Varga V, Csépe V, Vidnyánszky Z, Kovács G, Nemeth D. Finding Pattern in the Noise: Persistent Implicit Statistical Knowledge Impacts the Processing of Unpredictable Stimuli. J Cogn Neurosci 2024; 36:1239-1264. [PMID: 38683699 DOI: 10.1162/jocn_a_02173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/02/2024]
Abstract
Humans can extract statistical regularities from the environment to predict upcoming events. Previous research has shown that implicitly acquired statistical knowledge remains persistent and continues to influence behavior even when the regularities are no longer present in the environment. Here, in an fMRI experiment, we investigated how the persistence of statistical knowledge is represented in the brain. Participants (n = 32) completed a visual, four-choice, reaction-time (RT) task containing statistical regularities. Two types of blocks alternated throughout the task: one with predictable statistical regularities and one with unpredictable ones. Participants were unaware of the statistical regularities and of their changing distribution across the blocks. Nevertheless, they acquired the regularities and showed significant statistical knowledge at the behavioral level not only in the predictable blocks but also in the unpredictable ones, albeit to a smaller extent. Brain activity in a range of cortical and subcortical areas, including the early visual cortex, the insula, the right inferior frontal gyrus, and the right globus pallidus/putamen, contributed to the acquisition of the statistical regularities. The right insula, inferior frontal gyrus, and hippocampus, as well as the bilateral angular gyrus, seemed to play a role in maintaining this statistical knowledge. The results altogether suggest that statistical knowledge can be exploited in a relevant, predictable context as well as transmitted to and retrieved in an irrelevant context without a predictable structure.
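A minimal sketch of how the behavioral statistical-knowledge effect described above is typically quantified: the reaction-time advantage for high- over low-probability events, computed separately for predictable and unpredictable blocks. The data frame layout, column names, and simulated effect sizes are illustrative assumptions, not the study's data.

```python
# Minimal sketch: statistical knowledge quantified as the median RT
# advantage for high- over low-probability events per block type
# (simulated trials; effect sizes are invented).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 400
trials = pd.DataFrame({
    "block": rng.choice(["predictable", "unpredictable"], n),
    "probability": rng.choice(["high", "low"], n),
})
base_rt = rng.normal(420, 40, n)
# Faster responses to probable events, more so in predictable blocks.
speedup = np.where(trials["probability"] == "high",
                   np.where(trials["block"] == "predictable", 25, 10), 0)
trials["rt_ms"] = base_rt - speedup

medians = trials.groupby(["block", "probability"])["rt_ms"].median().unstack()
print(medians["low"] - medians["high"])  # RT advantage per block type
```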
Affiliation(s)
- Andrea Kóbor: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Karolina Janacsek: Centre of Thinking and Learning, Institute for Lifecourse Development, School of Human Sciences, University of Greenwich, United Kingdom; ELTE Eötvös Loránd University, Hungary
- Petra Hermann: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Vera Varga: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary; University of Pannonia, Hungary
- Valéria Csépe: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary; University of Pannonia, Hungary
- Zoltán Vidnyánszky: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Dezso Nemeth: INSERM, CRNL U1028 UMR5292, France; ELTE Eötvös Loránd University & HUN-REN Research Centre for Natural Sciences, Hungary; University of Atlántico Medio, Spain

21
Mares I, Smith FW, Goddard EJ, Keighery L, Pappasava M, Ewing L, Smith ML. Effects of expectation on face perception and its association with expertise. Sci Rep 2024; 14:9402. [PMID: 38658575 PMCID: PMC11043383 DOI: 10.1038/s41598-024-59284-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Accepted: 04/09/2024] [Indexed: 04/26/2024] Open
Abstract
Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated the probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively) while assessing each individual's level of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in a similar paradigm in the same participants, featuring the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation: we observed facilitatory and interference effects when targets were correctly or incorrectly expected, and these effects were associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when participants viewed identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
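The link reported above between peak decoding latency and behavioral facilitation can be illustrated with a simple per-participant correlation. The sketch below simulates subject-wise decoding time courses, so the effect size, time window, and all parameters are assumptions rather than the study's data.

```python
# Minimal sketch: correlating per-participant peak decoding latency with
# behavioral facilitation (all subject data simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subj = 67
times = np.linspace(-0.1, 0.6, 180)  # seconds relative to stimulus onset
# Simulated decoding time courses (AUC) with subject-specific peak times.
peaks = rng.normal(0.17, 0.03, (n_subj, 1))
auc = 0.5 + 0.1 * np.exp(-((times - peaks) ** 2) / 0.002)
auc += rng.normal(0, 0.01, (n_subj, times.size))
peak_latency = times[auc.argmax(axis=1)]

# Simulate a behavioral facilitation effect tied to earlier peaks.
facilitation_ms = -50 * peak_latency + rng.normal(0, 5, n_subj)

r, p = pearsonr(peak_latency, facilitation_ms)
print(f"r = {r:.2f}, p = {p:.3f}")
```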
Affiliation(s)
- Inês Mares: School of Psychological Sciences, Birkbeck College, University of London, London, UK; William James Center for Research, Ispa - Instituto Universitário, Lisbon, Portugal
- Fraser W Smith: School of Psychology, University of East Anglia, Norwich, UK
- E J Goddard: School of Psychological Sciences, Birkbeck College, University of London, London, UK
- Lianne Keighery: School of Psychological Sciences, Birkbeck College, University of London, London, UK; Department of Clinical and Movement Neurosciences, Queen Square Institute of Neurology, University College London, London, UK
- Michael Pappasava: School of Psychological Sciences, Birkbeck College, University of London, London, UK; Centre for Genomics and Child Health, Blizard Institute, Queen Mary University of London, London, UK
- Louise Ewing: School of Psychology, University of East Anglia, Norwich, UK
- Marie L Smith: School of Psychological Sciences, Birkbeck College, University of London, London, UK; Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK

22
Li AY, Ladyka-Wojcik N, Qazilbash H, Golestani A, Walther DB, Martin CB, Barense MD. Experience transforms crossmodal object representations in the anterior temporal lobes. eLife 2024; 13:e83382. [PMID: 38647143 PMCID: PMC11081630 DOI: 10.7554/elife.83382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Accepted: 04/19/2024] [Indexed: 04/25/2024] Open
Abstract
Combining information from multiple senses is essential to object recognition, core to the ability to learn concepts, make new inferences, and generalize across distinct entities. Yet how the mind combines sensory input into coherent crossmodal representations - the crossmodal binding problem - remains poorly understood. Here, we applied multi-echo fMRI across a 4-day paradigm, in which participants learned three-dimensional crossmodal representations created from well-characterized unimodal visual shape and sound features. Our novel paradigm decoupled the learned crossmodal object representations from their baseline unimodal shapes and sounds, thus allowing us to track the emergence of crossmodal object representations as they were learned by healthy adults. Critically, we found that two anterior temporal lobe structures - temporal pole and perirhinal cortex - differentiated learned from non-learned crossmodal objects, even when controlling for the unimodal features that composed those objects. These results provide evidence for integrated crossmodal object representations in the anterior temporal lobes that were different from the representations for the unimodal features. Furthermore, we found that perirhinal cortex representations were by default biased toward visual shape, but this initial visual bias was attenuated by crossmodal learning. Thus, crossmodal learning transformed perirhinal representations such that they were no longer predominantly grounded in the visual modality, which may be a mechanism by which object concepts gain their abstraction.
Affiliation(s)
- Aedan Yue Li: Department of Psychology, University of Toronto, Toronto, Canada
- Heba Qazilbash: Department of Psychology, University of Toronto, Toronto, Canada
- Ali Golestani: Department of Physics and Astronomy, University of Calgary, Calgary, Canada
- Dirk B Walther: Department of Psychology, University of Toronto, Toronto, Canada; Rotman Research Institute, Baycrest Health Sciences, North York, Canada
- Chris B Martin: Department of Psychology, Florida State University, Tallahassee, United States
- Morgan D Barense: Department of Psychology, University of Toronto, Toronto, Canada; Rotman Research Institute, Baycrest Health Sciences, North York, Canada

23
Mazo C, Baeta M, Petreanu L. Auditory cortex conveys non-topographic sound localization signals to visual cortex. Nat Commun 2024; 15:3116. [PMID: 38600132 PMCID: PMC11006897 DOI: 10.1038/s41467-024-47546-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Accepted: 04/02/2024] [Indexed: 04/12/2024] Open
Abstract
Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.
Affiliation(s)
- Camille Mazo: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Margarida Baeta: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Leopoldo Petreanu: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal

24
Dawes AJ, Keogh R, Pearson J. Multisensory subtypes of aphantasia: Mental imagery as supramodal perception in reverse. Neurosci Res 2024; 201:50-59. [PMID: 38029861 DOI: 10.1016/j.neures.2023.11.009] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Accepted: 11/20/2023] [Indexed: 12/01/2023]
Abstract
Cognitive neuroscience research on mental imagery has largely focused on the visual imagery modality in unimodal task contexts. Recent studies have uncovered striking individual differences in visual imagery capacity, with some individuals reporting a subjective absence of conscious visual imagery altogether ("aphantasia"). However, naturalistic mental imagery is often multi-sensory, and preliminary findings suggest that many individuals with aphantasia also report a subjective lack of mental imagery in other sensory domains (such as auditory or olfactory imagery). In this paper, we perform a series of cluster analyses on the multi-sensory imagery questionnaire scores of two large groups of aphantasic subjects, defining latent sub-groups in this sample population. We demonstrate that aphantasia is a heterogeneous phenomenon characterised by dominant sub-groups of individuals with visual aphantasia (those who report selective visual imagery absence) and multi-sensory aphantasia (those who report an inability to generate conscious mental imagery in any sensory modality). We replicate our findings in a second large sample and show that rarer aphantasia sub-types also exist, such as individuals with selectively preserved mental imagery in only one sensory modality (e.g. intact auditory imagery). We outline the implications of our findings for network theories of mental imagery, discussing how distinct aphantasia aetiologies with distinct self-report patterns might reveal alterations at various levels of the sensory processing hierarchy implicated in mental imagery.
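A minimal sketch of the kind of cluster analysis described above, recovering sub-groups from multi-sensory imagery questionnaire scores with k-means. The number of modalities, rating scale, and simulated group structure are illustrative assumptions, and the paper does not necessarily use k-means specifically.

```python
# Minimal sketch: k-means on multi-sensory imagery questionnaire scores
# (5 modality columns, 1-5 vividness scale; group structure simulated).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Visual aphantasia: absent visual imagery, intact other modalities.
visual_aphant = np.hstack([rng.uniform(1.0, 1.5, (60, 1)),
                           rng.uniform(2.5, 4.5, (60, 4))])
# Multi-sensory aphantasia: imagery absent in every modality.
multi_aphant = rng.uniform(1.0, 1.5, (60, 5))
scores = np.vstack([visual_aphant, multi_aphant])

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(StandardScaler().fit_transform(scores))
print(np.bincount(labels))  # sizes of the recovered sub-groups
```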
Affiliation(s)
- Rebecca Keogh: School of Psychological Sciences, Macquarie University, Sydney, Australia
- Joel Pearson: School of Psychology, University of New South Wales, Sydney, Australia

25
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. [PMID: 38010320 DOI: 10.1162/jocn_a_02092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
26
Bola Ł, Vetter P, Wenger M, Amedi A. Decoding Reach Direction in Early "Visual" Cortex of Congenitally Blind Individuals. J Neurosci 2023; 43:7868-7878. [PMID: 37783506 PMCID: PMC10648511 DOI: 10.1523/jneurosci.0376-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 08/22/2023] [Accepted: 08/26/2023] [Indexed: 10/04/2023] Open
Abstract
Motor actions, such as reaching or grasping, can be decoded from fMRI activity of early visual cortex (EVC) in sighted humans. This effect can depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that the actions of reaching in different directions can be reliably decoded from fMRI activity of EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for EVC to represent action-related information. We also demonstrate that, within EVC of blind humans, the accuracy of reach direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism of aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted, and suggest that the development of action representations in the human brain might be largely independent of visual experience.SIGNIFICANCE STATEMENT Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete, and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is EVC involved in these activities because of visual imagery? Here, we show robust reach direction representation in EVC of humans born blind. This demonstrates that EVC can represent actions independently of vision and visual imagery. Beyond EVC, we found that reach direction representation in blind humans is strongest in dorsal brain areas, critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.
Affiliation(s)
- Łukasz Bola: Institute of Psychology, Polish Academy of Sciences, Warsaw 00-378, Poland
- Petra Vetter: Visual & Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg 1700, Switzerland
- Mohr Wenger: Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Jerusalem 91120, Israel
- Amir Amedi: Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Jerusalem 91120, Israel; Baruch Ivcher Institute for Brain, Cognition & Technology, Baruch Ivcher School of Psychology, Reichman University, Interdisciplinary Center Herzliya, Herzliya 461010, Israel

27
Chen L, Cichy RM, Kaiser D. Alpha-frequency feedback to early visual cortex orchestrates coherent naturalistic vision. SCIENCE ADVANCES 2023; 9:eadi2321. [PMID: 37948520 PMCID: PMC10637741 DOI: 10.1126/sciadv.adi2321] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 10/12/2023] [Indexed: 11/12/2023]
Abstract
During naturalistic vision, the brain generates coherent percepts by integrating sensory inputs scattered across the visual field. Here, we asked whether this integration process is mediated by rhythmic cortical feedback. In electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) experiments, we experimentally manipulated integrative processing by changing the spatiotemporal coherence of naturalistic videos presented across visual hemifields. Our EEG data revealed that information about incoherent videos is coded in feedforward-related gamma activity while information about coherent videos is coded in feedback-related alpha activity, indicating that integration is indeed mediated by rhythmic activity. Our fMRI data identified scene-selective cortex and human middle temporal complex (hMT) as likely sources of this feedback. Analytically combining our EEG and fMRI data further revealed that feedback-related representations in the alpha band shape the earliest stages of visual processing in cortex. Together, our findings indicate that the construction of coherent visual experiences relies on cortical feedback rhythms that fully traverse the visual hierarchy.
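One common way to combine EEG and fMRI analytically is representational similarity fusion: correlating the time-resolved EEG representational dissimilarity matrix (RDM) with each candidate region's fMRI RDM. The abstract does not specify the exact fusion method used, so the sketch below is a generic, fully simulated illustration; region names, condition counts, and data are assumptions.

```python
# Minimal sketch: RSA-based EEG-fMRI fusion. Correlate the EEG RDM at each
# time point with candidate fMRI RDMs (all RDMs simulated).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_cond, n_times = 16, 100
n_pairs = n_cond * (n_cond - 1) // 2
eeg_rdms = rng.normal(size=(n_times, n_pairs))  # one RDM per time point

# Region RDMs from (conditions x voxels) response patterns.
fmri_rdm_scene = pdist(rng.normal(size=(n_cond, 300)), metric="correlation")
fmri_rdm_hmt = pdist(rng.normal(size=(n_cond, 300)), metric="correlation")

fusion_scene = [spearmanr(eeg_rdms[t], fmri_rdm_scene)[0] for t in range(n_times)]
fusion_hmt = [spearmanr(eeg_rdms[t], fmri_rdm_hmt)[0] for t in range(n_times)]
print(np.argmax(fusion_scene), np.argmax(fusion_hmt))  # peak-correspondence times
```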
Affiliation(s)
- Lixiang Chen: Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Radoslaw M. Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Daniel Kaiser: Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg 35032, Germany

28
Bai Y, Liu S, Zhu M, Wang B, Li S, Meng L, Shi X, Chen F, Jiang H, Jiang C. Perceptual Pattern of Cleft-Related Speech: A Task-fMRI Study on Typical Mandarin-Speaking Adults. Brain Sci 2023; 13:1506. [PMID: 38002467 PMCID: PMC10669275 DOI: 10.3390/brainsci13111506] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 09/30/2023] [Accepted: 10/17/2023] [Indexed: 11/26/2023] Open
Abstract
Congenital cleft lip and palate is a common deformity of the craniomaxillofacial region. The current study aimed to explore the perceptual pattern of cleft-related speech produced by Mandarin-speaking patients with repaired cleft palate, using task-based functional magnetic resonance imaging (task-fMRI). Three blocks of speech stimuli, comprising hypernasal speech, the glottal stop, and typical speech, were played to 30 typical adult listeners with no prior experience of cleft-related speech. Using a randomized block-design paradigm, the participants were instructed to assess the intelligibility of the stimuli while fMRI data were collected. Brain activation was compared among the three types of speech stimuli. Results revealed that greater blood-oxygen-level-dependent (BOLD) responses to the cleft-related glottal stop than to typical speech were localized in the right fusiform gyrus and the left inferior occipital gyrus. The regions responding to the contrast between the glottal stop and cleft-related hypernasal speech were located in the right fusiform gyrus. Stronger BOLD responses to hypernasal speech than to the glottal stop were localized in the left orbital part of the inferior frontal gyrus and the middle temporal gyrus. Stronger BOLD responses to typical speech than to the glottal stop were localized in the left inferior temporal gyrus, left superior temporal gyrus, left medial superior frontal gyrus, and right angular gyrus. There was no significant difference between hypernasal speech and typical speech. In conclusion, typical listeners engage different neural processes when perceiving cleft-related speech. Our findings lay a foundation for exploring the perceptual patterns of patients with repaired cleft palate.
Affiliation(s)
- Yun Bai: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Shaowei Liu: Department of Radiology, Jiangsu Province Hospital of Chinese Medicine, Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing 210004, China
- Mengxian Zhu: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Binbing Wang: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Sheng Li: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Liping Meng: Department of Children’s Healthcare, Women’s Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing 210004, China
- Xinghui Shi: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Fei Chen: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Hongbing Jiang: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
- Chenghui Jiang: Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China

29
Landelle C, Caron-Guyon J, Nazarian B, Anton J, Sein J, Pruvost L, Amberg M, Giraud F, Félician O, Danna J, Kavounoudias A. Beyond sense-specific processing: decoding texture in the brain from touch and sonified movement. iScience 2023; 26:107965. [PMID: 37810223 PMCID: PMC10551894 DOI: 10.1016/j.isci.2023.107965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 07/08/2023] [Accepted: 09/15/2023] [Indexed: 10/10/2023] Open
Abstract
Texture, a fundamental object attribute, is perceived through multisensory information, including touch and auditory cues. Coherent percepts may rely on texture representations that are shared across the senses in the brain. To test this hypothesis, we delivered haptic textures coupled with a sound synthesizer that generated textural sounds in real time. Participants completed roughness estimation tasks with haptic, auditory, or bimodal cues in an MRI scanner. Somatosensory, auditory, and visual cortices were all activated during both haptic and auditory exploration, challenging the traditional view that primary sensory cortices are sense-specific. Furthermore, audio-tactile integration was found in the secondary somatosensory cortex (S2) and the primary auditory cortex. Multivariate analyses revealed shared spatial activity patterns in primary motor and somatosensory cortices that discriminated texture across both modalities. This study indicates that primary areas and S2 hold a versatile representation of multisensory textures, which has significant implications for how the brain processes multisensory cues to interact more efficiently with our environment.
Affiliation(s)
- C. Landelle: McGill University, McConnell Brain Imaging Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, Montreal, QC, Canada; Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France
- J. Caron-Guyon: Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France; University of Louvain, Institute for Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), Louvain Bionics Center, Crossmodal Perception and Plasticity Laboratory, Louvain-la-Neuve, Belgium
- B. Nazarian: Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- J.L. Anton: Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- J. Sein: Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- L. Pruvost: Aix-Marseille Université, CNRS, Perception, Représentations, Image, Son, Musique, PRISM UMR 7061, Marseille, France
- M. Amberg: Université Lille, Laboratoire d'Electrotechnique et d'Electronique de Puissance, EA 2697-L2EP, Lille, France
- F. Giraud: Université Lille, Laboratoire d'Electrotechnique et d'Electronique de Puissance, EA 2697-L2EP, Lille, France
- O. Félician: Aix Marseille Université, INSERM, Institut des Neurosciences des Systèmes, INS UMR 1106, Marseille, France
- J. Danna: Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France; Université de Toulouse, CNRS, Laboratoire Cognition, Langues, Langage, Ergonomie, CLLE UMR5263, Toulouse, France
- A. Kavounoudias: Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France

30
Sandhaeger F, Siegel M. Testing the generalization of neural representations. Neuroimage 2023; 278:120258. [PMID: 37429371 PMCID: PMC10443234 DOI: 10.1016/j.neuroimage.2023.120258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 05/27/2023] [Accepted: 06/28/2023] [Indexed: 07/12/2023] Open
Abstract
Multivariate analysis methods are widely used in neuroscience to investigate the presence and structure of neural representations. Representational similarities across time or contexts are often investigated using pattern generalization, e.g. by training and testing multivariate decoders in different contexts, or by comparable pattern-based encoding methods. It is, however, unclear what conclusions can validly be drawn about the underlying neural representations when significant pattern generalization is found in mass signals such as LFP, EEG, MEG, or fMRI. Using simulations, we show how signal mixing and dependencies between measurements can drive significant pattern generalization even though the true underlying representations are orthogonal. We suggest that, given an accurate estimate of the pattern generalization expected under identical representations, it is nonetheless possible to test meaningful hypotheses about the generalization of neural representations. We offer such an estimate of the expected magnitude of pattern generalization and demonstrate how this measure can be used to assess similarities and differences of neural representations across time and contexts.
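The core caveat of the paper can be reproduced in a few lines: when orthogonal source-level patterns are linearly mixed into a smaller number of sensors, a decoder trained in one context can score far from chance in the other. The simulation below is a simplified illustration with made-up dimensions, not the authors' simulation code.

```python
# Minimal sketch: orthogonal source patterns, mixed into few sensors, can
# yield cross-context decoding far from chance (dimensions invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

def spurious_generalization(seed, n_trials=400, n_sources=20, n_sensors=5):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n_trials)
    signs = 2.0 * y - 1.0
    # Contexts A and B use disjoint (hence orthogonal) source sets.
    pat_a = np.r_[np.ones(n_sources // 2), np.zeros(n_sources // 2)]
    pat_b = np.r_[np.zeros(n_sources // 2), np.ones(n_sources // 2)]
    mixing = rng.normal(size=(n_sources, n_sensors))  # shared forward model
    X_a = np.outer(signs, pat_a) @ mixing + rng.normal(size=(n_trials, n_sensors))
    X_b = np.outer(signs, pat_b) @ mixing + rng.normal(size=(n_trials, n_sensors))
    clf = LogisticRegression(max_iter=1000).fit(X_a, y)
    return abs(clf.score(X_b, y) - 0.5)

# Averaged over simulated "participants", cross-context accuracy deviates
# from chance although the true representations do not overlap.
print(np.mean([spurious_generalization(s) for s in range(20)]))
```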
Affiliation(s)
- Florian Sandhaeger: Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany; Centre for Integrative Neuroscience, University of Tübingen, Germany; MEG Center, University of Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Germany
- Markus Siegel: Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany; Centre for Integrative Neuroscience, University of Tübingen, Germany; MEG Center, University of Tübingen, Germany

31
Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E. Spoken language processing activates the primary visual cortex. PLoS One 2023; 18:e0289671. [PMID: 37566582 PMCID: PMC10420367 DOI: 10.1371/journal.pone.0289671] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 07/24/2023] [Indexed: 08/13/2023] Open
Abstract
Primary visual cortex (V1) is generally thought of as a low-level sensory area that primarily processes basic visual features. Although there is evidence for multisensory effects on its activity, these are typically found for the processing of simple sounds and their properties, for example, spatially or temporally congruent simple sounds. However, in congenitally blind individuals, V1 is involved in language processing, with no evidence of major changes in anatomical connectivity that could explain this seemingly drastic functional change. This is at odds with current accounts of neural plasticity, which emphasize the role of connectivity and conserved function in determining a neural tissue's role even after atypical early experiences. To reconcile what appears to be unprecedented functional reorganization with known limits on plasticity, we tested whether V1's multisensory roles include responses to spoken language in sighted individuals. Using fMRI, we found that V1 in normally sighted individuals was indeed activated by comprehensible spoken sentences compared with an incomprehensible reversed-speech control condition, and more strongly so in the left than in the right hemisphere. Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery. Last, this activation did not stem from increased attention to the auditory onset of words, nor was it correlated with attentional arousal ratings, making general attention accounts unlikely. Together these findings suggest that V1 responds to spoken language even in sighted individuals, reflecting the binding of multisensory high-level signals, potentially to predict visual input. This capability might be the basis for the strong V1 language activation observed in people born blind, reaffirming the notion that plasticity is guided by pre-existing connectivity and abilities in the typically developed brain.
Affiliation(s)
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Xiaoying Wang: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Elissa L. Newport: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Yanchao Bi: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ella Striem-Amit: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America; Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America

32
Lee J, Park S. Multi-modal representation of the size of space in the human brain. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.24.550343. [PMID: 37546991 PMCID: PMC10402083 DOI: 10.1101/2023.07.24.550343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds depicting small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. Using a multi-voxel pattern classifier, we asked whether the two sizes of space could be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus (AG) and the right inferior frontal gyrus (IFG) pars opercularis had modality-integrated representation, displaying sensitivity to the match in spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Affiliation(s)
- Jaeeun Lee: Department of Psychology, University of Minnesota, Minneapolis, MN
- Soojin Park: Department of Psychology, Yonsei University, Seoul, South Korea

33
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. [PMID: 36562994 PMCID: PMC10183742 DOI: 10.1093/cercor/bhac501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 11/29/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from behavioral musicality ratings, as well as with models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas the right intraparietal sulcus (IPS) correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
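For readers unfamiliar with Representational Similarity Analysis, the following sketch shows the basic computation implied above: correlating an ROI's neural representational dissimilarity matrix (RDM) with a model RDM built from behavioral musicality ratings. Stimulus counts and all data are simulated placeholders, not the study's materials.

```python
# Minimal sketch: correlate an ROI's neural RDM with a behavioral
# musicality model RDM (stimuli and data simulated).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_sequences, n_voxels = 24, 120
roi_patterns = rng.normal(size=(n_sequences, n_voxels))  # one pattern each
musicality = rng.uniform(1, 7, n_sequences)              # mean ratings

neural_rdm = pdist(roi_patterns, metric="correlation")
model_rdm = pdist(musicality[:, None], metric="euclidean")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```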
Affiliation(s)
- Gennadiy Gurariy: Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall: School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg: Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States

34
Liu S, You B, Zhang X, Shaw A, Chen H, Jackson T. Individual Differences in Pain Catastrophizing and Regional Gray Matter Volume Among Community-dwelling Adults With Chronic Pain: A Voxel-based Morphology Study. Clin J Pain 2023; 39:209-216. [PMID: 36920221 DOI: 10.1097/ajp.0000000000001103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Accepted: 02/01/2023] [Indexed: 03/16/2023]
Abstract
OBJECTIVES Elevations in pain catastrophizing (PC) are associated with more severe pain, emotional distress, and impairment within samples with chronic pain. However, the brain structure correlates underlying individual differences in PC are not well understood. This study assessed links between regional gray matter volume (GMV) and individual differences in PC within a large mixed chronic pain sample. MATERIALS AND METHODS Chinese adult community dwellers with chronic pain of at least 3 months' duration (101 women and 59 men) completed self-report measures of background characteristics, pain severity, and depression, along with a widely validated PC questionnaire, as well as a structural magnetic resonance imaging scan featuring voxel-based morphology to assess regional GMV correlates of PC. RESULTS After controlling for demographic correlates of PC, pain severity, and depression, higher PC scores had a significant, unique association with lower GMV levels in the inferior temporal area of the right fusiform gyrus, a region previously implicated in emotion regulation. DISCUSSION GMV deficits, particularly in right temporal-occipital emotion regulation regions, correspond to high levels of PC among individuals with chronic pain.
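The group-level analysis described above amounts to a voxelwise regression of GMV on PC scores with nuisance covariates. Below is a minimal mass-univariate GLM sketch with simulated data; the covariate set, dimensions, and values are illustrative assumptions rather than the study's pipeline.

```python
# Minimal sketch: mass-univariate regression of GMV on pain catastrophizing
# with nuisance covariates (all data and dimensions simulated).
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_voxels = 160, 5000
gmv = rng.normal(size=(n_subj, n_voxels))      # smoothed, modulated GMV
pc = rng.normal(size=n_subj)                   # pain catastrophizing score
covars = rng.normal(size=(n_subj, 4))          # e.g. age, sex, pain, depression

X = np.column_stack([np.ones(n_subj), pc, covars])
beta, *_ = np.linalg.lstsq(X, gmv, rcond=None)  # (n_regressors, n_voxels)
resid = gmv - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (n_subj - X.shape[1])
se_pc = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_pc = beta[1] / se_pc  # voxelwise t-map for the unique PC effect
print(t_pc.shape)
```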
Affiliation(s)
- Shuyang Liu: School of Psychology, Southwest University, Chongqing
- BeiBei You: School of Nursing, Guizhou Medical University, Guizhou
- Xin Zhang: School of Psychology, Southwest University, Chongqing
- Amy Shaw: Department of Psychology, University of Macau, Taipa, Macau, S.A.R., China
- Hong Chen: School of Psychology, Southwest University, Chongqing
- Todd Jackson: Department of Psychology, University of Macau, Taipa, Macau, S.A.R., China

35
Avery JA, Carrington M, Martin A. A common neural code for representing imagined and inferred tastes. Prog Neurobiol 2023; 223:102423. [PMID: 36805499 PMCID: PMC10040442 DOI: 10.1016/j.pneurobio.2023.102423] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 01/11/2023] [Accepted: 02/15/2023] [Indexed: 02/18/2023]
Abstract
Inferences about the taste of foods are a key aspect of our everyday experience of food choice. Despite this, gustatory mental imagery is a relatively under-studied aspect of our mental lives. In the present study, we examined subjects during high-field fMRI as they actively imagined basic tastes and subsequently viewed pictures of foods dominant in those specific taste qualities. Imagined tastes elicited activity in the bilateral dorsal mid-insula, one of the primary cortical regions responsive to the experience of taste. In addition, within this region we reliably decoded imagined tastes according to their dominant quality - sweet, sour, or salty - thus indicating that, like actual taste, imagined taste activates distinct quality-specific neural patterns. Using a cross-task decoding analysis, we found that the neural patterns for imagined tastes and food pictures in the mid-insula were reliably similar and quality-specific, suggesting a common code for representing taste quality regardless of whether explicitly imagined or automatically inferred when viewing food. These findings have important implications for our understanding of the mechanisms of mental imagery and the multimodal nature of presumably primary sensory brain regions like the dorsal mid-insula.
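The cross-task decoding analysis described above can be sketched as training a classifier on patterns from one task and testing it on the other. The following simulation assumes a shared quality-specific code across tasks; voxel counts, noise levels, and class structure are invented for illustration.

```python
# Minimal sketch: train on imagined-taste patterns, test on patterns evoked
# by food pictures, assuming a shared quality code (data simulated).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n_per_class, n_voxels = 40, 200
qualities = np.repeat([0, 1, 2], n_per_class)  # sweet, sour, salty

prototypes = rng.normal(size=(3, n_voxels))    # quality-specific code

def sample_task(noise_scale):
    noise = rng.normal(scale=noise_scale, size=(qualities.size, n_voxels))
    return prototypes[qualities] + noise

X_imagined = sample_task(2.0)   # imagery runs
X_pictures = sample_task(2.5)   # food-picture runs

clf = LinearSVC().fit(X_imagined, qualities)
print(f"cross-task accuracy: {clf.score(X_pictures, qualities):.2f}")  # chance = 0.33
```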
Affiliation(s)
- Jason A Avery: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, United States
- Madeline Carrington: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, United States
- Alex Martin: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, United States

36
Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023; 273:120093. [PMID: 37028733 DOI: 10.1016/j.neuroimage.2023.120093] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 03/31/2023] [Accepted: 04/04/2023] [Indexed: 04/08/2023] Open
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible: the neurophysiological processes shaping these associations could commence in low-level sensory regions, or they may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEPs) to directly probe this question, focusing on the associations between pitch and the visual features of size, hue, or chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Beyond this, our study provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli in the future.
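As a concrete illustration of the SSVEP logic, the sketch below extracts amplitude at an assumed visual tagging frequency from simulated occipital signals and compares congruent with incongruent pitch-size trials; the tagging frequency, trial counts, and effect size are all invented for illustration.

```python
# Minimal sketch: SSVEP amplitude at an assumed 12 Hz tagging frequency,
# congruent vs. incongruent pitch-size trials (signals simulated).
import numpy as np

fs, f_tag, dur = 500.0, 12.0, 4.0          # sampling rate, tag freq, seconds
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(9)

def ssvep_amp(trials):
    spectrum = np.abs(np.fft.rfft(trials, axis=-1)).mean(axis=0)
    freqs = np.fft.rfftfreq(trials.shape[-1], 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

# Invented effect: slightly larger 12 Hz response on congruent trials.
congruent = 1.2 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1, (60, t.size))
incongruent = 1.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1, (60, t.size))
print(ssvep_amp(congruent), ssvep_amp(incongruent))
```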
37
Westlin C, Theriault JE, Katsumi Y, Nieto-Castanon A, Kucyi A, Ruf SF, Brown SM, Pavel M, Erdogmus D, Brooks DH, Quigley KS, Whitfield-Gabrieli S, Barrett LF. Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends Cogn Sci 2023; 27:246-257. [PMID: 36739181 PMCID: PMC10012342 DOI: 10.1016/j.tics.2022.12.015] [Citation(s) in RCA: 58] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 12/23/2022] [Accepted: 12/29/2022] [Indexed: 02/05/2023]
Abstract
Neuroimaging research has been at the forefront of concerns regarding the failure of experimental findings to replicate. In the study of brain-behavior relationships, past failures to find replicable and robust effects have been attributed to methodological shortcomings. Methodological rigor is important, but there are other overlooked possibilities: most published studies share three foundational assumptions, often implicitly, that may be faulty. In this paper, we consider the empirical evidence from human brain imaging and the study of non-human animals that calls each foundational assumption into question. We then consider the opportunities for a robust science of brain-behavior relationships that await if scientists ground their research efforts in revised assumptions supported by current empirical evidence.
Affiliation(s)
- Jordan E Theriault: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yuta Katsumi: Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Alfonso Nieto-Castanon: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aaron Kucyi: Department of Psychological and Brain Sciences, Drexel University, Philadelphia, PA, USA
- Sebastian F Ruf: Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA
- Sarah M Brown: Department of Computer Science and Statistics, University of Rhode Island, Kingston, RI, USA
- Misha Pavel: Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Deniz Erdogmus: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Dana H Brooks: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Karen S Quigley: Department of Psychology, Northeastern University, Boston, MA, USA
- Lisa Feldman Barrett: Department of Psychology, Northeastern University, Boston, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA

38
Top-down specific preparatory activations for selective attention and perceptual expectations. Neuroimage 2023; 271:119960. [PMID: 36854351 DOI: 10.1016/j.neuroimage.2023.119960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 02/17/2023] [Accepted: 02/20/2023] [Indexed: 03/01/2023] Open
Abstract
Proactive cognition brain models are mainstream nowadays. Within these, preparation is understood as an endogenous, top-down function that takes place prior to the actual perception of a stimulus and improves subsequent behavior. Neuroimaging has shown the existence of such preparatory activity separately in different cognitive domains; however, no research to date has sought to uncover their potential similarities and differences. Two of these, often confounded in the literature, are Selective Attention (information relevance) and Perceptual Expectation (information probability). We used EEG to characterize the mechanisms by which attention and expectation pre-activate specific contents. In different blocks, participants were cued to the relevance or to the probability of target categories, faces vs. names, in a gender discrimination task. Multivariate Pattern Analysis (MVPA) and Representational Similarity Analysis (RSA) during the preparation window showed that both manipulations led to a significant, ramping-up prediction of the relevant or expected target category. However, classifiers trained with data from one condition did not generalize to the other, indicating the existence of unique anticipatory neural patterns. In addition, a canonical template tracking procedure showed stronger anticipatory perceptual reinstatement for relevance than for expectation blocks. Overall, the results indicate that preparation during attention and expectation acts through distinguishable neural mechanisms. These findings have important implications for current models of brain functioning, as they are a first step towards characterizing and dissociating the neural mechanisms involved in top-down anticipatory processing.
39
Linton P. Minimal theory of 3D vision: new approach to visual scale and visual shape. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210455. [PMID: 36511406 PMCID: PMC9745885 DOI: 10.1098/rstb.2021.0455] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Accepted: 07/20/2022] [Indexed: 12/15/2022] Open
Abstract
Since Kepler and Descartes in the early 1600s, vision science has been committed to a triangulation model of stereo vision. But in the early 1800s, we realized that disparities are responsible for stereo vision, and we have spent the past 200 years trying to shoehorn disparities back into the triangulation account. The first part of this article argues that this is a mistake, and that stereo vision is a solution to a different problem: the eradication of rivalry between the two retinal images, rather than the triangulation of objects in space. This leads to a 'minimal theory of 3D vision', where 3D vision is no longer tied to estimating the scale, shape, and direction of objects in the world. The second part of this article then asks whether the other aspects of 3D vision, which go beyond stereo vision, really operate at the same level of visual experience as stereo vision. I argue they do not. Whilst we want a theory of real-world 3D vision, the literature risks giving us a theory of picture perception instead. I therefore argue for a two-stage theory, where our purely internal 'minimal' 3D percept (from stereo vision) is linked to the world through cognition. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA
- Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
40
Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection. Commun Biol 2023; 6:12. [PMID: 36604455 PMCID: PMC9816120 DOI: 10.1038/s42003-022-04335-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 12/01/2022] [Indexed: 01/07/2023] Open
Abstract
Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants who performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases.
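The decoded quantities are the standard equal-variance Signal Detection Theory parameters; the formulas themselves are textbook, although the paper estimates them time-resolved from MEG patterns rather than from button presses. A worked sketch:

```python
# Textbook equal-variance SDT estimates; the paper decodes these quantities
# time-resolved from MEG rather than computing them from behavior alone.
from scipy.stats import norm

def sdt_params(hits, misses, fas, crs):
    """d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2.
    A log-linear correction keeps z finite when rates hit 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (fas + 0.5) / (fas + crs + 1.0)
    zh, zfa = norm.ppf(h), norm.ppf(fa)
    return zh - zfa, -(zh + zfa) / 2.0

dprime, criterion = sdt_params(hits=80, misses=20, fas=30, crs=70)
print(f"d' = {dprime:.2f}, c = {criterion:.2f}")
# A sound that raises both hits and false alarms lowers c (a liberal bias)
# without necessarily changing d'.
```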
41
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435 PMCID: PMC10151026 DOI: 10.1016/j.visres.2022.108131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
42
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038 PMCID: PMC9842891 DOI: 10.1002/hbm.26090] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/05/2022] [Accepted: 09/07/2022] [Indexed: 01/25/2023] Open
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in occipital areas during the spatial bisection task, and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this aspect also selectively modulates cortical activity in response to multisensory stimuli.
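The analysis hinges on quantifying the early component in a fixed 50-90 ms window after the second audiovisual stimulus. A minimal sketch of that window averaging (sampling rate, channel indices, and data are placeholder assumptions, not the authors' pipeline):

```python
# Sketch: mean amplitude of the early ERP component in the 50-90 ms window,
# assuming erp is (n_channels, n_times) trial-averaged data; all values toy.
import numpy as np

rng = np.random.default_rng(7)
times = np.arange(-0.1, 0.5, 0.002)        # 500 Hz sampling (assumed)
erp = rng.normal(size=(64, times.size))    # placeholder trial-averaged ERP
win = (times >= 0.050) & (times <= 0.090)  # the paper's 50-90 ms window
occipital = [60, 61, 62, 63]               # hypothetical occipital channel indices
early_amp = erp[occipital][:, win].mean()
print(f"mean occipital amplitude, 50-90 ms: {early_amp:.3f} (a.u.)")
```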
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
43
Bailey KM, Giordano BL, Kaas AL, Smith FW. Decoding sounds depicting hand-object interactions in primary somatosensory cortex. Cereb Cortex 2022; 33:3621-3635. [PMID: 36045002 DOI: 10.1093/cercor/bhac296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 05/24/2022] [Accepted: 07/07/2022] [Indexed: 11/13/2022] Open
Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from connections both within and across modalities. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
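A key design point is that the hand-sensitive voxels are defined from an independent tactile localizer, so voxel selection cannot bias the decoding result. A minimal sketch of that logic with placeholder data (all names hypothetical):

```python
# Sketch of the selection logic: hand-sensitive voxels come from an
# independent tactile localizer, so selection cannot inflate the decoding.
# All arrays are placeholders with hypothetical names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 500))          # (n_trials, n_voxels) SI patterns
y = rng.integers(0, 2, 120)              # hand-object sounds vs. pure tones
localizer_t = rng.normal(size=500)       # t-values from the tactile localizer

hand_voxels = localizer_t > 2.0          # independent voxel selection
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      X[:, hand_voxels], y, cv=5).mean()
print(f"decoding accuracy in hand-sensitive voxels: {acc:.2f} (chance = 0.50)")
```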
Affiliation(s)
- Kerri M Bailey
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Bruno L Giordano
- Institut des Neurosciences de La Timone, CNRS UMR 7289, Université Aix-Marseille, Marseille, France
- Amanda L Kaas
- Department of Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
44
Yoneta N, Watanabe H, Shimojo A, Takano K, Saito T, Yagyu K, Shiraishi H, Yokosawa K, Boasen J. Magnetoencephalography Hyperscanning Evidence of Differing Cognitive Strategies Due to Social Role During Auditory Communication. Front Neurosci 2022; 16:790057. [PMID: 35983225 PMCID: PMC9380591 DOI: 10.3389/fnins.2022.790057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 06/23/2022] [Indexed: 11/30/2022] Open
Abstract
Auditory communication is an essential form of human social interaction. However, the intra-brain cortical-oscillatory drivers of auditory communication exchange remain relatively unexplored. We used improvisational music performance to simulate and capture the creativity and turn-taking dynamics of natural auditory communication. Using magnetoencephalography (MEG) hyperscanning in musicians, we targeted brain activity during periods of music communication imagery, and separately analyzed theta (5–7 Hz), alpha (8–13 Hz), and beta (15–29 Hz) source-level activity using a within-subjects, two-factor approach that considered the assigned social role of the subject (leader or follower) and whether communication responses were improvisational (yes or no). Theta activity related to improvisational communication and social role significantly interacted in the left isthmus cingulate cortex. Social role was furthermore differentiated by pronounced occipital alpha and beta amplitude increases, suggestive of working-memory retention engagement in Followers but not Leaders. The results offer compelling evidence, for both musical and social neuroscience, that the cognitive strategies, and correspondingly the memory- and attention-associated oscillatory brain activities, of interlocutors during communication differ according to their social role/hierarchy, indicating that social role/hierarchy needs to be controlled for in social neuroscience research.
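Band-limited amplitude of the kind compared between Leaders and Followers is commonly extracted with a zero-phase band-pass filter plus a Hilbert envelope. A generic sketch of that step (toy data, not the authors' MEG source pipeline):

```python
# Generic sketch of band-limited amplitude (e.g., alpha, 8-13 Hz) via a
# zero-phase band-pass filter and the Hilbert envelope; toy data throughout.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # assumed sampling rate
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(8)
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # toy source trace

b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, trace)                 # zero-phase alpha-band signal
envelope = np.abs(hilbert(alpha))             # instantaneous amplitude
print(f"mean alpha amplitude: {envelope.mean():.3f}")
```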
Affiliation(s)
- Nano Yoneta
- Graduate School of Health Sciences, Hokkaido University, Sapporo, Japan
- Hayato Watanabe
- Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Department of Child Studies, Toyooka Junior College, Toyooka, Japan
- Department of Child and Adolescent Psychiatry, Hokkaido University Hospital, Sapporo, Japan
- Atsushi Shimojo
- Department of Child and Adolescent Psychiatry, Hokkaido University Hospital, Sapporo, Japan
- Kazuyoshi Takano
- Graduate School of Health Sciences, Hokkaido University, Sapporo, Japan
- Takuya Saito
- Department of Child and Adolescent Psychiatry, Hokkaido University Hospital, Sapporo, Japan
- Kazuyori Yagyu
- Department of Child and Adolescent Psychiatry, Hokkaido University Hospital, Sapporo, Japan
- Hideaki Shiraishi
- Department of Pediatrics, Hokkaido University Hospital, Sapporo, Japan
- Koichi Yokosawa
- Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Jared Boasen
- Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Tech3Lab, HEC Montréal, Montréal, QC, Canada
45
Musz E, Loiotile R, Chen J, Cusack R, Bedny M. Naturalistic stimuli reveal a sensitive period in cross modal responses of visual cortex: Evidence from adult-onset blindness. Neuropsychologia 2022; 172:108277. [PMID: 35636634 PMCID: PMC9648859 DOI: 10.1016/j.neuropsychologia.2022.108277] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 04/28/2022] [Accepted: 05/25/2022] [Indexed: 01/31/2023]
Abstract
How do life experiences impact cortical function? In people who are born blind, the "visual" cortices are recruited during nonvisual tasks, such as Braille reading and sound localization. Do visual cortices have a latent capacity to respond to nonvisual information throughout the lifespan? Alternatively, is there a sensitive period of heightened plasticity that makes visual cortex repurposing especially possible during childhood? To gain insight into these questions, we leveraged meaningful naturalistic auditory stimuli to simultaneously engage a broad range of cognitive domains and quantify cross-modal responses across congenitally blind (n = 22), adult-onset blind (vision loss at >18 years of age, n = 14) and sighted (n = 22) individuals. During fMRI scanning, participants listened to two types of meaningful naturalistic auditory stimuli: excerpts from movies and a spoken narrative. As controls, participants heard the same narrative with the sentences shuffled and the narrative played backwards (i.e., meaningless sounds). We correlated the voxel-wise timecourses of different participants within condition and group. For all groups, all stimulus conditions induced synchrony in auditory cortex, while only the narrative stimuli synchronized responses in higher-cognitive fronto-parietal and temporal regions. As previously reported, inter-subject synchrony in visual cortices was higher in congenitally blind than in sighted blindfolded participants, and this between-group difference was particularly pronounced for meaningful stimuli (movies and narrative). Critically, visual cortex synchrony was no higher in adult-onset blind than in sighted blindfolded participants and did not increase with blindness duration. Sensitive-period plasticity thus enables cross-modal repurposing in visual cortices.
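The synchrony measure here is inter-subject correlation (ISC) of voxel-wise timecourses. A minimal leave-one-out version for a single voxel or region, on placeholder data:

```python
# Minimal leave-one-out inter-subject correlation (ISC) for one voxel/region,
# assuming data is (n_subjects, n_timepoints); values here are placeholders.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(22, 300))             # e.g., 22 subjects, 300 TRs

isc = []
for s in range(data.shape[0]):
    others = np.delete(data, s, axis=0).mean(axis=0)  # mean of remaining subjects
    isc.append(np.corrcoef(data[s], others)[0, 1])
print(f"group ISC: {np.mean(isc):.3f}")
# The paper computes this within condition and group; higher visual-cortex ISC
# for meaningful stimuli is the congenital-blindness signature.
```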
Affiliation(s)
- Elizabeth Musz
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Rita Loiotile
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Janice Chen
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Rhodri Cusack
- Trinity College Institute of Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
46
De Winne J, Devos P, Leman M, Botteldooren D. With No Attention Specifically Directed to It, Rhythmic Sound Does Not Automatically Facilitate Visual Task Performance. Front Psychol 2022; 13:894366. [PMID: 35756201 PMCID: PMC9226390 DOI: 10.3389/fpsyg.2022.894366] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/19/2022] [Indexed: 11/22/2022] Open
Abstract
In a century in which humans and machines, whether powered by artificial intelligence or not, increasingly work together, it is of interest to understand human processing of multi-sensory stimuli in relation to attention and working memory. This paper explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multi-sensory information processing. In turn, this can make the interaction between humans, or between machines and humans, more engaging, rewarding and activating. For this purpose, a novel working-memory paradigm was developed in which participants are presented with a series of five target digits randomly interchanged with five distractor digits. Their goal is to remember the target digits and recall them orally. Depending on the condition, support is provided by audio and/or rhythm. It was expected that sound would lead to better performance, that this effect would differ between rhythmic and non-rhythmic sound, and that there would be some variability across participants. The experimental data were analyzed with classical statistics, and predictive models were also developed to predict outcomes from a range of input variables related to the experiment and the participant. The effect of auditory support was confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, a music education did not significantly affect performance and even tended toward a negative effect. To better understand the underlying processes of attention, brain activation data, e.g., by means of electroencephalography (EEG), should also be recorded; this is left to future work.
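The abstract does not specify the predictive models. Purely as an illustration of the reported approach, a cross-validated logistic regression predicting recall success from condition and participant covariates might look as follows (all variable names and data are hypothetical placeholders, not the authors' model):

```python
# Illustrative only: cross-validated logistic regression predicting recall
# success from condition and participant covariates; placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),    # audio support present (0/1)
    rng.integers(0, 2, n),    # rhythmic vs. non-rhythmic sound
    rng.normal(size=n),       # perceived task difficulty (z-scored)
    rng.integers(0, 2, n),    # music education (0/1)
])
y = rng.integers(0, 2, n)     # target digits recalled correctly (0/1)

acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```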
Affiliation(s)
- Jorg De Winne
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium; Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Paul Devos
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
- Marc Leman
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Dick Botteldooren
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
47
Johansson C, Folgerø PO. Is Reduced Visual Processing the Price of Language? Brain Sci 2022; 12:brainsci12060771. [PMID: 35741656 PMCID: PMC9221435 DOI: 10.3390/brainsci12060771] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/06/2022] [Accepted: 06/08/2022] [Indexed: 02/01/2023] Open
Abstract
We suggest a later timeline for full language capabilities in Homo sapiens, placing the emergence of language over 200,000 years after the emergence of our species. The late Paleolithic period saw several significant changes. Homo sapiens became more gracile and gradually lost significant brain volume. Detailed realistic cave paintings disappeared completely, and iconic/symbolic ones appeared at other sites. This may indicate a shift in perceptual abilities, away from an accurate perception of the present. Language in modern humans interacts with vision; one example is the McGurk effect. Studies show that artistic abilities may improve when language-related brain areas are damaged or temporarily knocked out. Language relies on many pre-existing non-linguistic functions. We suggest that an overwhelming flow of perceptual information, vision in particular, was an obstacle to language, as is sometimes implied in autism with relative language impairment. We systematically review the recent research literature investigating the relationship between language and perception. We see homologues of language-relevant brain functions predating language. Recent findings show brain lateralization for communicative gestures in other primates without language, supporting the idea that a language-ready brain may be overwhelmed by raw perception, thus blocking overt language from evolving. We find support in converging evidence for a change in neural organization away from raw perception, thus pushing the emergence of language closer in time. A recent origin of language makes it possible to investigate the genetic origins of language.
48
Brang D, Plass J, Sherman A, Stacey WC, Wasade VS, Grabowecky M, Ahn E, Towle VL, Tao JX, Wu S, Issa NP, Suzuki S. Visual cortex responds to sound onset and offset during passive listening. J Neurophysiol 2022; 127:1547-1563. [PMID: 35507478 DOI: 10.1152/jn.00164.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and whether sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. Additionally, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that, in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
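The entrainment null result amounts to finding no excess spectral power at the sounds' amplitude-modulation frequency. A generic sketch of that test on a toy trace (modulation rate, sampling rate, and data are assumptions):

```python
# Sketch of the entrainment test: is there excess spectral power at the
# sound's amplitude-modulation frequency? Rates and data are assumptions.
import numpy as np

fs, f_mod = 1000.0, 3.0                       # assumed sampling and AM rates
t = np.arange(0.0, 8.0, 1.0 / fs)
rng = np.random.default_rng(9)
lfp = rng.normal(size=t.size)                 # toy visual-electrode trace

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
power = np.abs(np.fft.rfft(lfp)) ** 2
at_mod = power[np.argmin(np.abs(freqs - f_mod))]
nearby = (np.abs(freqs - f_mod) > 0.5) & (np.abs(freqs - f_mod) < 2.0)
ratio = at_mod / power[nearby].mean()
print(f"power at {f_mod:.0f} Hz vs. neighbors: {ratio:.2f} (about 1 = no entrainment)")
```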
Affiliation(s)
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- John Plass
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Aleksandra Sherman
- Department of Cognitive Science, Occidental College, Los Angeles, CA, United States
- William C Stacey
- Department of Neurology, University of Michigan, Ann Arbor, MI, United States
- Marcia Grabowecky
- Department of Psychology, Northwestern University, Evanston, IL, United States
- EunSeon Ahn
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Vernon L Towle
- Department of Neurology, The University of Chicago, Chicago, IL, United States
- James X Tao
- Department of Neurology, The University of Chicago, Chicago, IL, United States
- Shasha Wu
- Department of Neurology, The University of Chicago, Chicago, IL, United States
- Naoum P Issa
- Department of Neurology, The University of Chicago, Chicago, IL, United States
- Satoru Suzuki
- Department of Psychology, Northwestern University, Evanston, IL, United States
49
Sommer VR, Sander MC. Contributions of representational distinctiveness and stability to memory performance and age differences. Aging Neuropsychol Cogn 2022; 29:443-462. [PMID: 34939904 DOI: 10.1080/13825585.2021.2019184] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
Long-standing theories of cognitive aging suggest that memory decline is associated with age-related differences in the way information is neurally represented. Multivariate pattern similarity analyses have enabled researchers to take a representational perspective on brain and cognition, and to study the properties of neural representations that support successful episodic memory. Two representational properties have been identified as crucial for memory performance, namely the distinctiveness and the stability of neural representations. Here, we review studies that used multivariate analysis tools for different neuroimaging techniques to clarify how these representational properties relate to memory performance across adulthood. While most evidence on age differences in neural representations concerns stimulus category information, recent studies demonstrated that item-level stability and specificity of activity patterns in particular are linked to memory success and decline during aging. Overall, multivariate methods offer a versatile tool for understanding age differences in the neural representations underlying memory.
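The two properties have straightforward pattern-similarity definitions: stability is the correlation of an item's activity pattern across repetitions, and distinctiveness is how much that same-item similarity exceeds similarity to other items. A minimal sketch on toy data:

```python
# Sketch of the two representational properties, assuming patterns is
# (n_items, n_repetitions, n_features); toy data throughout.
import numpy as np

rng = np.random.default_rng(4)
n_items = 40
patterns = rng.normal(size=(n_items, 2, 100))

def r(u, v):
    return np.corrcoef(u, v)[0, 1]

# Stability: similarity of the same item's pattern across repetitions.
stability = np.mean([r(patterns[i, 0], patterns[i, 1]) for i in range(n_items)])

# Distinctiveness: same-item similarity minus mean between-item similarity.
between = np.mean([r(patterns[i, 0], patterns[j, 1])
                   for i in range(n_items) for j in range(n_items) if i != j])
print(f"stability = {stability:.3f}, distinctiveness = {stability - between:.3f}")
```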
Affiliation(s)
- Verena R Sommer
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Myriam C Sander
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
50
Neural reactivation and judgements of vividness reveal separable contributions to mnemonic representation. Neuroimage 2022; 255:119205. [PMID: 35427774 DOI: 10.1016/j.neuroimage.2022.119205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 04/04/2022] [Accepted: 04/08/2022] [Indexed: 11/22/2022] Open
Abstract
Mnemonic representations vary in fidelity, sharpness, and strength, qualities that can be examined using both introspective judgements of mental states and objective measures of brain activity. Subjective and objective measures are both valid ways of "reading out" the content of someone's internal mnemonic states, each with different strengths and weaknesses. St-Laurent and colleagues (2015) compared the neural correlates of memory vividness ratings with patterns of neural reactivation evoked during memory recall and found considerable overlap between the two, suggesting a common neural basis underlying these different markers of representational quality. Here we extended this work with meta-analytic methods, pooling four neuroimaging datasets to contrast the neural substrates of neural reactivation with those of vividness judgements. While reactivation and vividness judgements correlated positively with one another and were associated with common univariate activity in the dorsal attention network and anterior hippocampus, some notable differences were also observed. Vividness judgements were tied to stronger activation in the striatum and dorsal attention network, together with activity suppression in default mode network nodes. We also observed a trend for reactivation to be more closely associated with early visual cortex activity. A mediation analysis found support for the hypothesis that neural reactivation is necessary for memory vividness, with activity in the anterior hippocampus associated with greater reactivation. Our results suggest that neural reactivation and vividness judgements reflect common mnemonic processes but differ in the extent to which they engage effortful, attentional processes. Additionally, the similarity between reactivation and vividness appears to arise partly through hippocampal engagement during memory retrieval.
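The mediation claim (hippocampal activity to reactivation to vividness) can be tested with the classic indirect-effect product a*b and a bootstrap confidence interval. A self-contained sketch on simulated data (not the authors' exact model):

```python
# Sketch of a simple mediation test: x -> m -> y, indirect effect a*b
# with a percentile bootstrap CI; all data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)                  # e.g., anterior hippocampus activity
m = 0.5 * x + rng.normal(size=n)        # mediator: neural reactivation
y = 0.5 * m + rng.normal(size=n)        # outcome: vividness judgement

def ols_slopes(X, out):
    """Least-squares coefficients with an intercept; returns slopes only."""
    Xd = np.column_stack([np.ones(len(out)), *X])
    return np.linalg.lstsq(Xd, out, rcond=None)[0][1:]

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a = ols_slopes([x[i]], m[i])[0]            # path a: x -> m
    b = ols_slopes([m[i], x[i]], y[i])[0]      # path b: m -> y, controlling x
    boot.append(a * b)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero supports mediation of the x-y relation through m.
```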