1
Mantegna F, Olivetti E, Schwedhelm P, Baldauf D. Covariance-based decoding reveals a category-specific functional connectivity network for imagined visual objects. Neuroimage 2025; 311:121171. [PMID: 40139516] [DOI: 10.1016/j.neuroimage.2025.121171]
Abstract
The coordination of different brain regions is required for the visual imagery of complex objects (e.g., faces and places). Short-range connectivity within sensory areas is necessary to construct the mental image. Long-range connectivity between control and sensory areas is necessary to re-instantiate and maintain the mental image. While dynamic changes in functional connectivity are expected during visual imagery, it is unclear whether a category-specific network exists in which the strength and the spatial destination of the connections vary depending on the imagery target. In this magnetoencephalography study, we used a minimally constrained experimental paradigm wherein imagery categories were prompted using visual word cues only, and we decoded face versus place imagery based on their underlying functional connectivity patterns as estimated from the spatial covariance across brain regions. A subnetwork analysis further disentangled the contribution of different connections. The results show that face and place imagery can be decoded from both short-range and long-range connections. Overall, these findings indicate that imagined object categories can be distinguished based on functional connectivity patterns observed in a category-specific network. Notably, the functional connectivity estimates rely on purely endogenous brain signals, suggesting that an external reference is not necessary to elicit such category-specific network dynamics.
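The covariance-based decoding idea can be illustrated with a toy sketch (synthetic data, not the authors' MEG pipeline; all names and dimensions here are invented for illustration): each trial is a (regions x time) array whose spatial covariance, which carries the coupling between regions, is vectorized and fed to a linear classifier.

```python
# Toy sketch of covariance-based decoding on synthetic data: trials of one
# class couple two "regions", so their spatial covariance is separable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_regions, n_times = 80, 6, 100

def simulate_trial(coupled):
    # "Face" trials couple regions 0 and 1; "place" trials leave them independent.
    x = rng.standard_normal((n_regions, n_times))
    if coupled:
        x[1] = 0.7 * x[0] + 0.3 * x[1]
    return x

labels = np.array([0, 1] * (n_trials // 2))
trials = [simulate_trial(c) for c in labels]

def covariance_features(trial):
    # Spatial covariance across time; keep the upper triangle (with diagonal).
    c = np.cov(trial)
    iu = np.triu_indices(n_regions)
    return c[iu]

X = np.array([covariance_features(t) for t in trials])
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2f}")  # chance level is 0.5
```

Real MEG analyses operate on source-reconstructed signals and many more regions, but the core step, summarizing each trial by its between-region covariance before classification, has the same shape.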
Affiliation(s)
- Francesco Mantegna
- Department of Psychology, New York University, New York, NY 10003, USA; Department of Engineering Science, Oxford University, Oxford, Oxfordshire, United Kingdom; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy.
- Emanuele Olivetti
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Mattarello, TN 38100, Italy; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Philipp Schwedhelm
- Functional Imaging Laboratory, German Primate Center - Leibniz Institute for Primate Research, Goettingen, 37077, Germany; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Daniel Baldauf
- CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
2
Bruera A, Poesio M. Electroencephalography Searchlight Decoding Reveals Person- and Place-specific Responses for Semantic Category and Familiarity. J Cogn Neurosci 2025; 37:135-154. [PMID: 38319891] [DOI: 10.1162/jocn_a_02125]
Abstract
Proper names are linguistic expressions referring to unique entities, such as individual people or places. This sets them apart from other words like common nouns, which refer to generic concepts. And yet, despite both being individual entities, one's closest friend and one's favorite city are intuitively associated with very different pieces of knowledge: face, voice, social relationships, and autobiographical experiences for the former, and mostly visual and spatial information for the latter. Neuroimaging research has revealed the existence of both domain-general and domain-specific brain correlates of semantic processing of individual entities; however, it remains unclear how such commonalities and differences operate over a fine-grained temporal scale. In this work, we tackle this question using EEG and multivariate (time-resolved and searchlight) decoding analyses. We look at when and where we can accurately decode the semantic category of a proper name and whether we can find person- or place-specific effects of familiarity, which is a modality-independent dimension and therefore avoids the sensorimotor differences inherent between the two categories. Semantic category can be decoded in a time window and with a spatial localization typically associated with lexical-semantic processing. Regarding familiarity, our results reveal, first, that it is easier to distinguish patterns of familiarity-related evoked activity for people, as opposed to places, in both early and late time windows. Second, we discover that within the early responses, both domain-general (left posterior-lateral) and domain-specific (right fronto-temporal, only for people) neural patterns can be distinguished, suggesting the existence of person-specific processes.
Affiliation(s)
- Andrea Bruera
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Queen Mary University of London
3
Bruera A, Poesio M. Family lexicon: Using language models to encode memories of personally familiar and famous people and places in the brain. PLoS One 2024; 19:e0291099. [PMID: 39576771] [PMCID: PMC11584084] [DOI: 10.1371/journal.pone.0291099]
Abstract
Knowledge about personally familiar people and places is extremely rich and varied, involving pieces of semantic information connected in unpredictable ways through past autobiographical memories. In this work, we investigate whether we can capture brain processing of personally familiar people and places using subject-specific memories, after transforming them into vectorial semantic representations using language models. First, we asked participants to provide us with the names of the closest people and places in their lives. Then we collected open-ended answers to a questionnaire, aimed at capturing various facets of declarative knowledge. We collected EEG data from the same participants while they were reading the names and subsequently mentally visualizing their referents. As a control set of stimuli, we also recorded evoked responses to a matched set of famous people and places. We then created original semantic representations for the individual entities using language models. For personally familiar entities, we used the text of the answers to the questionnaire. For famous entities, we employed their Wikipedia page, which reflects shared declarative knowledge about them. Through whole-scalp time-resolved and searchlight encoding analyses, we found that we could capture how the brain processes one's closest people and places using person-specific answers to questionnaires, as well as famous entities. Overall encoding performance was significant in a large time window (200-800ms). Using a spatio-temporal EEG searchlight, we found that we could predict brain responses significantly better than chance earlier (200-500ms) in bilateral temporo-parietal electrodes and later (500-700ms) in frontal and posterior central electrodes. We also found that XLM, a contextualized (or large) language model, provided superior encoding scores when compared with a simpler static language model such as word2vec.
Overall, these results indicate that language models can capture subject-specific semantic representations as they are processed in the human brain, by exploiting small-scale distributional lexical data.
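The encoding-model logic described above can be sketched minimally (synthetic data; the variable names, dimensions, and regularization are illustrative assumptions, not the paper's pipeline): fit a linear map from semantic vectors to brain responses, then score held-out predictions by correlating them with the observed responses.

```python
# Schematic encoding analysis: ridge regression from "embeddings" (stand-ins
# for language-model vectors) to per-entity responses (stand-ins for EEG),
# scored by held-out prediction-observation correlation, channel by channel.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n_entities, n_dims, n_channels = 60, 20, 8

embeddings = rng.standard_normal((n_entities, n_dims))
true_map = rng.standard_normal((n_dims, n_channels)) / np.sqrt(n_dims)
# Simulated responses: linear function of the embeddings plus noise.
responses = embeddings @ true_map + 0.5 * rng.standard_normal((n_entities, n_channels))

scores = []
for train, test in KFold(n_splits=5).split(embeddings):
    model = Ridge(alpha=1.0).fit(embeddings[train], responses[train])
    pred = model.predict(embeddings[test])
    for ch in range(n_channels):  # one correlation per channel per fold
        scores.append(np.corrcoef(pred[:, ch], responses[test, ch])[0, 1])

print(f"mean held-out correlation: {np.mean(scores):.2f}")
```

In a real time-resolved analysis this fit is repeated per timepoint (and, for a searchlight, per electrode neighborhood), with significance assessed against chance and corrected for multiple comparisons.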
Affiliation(s)
- Andrea Bruera
- Max Planck Institute for Human Cognitive and Brain Sciences, Cognition and Plasticity Research Group, Leipzig, Germany
- Queen Mary University of London, London, United Kingdom
- Massimo Poesio
- Max Planck Institute for Human Cognitive and Brain Sciences, Cognition and Plasticity Research Group, Leipzig, Germany
4
Ye Q, Fidalgo C, Byrne P, Muñoz LE, Cant JS, Lee ACH. Using imagination and the contents of memory to create new scene and object representations: A functional MRI study. Neuropsychologia 2024; 204:109000. [PMID: 39271053] [DOI: 10.1016/j.neuropsychologia.2024.109000]
Abstract
Humans can use the contents of memory to construct scenarios and events that they have not encountered before, a process colloquially known as imagination. Much of our current understanding of the neural mechanisms mediating imagination is limited by paradigms that rely on participants' subjective reports of imagined content. Here, we used a novel behavioral paradigm that was designed to systematically evaluate the contents of an individual's imagination. Participants first learned the layout of four distinct rooms containing five wall segments with differing geometrical characteristics, each associated with a unique object. During functional MRI, participants were then shown two different wall segments or objects on each trial and asked first to retrieve the associated objects or walls, respectively (retrieval phase), and then to imagine the two objects side-by-side or combine the two wall segments (imagination phase). Importantly, the contents of each participant's imagination were interrogated by having them make a same/different judgment about the properties of the imagined objects or scenes. Using univariate and multivariate analyses, we observed widespread activity across occipito-temporal cortex for the retrieval of objects and for the imaginative creation of scenes. Interestingly, a classifier, whether trained on the imagination or retrieval data, was able to successfully differentiate the neural patterns associated with the imagination of scenes from that of objects. Our results reveal neural differences in the cued retrieval of object and scene memoranda, demonstrate that different representations underlie the creation and/or imagination of scene and object content, and highlight a novel behavioral paradigm that can be used to systematically evaluate the contents of an individual's imagination.
Affiliation(s)
- Qun Ye
- Intelligent Laboratory of Child and Adolescent Mental Health and Crisis Intervention of Zhejiang Province, School of Psychology, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
- Celia Fidalgo
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, M1C 1A4, Canada
- Patrick Byrne
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, M1C 1A4, Canada
- Luis Eduardo Muñoz
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, M1C 1A4, Canada
- Jonathan S Cant
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, M1C 1A4, Canada
- Andy C H Lee
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, M6A 2E1, Canada
5
Montabes de la Cruz BM, Abbatecola C, Luciani RS, Paton AT, Bergmann J, Vetter P, Petro LS, Muckli LF. Decoding sound content in the early visual cortex of aphantasic participants. Curr Biol 2024; 34:5083-5089.e3. [PMID: 39419030] [DOI: 10.1016/j.cub.2024.09.008]
Abstract
Listening to natural auditory scenes leads to distinct neuronal activity patterns in the early visual cortex (EVC) of blindfolded sighted and congenitally blind participants [1,2]. This pattern of sound decoding is organized by eccentricity, with the accuracy of auditory information increasing from foveal to far peripheral retinotopic regions in the EVC (V1, V2, and V3). This functional organization by eccentricity is predicted by primate anatomical connectivity [3,4], where cortical feedback projections from auditory and other non-visual areas preferentially target the periphery of early visual areas. In congenitally blind participants, top-down feedback projections to the visual cortex proliferate [5], which might account for even higher sound-decoding accuracy in the EVC compared with blindfolded sighted participants [2]. In contrast, studies in participants with aphantasia suggest an impairment of feedback projections to early visual areas, leading to a loss of visual imagery experience [6,7]. This raises the question of whether impaired visual feedback pathways in aphantasia also reduce the transmission of auditory information to early visual areas. We presented auditory scenes to 23 blindfolded aphantasic participants. We found overall decreased sound decoding in early visual areas compared to blindfolded sighted ("control") and blind participants. We further explored this difference by modeling eccentricity effects across the blindfolded control, blind, and aphantasia datasets, and with a whole-brain searchlight analysis. Our findings suggest that the feedback of auditory content to the EVC is reduced in aphantasic participants. Reduced top-down projections might lead to both less sound decoding and reduced subjective experience of visual imagery.
Affiliation(s)
- Belén M Montabes de la Cruz
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK
- Clement Abbatecola
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Roberto S Luciani
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; School of Computing Science, College of Science and Engineering, University of Glasgow, Glasgow G12 8QQ, UK
- Angus T Paton
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Johanna Bergmann
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1, Leipzig 04303, Germany
- Petra Vetter
- Visual & Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg 1700, Switzerland
- Lucy S Petro
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Lars F Muckli
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
6
Megla E, Prasad D, Bainbridge WA. The Neural Underpinnings of Aphantasia: A Case Study of Identical Twins. bioRxiv 2024:2024.09.23.614521. [PMID: 39386622] [PMCID: PMC11463508] [DOI: 10.1101/2024.09.23.614521]
Abstract
Aphantasia is a condition characterized by reduced voluntary mental imagery. As this lack of mental imagery disrupts visual memory, understanding the nature of this condition can provide important insight into memory, perception, and imagery. Here, we leveraged the power of case studies to better characterize this condition by running a pair of identical twins, one with aphantasia and one without, through mental imagery tasks in an fMRI scanner. We identified objective, neural measures of aphantasia, finding less visual information in the aphantasic twin's memories, which may be due to lower connectivity between the frontoparietal and occipitotemporal lobes of the brain. However, despite this difference, we surprisingly found more visual information in the aphantasic twin's memory than anticipated, suggesting that aphantasia is a spectrum rather than a discrete condition.
Affiliation(s)
- Emma Megla
- Department of Psychology, University of Chicago, Chicago, IL
- Deepasri Prasad
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH
- Wilma A. Bainbridge
- Department of Psychology, University of Chicago, Chicago, IL
- Neuroscience Institute, University of Chicago, Chicago, IL
7
Spagna A, Heidenry Z, Miselevich M, Lambert C, Eisenstadt BE, Tremblay L, Liu Z, Liu J, Bartolomeo P. Visual mental imagery: Evidence for a heterarchical neural architecture. Phys Life Rev 2024; 48:113-131. [PMID: 38217888] [DOI: 10.1016/j.plrev.2023.12.012]
Abstract
Theories of Visual Mental Imagery (VMI) emphasize the processes of retrieval, modification, and recombination of sensory information from long-term memory. Yet, only a few studies have focused on the behavioral mechanisms and neural correlates supporting VMI of stimuli from different semantic domains. Therefore, we currently have a limited understanding of how the brain generates and maintains mental representations of colors, faces, shapes - to name a few. This uncertainty leaves unclear the organizational structure of the neural circuits supporting VMI, including the role of the early visual cortex. We aimed to fill this gap by reviewing the scientific literature on five semantic domains: visuospatial, face, color, shape, and letter imagery. Linking theory to evidence from over 60 different experimental designs, this review highlights three main points. First, there is no consistent activity in the early visual cortex across all VMI domains, contrary to the prediction of the dominant model. Second, there is consistent activity of the frontoparietal networks and the left hemisphere's fusiform gyrus during voluntary VMI irrespective of the semantic domain investigated. We propose that these structures are part of a domain-general VMI sub-network. Third, domain-specific information engages specific regions of the ventral and dorsal cortical visual pathways. These regions partly overlap with those found in visual perception studies (e.g., the fusiform face area for face imagery; the lingual gyrus for color imagery). Altogether, the reviewed evidence suggests the existence of domain-general and domain-specific mechanisms of VMI selectively engaged by stimulus-specific properties (e.g., colors or faces). These mechanisms would be supported by an organizational structure mixing vertical and horizontal connections (heterarchy) between sub-networks for specific stimulus domains. Such a heterarchical organization of VMI makes different predictions from current models of VMI as reversed perception. Our conclusions set the stage for future research, which should aim to characterize the spatiotemporal dynamics and interactions among key regions of this architecture giving rise to visual mental images.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Zoe Heidenry
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Chloe Lambert
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Laura Tremblay
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California; Department of Neurology, VA Northern California Health Care System, Martinez, California
- Zixin Liu
- Department of Human Development, Teachers College, Columbia University, NY, 10027, USA
- Jianghao Liu
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France
- Paolo Bartolomeo
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France
8
Chen L, Cichy RM, Kaiser D. Alpha-frequency feedback to early visual cortex orchestrates coherent naturalistic vision. Sci Adv 2023; 9:eadi2321. [PMID: 37948520] [PMCID: PMC10637741] [DOI: 10.1126/sciadv.adi2321]
Abstract
During naturalistic vision, the brain generates coherent percepts by integrating sensory inputs scattered across the visual field. Here, we asked whether this integration process is mediated by rhythmic cortical feedback. In electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) experiments, we experimentally manipulated integrative processing by changing the spatiotemporal coherence of naturalistic videos presented across visual hemifields. Our EEG data revealed that information about incoherent videos is coded in feedforward-related gamma activity while information about coherent videos is coded in feedback-related alpha activity, indicating that integration is indeed mediated by rhythmic activity. Our fMRI data identified scene-selective cortex and human middle temporal complex (hMT) as likely sources of this feedback. Analytically combining our EEG and fMRI data further revealed that feedback-related representations in the alpha band shape the earliest stages of visual processing in cortex. Together, our findings indicate that the construction of coherent visual experiences relies on cortical feedback rhythms that fully traverse the visual hierarchy.
Affiliation(s)
- Lixiang Chen
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Radoslaw M. Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg 35032, Germany
9
Li S, Zeng X, Shao Z, Yu Q. Neural Representations in Visual and Parietal Cortex Differentiate between Imagined, Perceived, and Illusory Experiences. J Neurosci 2023; 43:6508-6524. [PMID: 37582626] [PMCID: PMC10513072] [DOI: 10.1523/jneurosci.0592-23.2023]
Abstract
Humans constantly receive massive amounts of information, both perceived from the external environment and imagined from the internal world. To function properly, the brain needs to correctly identify the origin of the information being processed. Recent work has suggested common neural substrates for perception and imagery. However, it has remained unclear how the brain differentiates between external and internal experiences with shared neural codes. Here we tested this question in human participants (male and female) by systematically investigating the neural processes underlying the generation and maintenance of visual information from voluntary imagery, veridical perception, and illusion. The inclusion of illusion allowed us to differentiate between objective and subjective internality: while illusion has an objectively internal origin and can be viewed as involuntary imagery, it is also subjectively perceived as having an external origin like perception. Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, we observed superior orientation representations in parietal cortex during imagery compared with perception, and conversely in early visual cortex. This imagery dominance gradually developed along a posterior-to-anterior cortical hierarchy from early visual to parietal cortex, emerged in the early epoch of imagery and sustained into the delay epoch, and persisted across varied imagined contents. Moreover, the representational strength of illusion was more comparable to imagery in early visual cortex, but more comparable to perception in parietal cortex, suggesting that content-specific representations in parietal cortex, as opposed to early visual cortex, differentiate between subjectively internal and external experiences. These findings together support a domain-general engagement of parietal cortex in internally generated experience.
SIGNIFICANCE STATEMENT: How does the brain differentiate between imagined and perceived experiences? Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, the current study revealed enhanced stimulus-specific representations in visual imagery originating from parietal cortex, supporting the subjective experience of imagery. This neural principle was further validated by evidence from visual illusion, wherein illusion resembled perception and imagery at different levels of the cortical hierarchy. Our findings provide direct evidence for the critical role of parietal cortex as a domain-general region for content-specific imagery, and offer new insights into the neural mechanisms underlying the differentiation between subjectively internal and external experiences.
Affiliation(s)
- Siyi Li
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Xuemei Zeng
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhujun Shao
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qing Yu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
10
Zaleskiewicz T, Traczyk J, Sobkow A, Fulawka K, Megías-Robles A. Visualizing risky situations induces a stronger neural response in brain areas associated with mental imagery and emotions than visualizing non-risky situations. Front Hum Neurosci 2023; 17:1207364. [PMID: 37795209] [PMCID: PMC10546025] [DOI: 10.3389/fnhum.2023.1207364]
Abstract
In an fMRI study, we tested the prediction that visualizing risky situations induces a stronger neural response in brain areas associated with mental imagery and emotions than visualizing non-risky and more positive situations. We assumed that processing mental images that allow for "trying out" the future has greater adaptive importance for risky than for non-risky situations, because the former can generate severe negative outcomes. We identified several brain regions that were activated when participants produced images of risky situations, and these regions overlap with brain areas engaged in visual, speech, and movement imagery. We also found that producing images of risky situations, in contrast to non-risky situations, was associated with increased neural activation in the insular cortex and cerebellum, regions involved, among other functions, in emotional processing. Finally, we observed an increased BOLD signal in the cingulate gyrus, associated with reward-based decision making and the monitoring of decision outcomes. In summary, risky situations increased neural activation in brain areas involved in mental imagery, emotional processing, and decision making. These findings imply that the evaluation of everyday risky situations may be driven by emotional responses that result from mental imagery.
Affiliation(s)
- Tomasz Zaleskiewicz
- Faculty of Psychology in Wrocław, SWPS University of Social Sciences and Humanities, Wrocław, Poland
- Jakub Traczyk
- Faculty of Psychology in Wrocław, SWPS University of Social Sciences and Humanities, Wrocław, Poland
- Agata Sobkow
- Faculty of Psychology in Wrocław, SWPS University of Social Sciences and Humanities, Wrocław, Poland
- Kamil Fulawka
- Faculty of Psychology in Wrocław, SWPS University of Social Sciences and Humanities, Wrocław, Poland
11
Liao MR, Grindell JD, Anderson BA. A comparison of mental imagery and perceptual cueing across domains of attention. Atten Percept Psychophys 2023; 85:1834-1845. [PMID: 37349626] [DOI: 10.3758/s13414-023-02747-9]
Abstract
Mental imagery and perceptual cues can influence subsequent visual search performance, but examination of this influence has been limited to low-level features like colors and shapes. The present study investigated how the two types of cues influence low-level visual search, visual search with realistic objects, and executive attention. On each trial, participants were either presented with a colored square or tasked with using mental imagery to generate a colored square that could match the target (valid trial) or distractor (invalid trial) in the search array that followed (Experiments 1 and 3). In a separate experiment, the colored square displayed or generated was replaced with a realistic object in a specific category that could appear as a target or distractor in the search array (Experiment 2). Although the displayed object was in the same category as an item in the search display, they were never a perfect match (e.g., jam drop cookie instead of chocolate chip). Our findings revealed that the facilitation of performance on valid trials compared with invalid trials was greater for perceptual cues than imagery cues for low-level features (Experiment 1), whereas the influence of these two types of cues was comparable in the context of realistic objects (Experiment 2). The influence of mental imagery appears not to extend to the resolution of conflict generated by color-word Stroop stimuli (Experiment 3). The present findings extend our understanding of how mental imagery influences the allocation of attention.
Affiliation(s)
- Ming-Ray Liao
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA
- James D Grindell
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA
- Brian A Anderson
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA