1. Della Sala S, Zhao B. The devil is in the method details. Comment on 'Visual mental imagery: Evidence for a heterarchical neural architecture' by Spagna et al. Phys Life Rev 2024;49:97-99. PMID: 38569378. DOI: 10.1016/j.plrev.2024.03.008.
Affiliation(s)
- Sergio Della Sala: Human Cognitive Neuroscience, Psychology Department, University of Edinburgh, UK
- Binglei Zhao: Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
2. Stecher R, Kaiser D. Representations of imaginary scenes and their properties in cortical alpha activity. Sci Rep 2024;14:12796. PMID: 38834699. DOI: 10.1038/s41598-024-63320-4.
Abstract
Imagining natural scenes enables us to engage with a myriad of simulated environments. How do our brains generate such complex mental images? Recent research suggests that cortical alpha activity carries information about individual objects during visual imagery. However, it remains unclear if more complex imagined contents such as natural scenes are similarly represented in alpha activity. Here, we answer this question by decoding the contents of imagined scenes from rhythmic cortical activity patterns. In an EEG experiment, participants imagined natural scenes based on detailed written descriptions, which conveyed four complementary scene properties: openness, naturalness, clutter level and brightness. By conducting classification analyses on EEG power patterns across neural frequencies, we were able to decode both individual imagined scenes and their properties from the alpha band, showing that the contents of complex visual images are also represented in alpha rhythms. A cross-classification analysis between alpha power patterns during the imagery task and during a perception task, in which participants were presented with images of the described scenes, showed that scene representations in the alpha band are partly shared between imagery and late stages of perception. This suggests that alpha activity mediates the top-down re-activation of scene-related visual contents during imagery.
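The classification approach described above, decoding imagined content from band-limited EEG power patterns, can be sketched with simulated data. This is an illustrative toy, not the authors' pipeline: the trial count, channel count, effect size, and classifier choice below are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated data: 200 trials x 64 channels of alpha-band (8-12 Hz) power.
# Two levels of an imagined scene property (e.g., open vs. closed), with a
# weak property-dependent shift added to a subset of posterior channels.
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)
power = rng.normal(size=(n_trials, n_channels))
power[labels == 1, :16] += 0.5  # property-specific alpha pattern

# Decode the scene property from alpha power patterns with cross-validation,
# as in typical EEG multivariate decoding analyses.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, power, labels, cv=5)
print(round(scores.mean(), 2))  # well above the 0.5 chance level
```

In the actual analysis, the feature vectors would be per-trial alpha power estimates per electrode, and the procedure would be repeated across frequency bands and time windows.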
Affiliation(s)
- Rico Stecher: Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, 35392 Gießen, Germany
- Daniel Kaiser: Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, 35392 Gießen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg and Justus Liebig University Gießen, 35032 Marburg, Germany
3. Dietz CD, Albonico A, Tree JJ, Barton JJS. Visual imagery deficits in posterior cortical atrophy. Cogn Neuropsychol 2024:1-16. PMID: 38698499. DOI: 10.1080/02643294.2024.2346362.
Abstract
Visual imagery has a close overlapping relationship with visual perception. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome marked by early impairments in visuospatial processing and visual object recognition. We asked whether PCA would therefore also be marked by deficits in visual imagery, tested using objective forced-choice questionnaires, and whether imagery deficits would be selective for certain properties. We recruited four patients with PCA and a patient with integrative visual agnosia due to bilateral occipitotemporal strokes for comparison. We administered a test battery probing imagery for object shape, size, colour lightness, hue, upper-case letters, lower-case letters, word shape, letter construction, and faces. All subjects showed significant impairments in visual imagery, with imagery for lower-case letters most likely to be spared. We conclude that PCA subjects can show severe deficits in visual imagery. Further work is needed to establish how frequently this occurs and how early it can be found.
Affiliation(s)
- Connor D Dietz: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Andrea Albonico: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Jeremy J Tree: Department of Psychology, Swansea University, Swansea, UK
- Jason J S Barton: Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
4. Dijkstra N. Uncovering the Role of the Early Visual Cortex in Visual Mental Imagery. Vision (Basel) 2024;8:29. PMID: 38804350. PMCID: PMC11130976. DOI: 10.3390/vision8020029.
Abstract
The question of whether the early visual cortex (EVC) is involved in visual mental imagery remains a topic of debate. In this paper, I propose that the inconsistency in findings can be explained by the unique challenges associated with investigating EVC activity during imagery. During perception, the EVC processes low-level features, which means that activity is highly sensitive to variation in visual details. If the EVC has the same role during visual mental imagery, any change in the visual details of the mental image would lead to corresponding changes in EVC activity. Within this context, the question should not be whether the EVC is 'active' during imagery but how its activity relates to specific imagery properties. Studies using methods that are sensitive to variation in low-level features reveal that imagery can recruit the EVC in similar ways as perception. However, not all mental images contain a high level of visual details. Therefore, I end by considering a more nuanced view, which states that imagery can recruit the EVC, but that does not mean that it always does so.
Affiliation(s)
- Nadine Dijkstra: Department of Imaging Neuroscience, Institute of Neurology, University College London, London WC1E 6BT, UK
5. Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024;199:112309. PMID: 38242363. DOI: 10.1016/j.ijpsycho.2024.112309.
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four music excerpts while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show that both static and dynamic imagery were associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. In the beta band, static imagery was associated with beta enhancement, whereas dynamic imagery elicited beta suppression followed by enhancement. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found for static and dynamic imagery in the alpha, beta, and gamma bands. Taken together, our results show the promise of music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work on the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim: Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner: Department of Psychology, Goldsmiths, University of London, United Kingdom; Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich: Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie: Department of Psychology, Goldsmiths, University of London, United Kingdom
6. Dijkstra N, Mazor M, Fleming SM. Confidence ratings do not distinguish imagination from reality. J Vis 2024;24:13. PMID: 38814936. PMCID: PMC11146086. DOI: 10.1167/jov.24.5.13.
Abstract
Perceptual reality monitoring refers to the ability to distinguish internally triggered imagination from externally triggered reality. Such monitoring can take place at perceptual or cognitive levels; for example, in lucid dreaming, perceptual experience feels real but is accompanied by the cognitive insight that it is not. We recently developed a paradigm to reveal perceptual reality monitoring errors during wakefulness in the general population, showing that imagined signals can be erroneously attributed to perception during a perceptual detection task. In the current study, we set out to investigate whether people have insight into perceptual reality monitoring errors by additionally measuring perceptual confidence. We used hierarchical Bayesian modeling of confidence criteria to characterize metacognitive insight into the effects of imagery on detection. Over two experiments, we found that confidence criteria moved in tandem with the shift in decision criterion, indicating a failure of reality monitoring not only at a perceptual but also at a metacognitive level. These results further show that such failures have a perceptual rather than a decisional origin. Interestingly, offline queries at the end of the experiment revealed global, task-level insight, which was uncorrelated with local, trial-level insight as measured with confidence ratings. Taken together, our results demonstrate that confidence ratings do not distinguish imagination from reality during perceptual detection. Future research should further explore the different cognitive dimensions of insight into reality judgments and how they are related.
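The paper's core signal detection logic, that confidence criteria shifting in tandem with the decision criterion leave confidence blind to imagery-induced false alarms, can be illustrated with a toy simulation. The parameter values below are assumptions, and the actual study used hierarchical Bayesian modeling of empirical confidence ratings, not this simplified simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Signal detection sketch: congruent imagery adds evidence on
# stimulus-absent trials, pushing responses toward "present".
n = 200_000
criterion = 0.5          # decision criterion for reporting "present"
conf_criterion = 1.3     # confidence criterion moving in tandem (assumed)
imagery_boost = 0.4      # assumed evidence added by congruent imagery

absent = rng.normal(0.0, 1.0, n)          # evidence on stimulus-absent trials
fa_plain = (absent > criterion).mean()
fa_imagery = (absent + imagery_boost > criterion).mean()

conf_fa_plain = (absent > conf_criterion).mean()
conf_fa_imagery = (absent + imagery_boost > conf_criterion).mean()

# Both overall and high-confidence false alarms rise under imagery, so
# confidence ratings carry no signature of the reality-monitoring error.
print(fa_imagery > fa_plain, conf_fa_imagery > conf_fa_plain)  # True True
```

If confidence criteria instead stayed fixed while the decision criterion shifted, high-confidence false alarms would not rise proportionally, and confidence would flag the bias; the observed tandem shift is what makes imagery and reality metacognitively indistinguishable.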
Affiliation(s)
- Nadine Dijkstra: Department of Imaging Neuroscience, University College London, London, UK (https://sites.google.com/view/nadinedijkstra)
- Matan Mazor: All Souls College and Department of Experimental Psychology, University of Oxford, Oxford, UK (matanmazor.github.io)
- Stephen M Fleming: Department of Imaging Neuroscience, University College London, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Department of Experimental Psychology, University College London, London, UK (https://metacoglab.org/)
7. Zeman A. Aphantasia and hyperphantasia: exploring imagery vividness extremes. Trends Cogn Sci 2024;28:467-480. PMID: 38548492. DOI: 10.1016/j.tics.2024.02.007.
Abstract
The vividness of imagery varies between individuals. However, the existence of people in whom conscious, wakeful imagery is markedly reduced, or absent entirely, was neglected by psychology until the recent coinage of 'aphantasia' to describe this phenomenon. 'Hyperphantasia' denotes the converse: imagery whose vividness rivals perceptual experience. Around 1% and 3% of the population experience extreme aphantasia and hyperphantasia, respectively. Aphantasia runs in families, often affects imagery across several sense modalities, and is variably associated with reduced autobiographical memory, face recognition difficulty, and autism. Visual dreaming is often preserved. Subtypes of extreme imagery are likely but not yet well defined. Initial results suggest that alterations in connectivity between the frontoparietal and visual networks may provide the neural substrate for visual imagery extremes.
Affiliation(s)
- Adam Zeman: Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK; University of Exeter Medical School, Exeter, UK
8. Dado T, Papale P, Lozano A, Le L, Wang F, van Gerven M, Roelfsema P, Güçlütürk Y, Güçlü U. Brain2GAN: Feature-disentangled neural encoding and decoding of visual perception in the primate brain. PLoS Comput Biol 2024;20:e1012058. PMID: 38709818. PMCID: PMC11098503. DOI: 10.1371/journal.pcbi.1012058.
Abstract
A challenging goal of neural coding is to characterize the neural representations underlying visual perception. To this end, multi-unit activity (MUA) of macaque visual cortex was recorded in a passive fixation task upon presentation of faces and natural images. We analyzed the relationship between MUA and latent representations of state-of-the-art deep generative models, including the conventional and feature-disentangled representations of generative adversarial networks (GANs) (i.e., z- and w-latents of StyleGAN, respectively) and language-contrastive representations of latent diffusion networks (i.e., CLIP-latents of Stable Diffusion). A mass univariate neural encoding analysis of the latent representations showed that feature-disentangled w representations outperform both z and CLIP representations in explaining neural responses. Further, w-latent features were found to be positioned at the higher end of the complexity gradient, which indicates that they capture visual information relevant to high-level neural activity. Subsequently, a multivariate neural decoding analysis of the feature-disentangled representations resulted in state-of-the-art spatiotemporal reconstructions of visual perception. Taken together, our results not only highlight the important role of feature disentanglement in shaping high-level neural representations underlying visual perception but also serve as an important benchmark for the future of neural coding.
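A mass univariate encoding analysis of this kind fits a regularized linear model from each candidate feature space to each recording site and compares feature spaces by cross-validated prediction accuracy. The following is a minimal simulated sketch of that comparison; the dimensions, data, and regularization are invented for illustration and are not the macaque MUA or the authors' exact model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Two candidate feature spaces for the same stimuli; the neural response
# is simulated as a noisy linear readout of the first ("disentangled") one.
n_stim, n_feat = 300, 50
w_latents = rng.normal(size=(n_stim, n_feat))   # stand-in for w-latents
z_latents = rng.normal(size=(n_stim, n_feat))   # stand-in control space

true_weights = rng.normal(size=n_feat)
responses = w_latents @ true_weights + rng.normal(scale=2.0, size=n_stim)

def encoding_score(features, y):
    """Mean cross-validated R^2 of a ridge encoding model."""
    return cross_val_score(Ridge(alpha=1.0), features, y,
                           cv=5, scoring="r2").mean()

score_w = encoding_score(w_latents, responses)
score_z = encoding_score(z_latents, responses)
print(score_w > score_z)  # the feature space that drove the response wins
```

In the full analysis this comparison is run per recording site, yielding a map of which feature space best explains each site's responses.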
Affiliation(s)
- Thirza Dado: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Paolo Papale: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Antonio Lozano: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Lynn Le: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Feng Wang: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Marcel van Gerven: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Pieter Roelfsema: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands; Laboratory of Visual Brain Therapy, Sorbonne University, Paris, France; Department of Integrative Neurophysiology, VU Amsterdam, Amsterdam, Netherlands; Department of Psychiatry, Amsterdam UMC, Amsterdam, Netherlands
- Yağmur Güçlütürk: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Umut Güçlü: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
9. Della Vedova G, Proverbio AM. Neural signatures of imaginary motivational states: desire for music, movement and social play. Brain Topogr 2024. PMID: 38625520. DOI: 10.1007/s10548-024-01047-1.
Abstract
The literature has demonstrated the potential for detecting accurate electrical signals that correspond to the will or intention to move, as well as for decoding the thoughts of individuals who imagine houses, faces or objects. This investigation examines the presence of precise neural markers of imagined motivational states by combining electrophysiological and neuroimaging methods. Twenty participants were instructed to vividly imagine the desire to move, listen to music or engage in social activities. Their EEG was recorded from 128 scalp sites and analysed using individual standardized Low-Resolution Brain Electromagnetic Tomographies (LORETAs) in the N400 time window (400-600 ms). The activation of 1056 voxels was examined in relation to the three motivational states. The most active dipoles were grouped into eight regions of interest (ROIs), including Occipital, Temporal, Fusiform, Premotor, Frontal, OBF/IF, Parietal, and Limbic areas. The statistical analysis revealed that all imagined motivational states engaged the right hemisphere more than the left. Distinct markers were identified for the three motivational states: the right temporal area was most relevant for social play, the orbitofrontal/inferior frontal cortex for listening to music, and the left premotor cortex for the desire to move. This outcome is encouraging for the potential use of neural indicators in brain-computer interfaces to interpret the thoughts and desires of individuals with locked-in syndrome.
Affiliation(s)
- Giada Della Vedova: Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Alice Mado Proverbio: Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20162 Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
10. Proverbio AM, Cesati F. Neural correlates of recalled sadness, joy, and fear states: a source reconstruction EEG study. Front Psychiatry 2024;15:1357770. PMID: 38638416. PMCID: PMC11024723. DOI: 10.3389/fpsyt.2024.1357770.
Abstract
Introduction: The capacity to understand others' emotional states, particularly negative ones (e.g. sadness or fear), underpins the empathic and social brain. Patients who cannot express their emotional states experience social isolation and loneliness, exacerbating distress. We investigated the feasibility of detecting non-invasive scalp-recorded electrophysiological signals that correspond to recalled emotional states of sadness, fear, and joy for potential classification.
Methods: The neural activation patterns of 20 healthy, right-handed participants were studied using an electrophysiological technique. Analyses focused on the N400 component of event-related potentials (ERPs) recorded during silent recall of subjective emotional states; standardized weighted Low-Resolution Electromagnetic Tomography (swLORETA) was employed for source reconstruction. The study classified individual patterns of brain activation linked to the recollection of three distinct emotional states into seven regions of interest (ROIs).
Results: Statistical analysis (ANOVA) of the individual magnitude values revealed the existence of a common emotional circuit, as well as distinct brain areas that were specifically active during recalled sad, happy and fearful states. In particular, the right temporal and left superior frontal areas were more active for sadness, the left limbic region for fear, and the right orbitofrontal cortex for happy affective states.
Discussion: This study demonstrated the feasibility of detecting scalp-recorded electrophysiological signals corresponding to internal and subjective affective states. These findings contribute to our understanding of the emotional brain and have potential applications for future BCI classification and identification of emotional states in patients with locked-in syndrome (LIS) who may be unable to express their emotions, thus helping to alleviate social isolation and the sense of loneliness.
Affiliation(s)
- Alice Mado Proverbio: Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Milan, Italy; NEURO-MI Milan Center for Neuroscience, Milan, Italy
- Federico Cesati: Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Milan, Italy
11. Wang J, Shen S, Becker B, Hei Lam Tsang M, Mei Y, Wikgren J, Lei Y. Neurocognitive mechanisms of mental imagery-based disgust learning. Behav Res Ther 2024;175:104502. PMID: 38402674. DOI: 10.1016/j.brat.2024.104502.
Abstract
Disgust imagery represents a potential pathological mechanism for disgust-related disorders. However, it remains controversial whether disgust can be conditioned with disgust-evoking mental imagery serving as the unconditioned stimulus (US). Therefore, we examined this using a conditioned learning paradigm in combination with event-related potential (ERP) analysis in 35 healthy college students. The results indicated that an initially neutral face (conditioned stimulus, CS+) became more disgust-evoking, unpleasant, and arousing after pairing with disgust-evoking imagery (disgust CS+), compared to pairing with neutral imagery (neutral CS+) or no imagery (CS-). Moreover, we observed that mental imagery-based disgust conditioning was resistant to extinction. While the disgust CS+ evoked larger P3 and late positive potential amplitudes than the CS- during acquisition, no significant differences were found between the disgust CS+ and the neutral CS+, indicating a dissociation between self-reported and neurophysiological responses. Future studies may additionally acquire facial EMG as an implicit index of conditioned disgust. This study provides the first neurobiological evidence that associative disgust learning can occur without aversive physical stimuli, with implications for understanding how disgust-related disorders may manifest or deteriorate without external aversive perceptual experiences, such as in obsessive-compulsive disorder (OCD).
Affiliation(s)
- Jinxia Wang: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, China; Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyvaskyla, Jyvaskyla, Finland
- Siyi Shen: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, China
- Benjamin Becker: State Key Laboratory of Brain and Cognitive Sciences, Department of Psychology, The University of Hong Kong, Hong Kong, China
- Michelle Hei Lam Tsang: State Key Laboratory of Brain and Cognitive Sciences, Department of Psychology, The University of Hong Kong, Hong Kong, China
- Ying Mei: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, China; Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyvaskyla, Jyvaskyla, Finland
- Jan Wikgren: Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyvaskyla, Jyvaskyla, Finland
- Yi Lei: Institute for Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, China
12. Krempel R, Monzel M. Aphantasia and involuntary imagery. Conscious Cogn 2024;120:103679. PMID: 38564857. DOI: 10.1016/j.concog.2024.103679.
Abstract
Aphantasia is a condition that is often characterized as the impaired ability to create voluntary mental images. Aphantasia is assumed to selectively affect voluntary imagery mainly because even though aphantasics report being unable to visualize something at will, many report having visual dreams. We argue that this common characterization of aphantasia is incorrect. Studies on aphantasia are often not clear about whether they are assessing voluntary or involuntary imagery, but some studies show that several forms of involuntary imagery are also affected in aphantasia (including imagery in dreams). We also raise problems for two attempts to show that involuntary images are preserved in aphantasia. In addition, we report the results of a study about afterimages in aphantasia, which suggest that these tend to be less intense in aphantasics than in controls. Involuntary imagery is often treated as a unitary kind that is either present or absent in aphantasia. We suggest that this approach is mistaken and that we should look at different types of involuntary imagery case by case. Doing so reveals no evidence of preserved involuntary imagery in aphantasia. We suggest that a broader characterization of aphantasia, as a deficit in forming mental imagery, whether voluntary or not, is more appropriate. Characterizing aphantasia as a volitional deficit is likely to lead researchers to give incorrect explanations for aphantasia, and to look for the wrong mechanisms underlying it.
Affiliation(s)
- Raquel Krempel: Center for Logic, Epistemology and History of Science, State University of Campinas, R. Sérgio Buarque de Holanda, 251 - Cidade Universitária, Campinas, SP 13083-859, Brazil; Center for Philosophy of Science, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA 15260, USA
- Merlin Monzel: Department of Psychology, Personality Psychology and Biological Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111 Bonn, Germany
13. Gu J, Deng K, Luo X, Ma W, Tang X. Investigating the different mechanisms in related neural activities: a focus on auditory perception and imagery. Cereb Cortex 2024;34:bhae139. PMID: 38629796. DOI: 10.1093/cercor/bhae139.
Abstract
Neuroimaging studies have shown that the neural representation of imagery is closely related to that of the corresponding perception modality. However, the clear experiential differences between perception and imagery indicate underlying differences in neural mechanisms, which cannot be explained by the simple theory that imagery is a form of weak perception. Given the importance of functional integration of brain regions in neural activity, we conducted correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception, yielding brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception were significantly distinguishable. Subsequently, voxel-level FC analysis further identified the regions whose voxels showed significant connectivity differences between the two modalities. This study characterizes both the correspondences and the differences between auditory imagery and perception in terms of brain information interaction, and provides a new perspective for investigating the neural mechanisms of different modal information representations.
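The FC analysis described here can be sketched in a few lines: FC between regions is the correlation matrix of their time series, and condition differences appear in specific edges. The following is a simulated illustration only; the region count, coupling values, and data are invented and do not reflect the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate ROI time series in which one pair of regions (ROI 0 and ROI 1)
# shares a common driving signal, with coupling strength varying by condition.
n_time, n_rois = 200, 6
shared = rng.normal(size=n_time)

def fc_matrix(coupling):
    """Return the ROI-by-ROI correlation (FC) matrix for one condition."""
    ts = rng.normal(size=(n_rois, n_time))
    ts[0] += coupling * shared
    ts[1] += coupling * shared
    return np.corrcoef(ts)

fc_perception = fc_matrix(coupling=1.5)   # strong ROI0-ROI1 coupling
fc_imagery = fc_matrix(coupling=0.5)      # weaker coupling in imagery

# The condition difference is carried by a specific edge of the FC network.
print(fc_perception[0, 1] > fc_imagery[0, 1])  # True
```

Modality decoding based on FC patterns, as in the study, would vectorize the upper triangle of each trial's or run's FC matrix and feed those vectors to a classifier.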
Affiliation(s)
- Jin Gu: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Kexin Deng: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xiaoqi Luo: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Wanli Ma: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xuegang Tang: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
14. Contemori G, Oletto CM, Battaglini L, Bertamini M. On the relationship between foveal mask interference and mental imagery in peripheral object recognition. Proc Biol Sci 2024;291:20232867. PMID: 38471562. DOI: 10.1098/rspb.2023.2867.
Abstract
A delayed foveal mask affects the perception of peripheral stimuli. The effect is determined by the timing of the mask and by its similarity to the peripheral stimulus: a congruent mask enhances performance, while an incongruent one impairs it. It is hypothesized that foveal masks disrupt a feedback mechanism reaching the foveal cortex. This mechanism could be part of a broader circuit associated with mental imagery, but this hypothesis had not yet been tested. We investigated the link between mental imagery and foveal feedback by testing the relationship between performance fluctuations caused by the foveal mask, measured in terms of discriminability (d') and criterion (C), and scores from two questionnaires: one assessing mental imagery vividness (VVIQ) and another exploring object imagery, spatial imagery and verbal cognitive styles (OSIVQ). Contrary to our hypotheses, no significant correlations were found between VVIQ and the mask's impact on d' and C, and neither the object nor the spatial subscale of the OSIVQ correlated with the mask's impact. In conclusion, our findings do not substantiate a link between foveal feedback and mental imagery. Further investigation is needed to determine whether mask interference might emerge with more implicit measures of imagery.
Affiliation(s)
- Giulio Contemori
- Department of General Psychology, University of Padova, Padova, Italy
- Luca Battaglini
- Department of General Psychology, University of Padova, Padova, Italy
- Marco Bertamini
- Department of General Psychology, University of Padova, Padova, Italy
15
Bi Z, Li H, Tian L. Top-down generation of low-resolution representations improves visual perception and imagination. Neural Netw 2024; 171:440-456. [PMID: 38150870 DOI: 10.1016/j.neunet.2023.12.030] [Received: 03/25/2023] [Revised: 11/30/2023] [Accepted: 12/18/2023] [Indexed: 12/29/2023]
Abstract
Perception and imagination require top-down signals from high-level cortex to primary visual cortex (V1) to reconstruct or simulate the representations evoked bottom-up by seen images. Interestingly, top-down signals in V1 have lower spatial resolution than bottom-up representations. It is unclear why the brain uses low-resolution signals to reconstruct or simulate high-resolution representations. By modeling the top-down pathway of the visual system with the decoder of a variational auto-encoder (VAE), we reveal that low-resolution top-down signals can better reconstruct or simulate the information contained in the sparse activities of V1 simple cells, which facilitates perception and imagination. This advantage of low-resolution generation is related to helping high-level cortex form the geometry-respecting representations observed in experiments. Furthermore, we present two findings regarding this phenomenon in the context of AI-generated sketches, a style of drawing made of lines. First, we found that the quality of the generated sketches critically depends on the thickness of the lines: thin-line sketches are harder to generate than thick-line sketches. Second, we propose a technique to generate high-quality thin-line sketches: instead of directly using the original thin-line sketches, we train a VAE or GAN (generative adversarial network) on blurred sketches, and then infer the thin-line sketches from the VAE- or GAN-generated blurred sketches. Collectively, our work suggests that low-resolution top-down generation is a strategy the brain uses to improve visual perception and imagination, one that inspires new sketch-generation AI techniques.
Affiliation(s)
- Zedong Bi
- Lingang Laboratory, Shanghai 200031, China.
- Haoran Li
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Liang Tian
- Department of Physics, Hong Kong Baptist University, Hong Kong, China; Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China; Institute of Systems Medicine and Health Sciences, Hong Kong Baptist University, Hong Kong, China; State Key Laboratory of Environmental and Biological Analysis, Hong Kong Baptist University, Hong Kong, China.
16
Weber S, Christophel T, Görgen K, Soch J, Haynes J. Working memory signals in early visual cortex are present in weak and strong imagers. Hum Brain Mapp 2024; 45:e26590. [PMID: 38401134 PMCID: PMC10893972 DOI: 10.1002/hbm.26590] [Received: 09/18/2023] [Revised: 12/06/2023] [Accepted: 12/29/2023] [Indexed: 02/26/2024] Open
Abstract
It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery.
Affiliation(s)
- Simon Weber
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Thomas Christophel
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Kai Görgen
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Joram Soch
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Institute of Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Collaborative Research Center “Volition and Cognitive Control”, Technische Universität Dresden, Dresden, Germany
17
Koide-Majima N, Nishimoto S, Majima K. Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation. Neural Netw 2024; 170:349-363. [PMID: 38016230 DOI: 10.1016/j.neunet.2023.11.024] [Received: 06/10/2023] [Revised: 09/22/2023] [Accepted: 11/08/2023] [Indexed: 11/30/2023]
Abstract
Visual images observed by humans can be reconstructed from their brain activity. However, the visualization (externalization) of mental imagery is challenging. Only a few studies have reported successful visualization of mental imagery, and their visualizable images have been limited to specific domains such as human faces or alphabetical letters. Therefore, visualizing mental imagery for arbitrary natural images stands as a significant milestone. In this study, we achieved this by enhancing a previous method. Specifically, we demonstrated that the visual image reconstruction method proposed in the seminal study by Shen et al. (2019) heavily relied on low-level visual information decoded from the brain and could not efficiently utilize the semantic information that would be recruited during mental imagery. To address this limitation, we extended the previous method to a Bayesian estimation framework and introduced the assistance of semantic information into it. Our proposed framework successfully reconstructed both seen images (i.e., those observed by the human eye) and imagined images from brain activity. Quantitative evaluation showed that our framework could identify seen and imagined images highly accurately compared to the chance accuracy (seen: 90.7%, imagery: 75.6%, chance accuracy: 50.0%). In contrast, the previous method could only identify seen images (seen: 64.3%, imagery: 50.4%). These results suggest that our framework would provide a unique tool for directly investigating the subjective contents of the brain such as illusions, hallucinations, and dreams.
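For context, identification accuracies of the kind reported here are commonly computed pairwise: a reconstruction is scored correct when it is more similar (for example, by correlation) to its true image than to a distractor image, so chance is 50%. A generic sketch of that scoring scheme (illustrative only, not the authors' code; the function name and synthetic data are ours):

```python
import numpy as np


def identification_accuracy(recons: np.ndarray, targets: np.ndarray) -> float:
    """Pairwise identification accuracy.

    For each reconstruction, compare its correlation with the true target
    against its correlation with every other (distractor) target.
    Chance level is 0.5.
    """
    n = len(recons)
    wins, trials = 0, 0
    for i in range(n):
        r_true = np.corrcoef(recons[i], targets[i])[0, 1]
        for j in range(n):
            if j == i:
                continue
            r_dist = np.corrcoef(recons[i], targets[j])[0, 1]
            wins += r_true > r_dist
            trials += 1
    return wins / trials


# Synthetic demo: reconstructions = targets plus noise, so the true target
# should almost always win the pairwise comparison.
rng = np.random.default_rng(0)
targets = rng.standard_normal((5, 100))
recons = targets + 0.5 * rng.standard_normal((5, 100))
acc = identification_accuracy(recons, targets)
```

With feature vectors flattened from images, the same loop reproduces the "identification vs. chance" framing used to compare seen and imagined conditions.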
Affiliation(s)
- Naoko Koide-Majima
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka 565-0871, Japan; Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka 565-0871, Japan; Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan; Graduate School of Medicine, Osaka University, Osaka 565-0871, Japan
- Kei Majima
- Institute for Quantum Life Science, National Institutes for Quantum Science and Technology, Chiba 263-8555, Japan; JST PRESTO, Saitama 332-0012, Japan.
18
Shenyan O, Lisi M, Greenwood JA, Skipper JI, Dekker TM. Visual hallucinations induced by Ganzflicker and Ganzfeld differ in frequency, complexity, and content. Sci Rep 2024; 14:2353. [PMID: 38287084 PMCID: PMC10825158 DOI: 10.1038/s41598-024-52372-1] [Received: 08/22/2023] [Accepted: 01/17/2024] [Indexed: 01/31/2024] Open
Abstract
Visual hallucinations can be phenomenologically divided into those of a simple or complex nature. Both simple and complex hallucinations can occur in pathological and non-pathological states, and can also be induced experimentally by visual stimulation or deprivation, for example using a high-frequency, eyes-open flicker (Ganzflicker) and perceptual deprivation (Ganzfeld). Here we leverage the differences in visual stimulation that these two techniques involve to investigate the role of bottom-up and top-down processes in shifting the complexity of visual hallucinations, and to assess whether these techniques involve a shared underlying hallucinatory mechanism despite their differences. For each technique, we measured the frequency and complexity of the hallucinations produced, utilising button presses, retrospective drawing, interviews, and questionnaires. For both experimental techniques, simple hallucinations were more common than complex hallucinations. Crucially, we found that Ganzflicker was more effective than Ganzfeld at eliciting simple hallucinations, while complex hallucinations remained equivalent across the two conditions. As a result, the likelihood that an experienced hallucination was complex was higher during Ganzfeld. Despite these differences, we found a correlation between the frequency and total time spent hallucinating in Ganzflicker and Ganzfeld conditions, suggesting some shared mechanisms between the two methodologies. We attribute the tendency to experience frequent simple hallucinations in both conditions to a shared low-level core hallucinatory mechanism, such as excitability of visual cortex, potentially amplified in Ganzflicker compared to Ganzfeld due to heightened bottom-up input. The tendency to experience complex hallucinations, in contrast, may be related to top-down processes less affected by visual stimulation.
Affiliation(s)
- Oris Shenyan
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK.
- Institute of Ophthalmology, University College London, London, UK.
- Matteo Lisi
- Department of Psychology, Royal Holloway University, London, UK
- John A Greenwood
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Jeremy I Skipper
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Tessa M Dekker
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Institute of Ophthalmology, University College London, London, UK
19
Powell A, Sumnall H, Smith J, Kuiper R, Montgomery C. Recovery of neuropsychological function following abstinence from alcohol in adults diagnosed with an alcohol use disorder: Systematic review of longitudinal studies. PLoS One 2024; 19:e0296043. [PMID: 38166127 PMCID: PMC10760842 DOI: 10.1371/journal.pone.0296043] [Received: 06/21/2023] [Accepted: 12/05/2023] [Indexed: 01/04/2024] Open
Abstract
BACKGROUND Alcohol use disorders (AUD) are associated with structural and functional brain differences, including impairments in neuropsychological function; however, existing reviews (mostly of cross-sectional studies) are inconsistent with regard to the recovery of such functions following abstinence. Recovery is important, as these impairments are associated with treatment outcomes and quality of life. OBJECTIVE(S) To assess recovery of neuropsychological function following abstinence in individuals with a clinical AUD diagnosis. The secondary objective was to assess predictors of neuropsychological recovery in AUD. METHODS Following the preregistered protocol (PROSPERO: CRD42022308686), APA PsycInfo, EBSCO MEDLINE, CINAHL, and Web of Science Core Collection were searched for studies published between 1999 and 2022. Study reporting follows the Joanna Briggs Institute (JBI) Manual for Evidence Synthesis, and study quality was assessed using the JBI Checklist for Cohort Studies. Eligible studies were those with a longitudinal design that assessed neuropsychological recovery following abstinence from alcohol in adults with a clinical diagnosis of AUD. Studies were excluded if the participant group was defined by another or co-morbid condition/injury, or by relapse. Recovery was defined as function returning to 'normal' performance. RESULTS Sixteen studies (AUD n = 783, controls n = 390) were selected for narrative synthesis. Most functions demonstrated recovery within 6-12 months, including sub-domains within attention, executive function, perception, and memory, though basic processing speed and working memory updating/tracking recovered earlier. Additionally, verbal fluency was not impaired at baseline (while verbal function was not assessed against normal levels), and findings on the recovery of concept formation and reasoning were inconsistent. CONCLUSIONS These results provide evidence that recovery of most functions is possible. While the overall robustness of results was good, methodological limitations included the lack of control groups, of abstinence-confirmation methods beyond self-report, of description of (and control for) attrition, of statistical control for confounds, and of study durations long enough to capture change.
Affiliation(s)
- Anna Powell
- School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, United Kingdom
- Liverpool Centre for Alcohol Research, University of Liverpool, Liverpool, United Kingdom
- Harry Sumnall
- Liverpool Centre for Alcohol Research, University of Liverpool, Liverpool, United Kingdom
- Public Health Institute, Faculty of Health, Liverpool John Moores University, Liverpool, United Kingdom
- Jessica Smith
- Liverpool Centre for Alcohol Research, University of Liverpool, Liverpool, United Kingdom
- Public Health Institute, Faculty of Health, Liverpool John Moores University, Liverpool, United Kingdom
- Rebecca Kuiper
- School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, United Kingdom
- Liverpool Centre for Alcohol Research, University of Liverpool, Liverpool, United Kingdom
- Catharine Montgomery
- School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, United Kingdom
- Liverpool Centre for Alcohol Research, University of Liverpool, Liverpool, United Kingdom
20
Moriya J. Visual mental imagery of atypical color objects attracts attention to an imagery-matching object. Atten Percept Psychophys 2024; 86:49-61. [PMID: 37872433 DOI: 10.3758/s13414-023-02804-3] [Accepted: 10/01/2023] [Indexed: 10/25/2023]
Abstract
Mental imagery attracts attention to imagery-matching stimuli. However, it remains unknown whether a voluntarily imagined atypical color also attracts attention to an imagery-matching stimulus when the imagined stimuli are color-diagnostic objects, which are strongly associated with a typical color. This study investigated whether people can voluntarily imagine atypical colors of such objects and attend to imagery-matching stimuli. Participants in the imagery group were instructed to imagine an atypical color of black-and-white objects, either an instructed color or a voluntarily selected one, whereas participants in the control group were instructed to attend to the objects without any imagery instruction. Thereafter, they detected a color target in a visual search task. Results revealed that participants in the imagery group directed attention to the imagery-matching atypical color, not to the original color of the object, during search. Meanwhile, participants in the control group did not demonstrate any attentional guidance. These results suggest that voluntarily imagining an atypical color can attenuate the mental representation of the object's original color and shift attention to a stimulus that matches the imagery.
Affiliation(s)
- Jun Moriya
- Faculty of Sociology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka, Japan.
21
Yan Y, Zhan J, Garrod O, Cui X, Ince RAA, Schyns PG. Strength of predicted information content in the brain biases decision behavior. Curr Biol 2023; 33:5505-5514.e6. [PMID: 38065096 DOI: 10.1016/j.cub.2023.10.042] [Received: 08/22/2023] [Revised: 10/11/2023] [Accepted: 10/23/2023] [Indexed: 12/21/2023]
Abstract
Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization [1-17]. However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories [18-24], e.g., faces versus cars, which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
Affiliation(s)
- Yuening Yan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Jiayu Zhan
- School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Oliver Garrod
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Xuan Cui
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK.
22
Cabbai G, Brown CRH, Dance C, Simner J, Forster S. Mental imagery and visual attentional templates: A dissociation. Cortex 2023; 169:259-278. [PMID: 37967476 DOI: 10.1016/j.cortex.2023.09.014] [Received: 05/14/2023] [Revised: 08/10/2023] [Accepted: 09/26/2023] [Indexed: 11/17/2023]
Abstract
There is a growing interest in the relationship between mental images and attentional templates as both are considered pictorial representations that involve similar neural mechanisms. Here, we investigated the role of mental imagery in the automatic implementation of attentional templates and their effect on involuntary attention. We developed a novel version of the contingent capture paradigm designed to encourage the generation of a new template on each trial and measure contingent spatial capture by a template-matching visual feature (color). Participants were required to search at four different locations for a specific object indicated at the start of each trial. Immediately prior to the search display, color cues were presented surrounding the potential target locations, one of which matched the target color (e.g., red for strawberry). Across three experiments, our task induced a robust contingent capture effect, reflected by faster responses when the target appeared in the location previously occupied by the target-matching cue. Contrary to our predictions, this effect remained consistent regardless of self-reported individual differences in visual mental imagery (Experiment 1, N = 216) or trial-by-trial variation of voluntary imagery vividness (Experiment 2, N = 121). Moreover, contingent capture was observed even among aphantasic participants, who report no imagery (Experiment 3, N = 91). The magnitude of the effect was not reduced in aphantasics compared to a control sample of non-aphantasics, although the two groups reported substantial differences in their search strategy and exhibited differences in overall speed and accuracy. Our results hence establish a dissociation between the generation and implementation of attentional templates for a visual feature (color) and subjectively experienced imagery.
Affiliation(s)
- Giulia Cabbai
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom.
- Carla Dance
- School of Psychology, University of Sussex, Brighton, United Kingdom
- Julia Simner
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Sophie Forster
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
23
Pratts J, Pobric G, Yao B. Bridging phenomenology and neural mechanisms of inner speech: ALE meta-analysis on egocentricity and spontaneity in a dual-mechanistic framework. Neuroimage 2023; 282:120399. [PMID: 37827205 DOI: 10.1016/j.neuroimage.2023.120399] [Received: 05/22/2023] [Revised: 09/25/2023] [Accepted: 09/29/2023] [Indexed: 10/14/2023] Open
Abstract
The neural mechanisms of inner speech remain unclear despite its importance in a variety of cognitive processes and its implication in aberrant perceptions such as auditory verbal hallucinations. Previous research has proposed a corollary discharge model in which inner speech is a truncated form of overt speech, relying on speech production-related regions (e.g. left inferior frontal gyrus). This model does not fully capture the diverse phenomenology of inner speech and recent research suggesting alternative perception-related mechanisms of generation. Therefore, we present and test a framework in which inner speech can be generated by two separate mechanisms, depending on its phenomenological qualities: a corollary discharge mechanism relying on speech production regions and a perceptual simulation mechanism within speech perceptual regions. The results of the activation likelihood estimation meta-analysis examining inner speech studies support the idea that varieties of inner speech recruit different neural mechanisms.
Affiliation(s)
- Jaydan Pratts
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK
- Gorana Pobric
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK
- Bo Yao
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK; Department of Psychology, Fylde College, Lancaster University, UK.
24
Pace T, Koenig-Robert R, Pearson J. Different Mechanisms for Supporting Mental Imagery and Perceptual Representations: Modulation Versus Excitation. Psychol Sci 2023; 34:1229-1243. [PMID: 37782827 DOI: 10.1177/09567976231198435] [Indexed: 10/04/2023] Open
Abstract
Recent research suggests imagery is functionally equivalent to a weak form of visual perception. Here we report evidence across five independent experiments on adults that perception and imagery are supported by fundamentally different mechanisms: Whereas perceptual representations are largely formed via increases in excitatory activity, imagery representations are largely supported by modulating nonimagined content. We developed two behavioral techniques that allowed us to first put the visual system into a state of adaptation and then probe the additivity of perception and imagery. If imagery drives similar excitatory visual activity to perception, pairing imagery with perceptual adapters should increase the state of adaptation. Whereas pairing weak perception with adapters increased measures of adaptation, pairing imagery reversed their effects. Further experiments demonstrated that these nonadditive effects were due to imagery weakening representations of nonimagined content. Together these data provide empirical evidence that the brain uses categorically different mechanisms to represent imagery and perception.
Affiliation(s)
- Thomas Pace
- School of Psychology, University of New South Wales
- Joel Pearson
- School of Psychology, University of New South Wales
25
Cochrane BA, Uy R, Milliken B, Sun HJ. Imagined object files: Visual imagery produces partial repetition costs where perception does not. Atten Percept Psychophys 2023; 85:2588-2597. [PMID: 37258894 DOI: 10.3758/s13414-023-02733-1] [Accepted: 05/11/2023] [Indexed: 06/02/2023]
Abstract
The present study explored whether object (or event) files can be formed that integrate color imagery and perceptual location features. To assess this issue, a cue-target procedure was used whereby color imagery was cued to be generated at a particular location in space, which was then followed by a perceptual color discrimination task. Partial repetition costs (PRCs) were then measured by varying the overlap of the color and location features of the cue and target to evaluate whether an object/event file was formed. Robust PRCs were observed when imagery was generated at a location, supporting the idea that imagery and perception can be incorporated into a common event file. It was also revealed that the PRC effects for perceptual color cues were tenuous: they did not reach significance in the present study. Overall, the present study indicates that imagery can produce stronger binding effects than perception, offering important insights into the role that active engagement plays in the formation of object/event files.
Affiliation(s)
- Rocelyn Uy
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
- Bruce Milliken
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
- Hong-Jin Sun
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
26
Ibáñez A, Kühne K, Miklashevsky A, Monaco E, Muraki E, Ranzini M, Speed LJ, Tuena C. Ecological Meanings: A Consensus Paper on Individual Differences and Contextual Influences in Embodied Language. J Cogn 2023; 6:59. [PMID: 37841670 PMCID: PMC10573819 DOI: 10.5334/joc.228] [Received: 03/08/2022] [Accepted: 05/20/2022] [Indexed: 10/17/2023] Open
Abstract
Embodied theories of cognition consider many aspects of language and other cognitive domains as the result of sensory and motor processes. In this view, the appraisal and the use of concepts are based on mechanisms of simulation grounded on prior sensorimotor experiences. Even though these theories continue receiving attention and support, increasing evidence indicates the need to consider the flexible nature of the simulation process, and to accordingly refine embodied accounts. In this consensus paper, we discuss two potential sources of variability in experimental studies on embodiment of language: individual differences and context. Specifically, we show how factors contributing to individual differences may explain inconsistent findings in embodied language phenomena. These factors include sensorimotor or cultural experiences, imagery, context-related factors, and cognitive strategies. We also analyze the different contextual modulations, from single words to sentences and narratives, as well as the top-down and bottom-up influences. Similarly, we review recent efforts to include cultural and language diversity, aging, neurodegenerative diseases, and brain disorders, as well as bilingual evidence into the embodiment framework. We address the importance of considering individual differences and context in clinical studies to drive translational research more efficiently, and we indicate recommendations on how to correctly address these issues in future research. Systematically investigating individual differences and context may contribute to understanding the dynamic nature of simulation in language processes, refining embodied theories of cognition, and ultimately filling the gap between cognition in artificial experimental settings and cognition in the wild (i.e., in everyday life).
Collapse
Affiliation(s)
- Agustín Ibáñez
- Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibáñez, Santiago de Chile, Chile
- Cognitive Neuroscience Center (CNC), Universidad de San Andrés and CONICET, Buenos Aires, Argentina
- Global Brain Health Institute (GBHI), University of California San Francisco (UCSF), California, US
- Trinity College Dublin (TCD), Dublin, Ireland, IE
| | - Katharina Kühne
- Potsdam Embodied Cognition Group, Cognitive Sciences, University of Potsdam, Potsdam, DE
| | - Alex Miklashevsky
- Potsdam Embodied Cognition Group, Cognitive Sciences, University of Potsdam, Potsdam, DE
| | - Elisa Monaco
- Laboratory for Cognitive and Neurological Sciences, Department of Neuroscience and Movement Science, Faculty of Science and Medicine, University of Fribourg, CH
| | - Emiko Muraki
- Department of Psychology & Hotchkiss Brain Institute, University of Calgary, CA
| | | | | | - Cosimo Tuena
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, IT
| |
Collapse
|
27
|
Moyal R, Bhamani C, Edelman S. Revisiting the effects of configuration, predictability, and relevance on visual detection during interocular suppression. Cognition 2023; 238:105506. [PMID: 37300930 DOI: 10.1016/j.cognition.2023.105506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 05/17/2023] [Accepted: 05/29/2023] [Indexed: 06/12/2023]
Abstract
Statistical regularities and predictions can influence the earliest stages of visual processing. Studies examining their effects on detection, however, have yielded inconsistent results. In continuous flash suppression (CFS), where a static image projected to one eye is suppressed by a dynamic image presented to the other, the predictability of the suppressed signal may facilitate or delay detection. To identify the factors that differentiate these outcomes and dissociate the effects of expectation from those of behavioral relevance, we conducted three CFS experiments that addressed confounds related to the use of reaction time measures and complex images. In experiment 1, orientation recognition performance and visibility rates increased when a suppressed line segment completed a partial shape surrounding the CFS patch, demonstrating that valid configuration cues facilitate detection. In Experiment 2, however, predictive cues marginally affected visibility and did not modulate localization performance, challenging existing findings. In experiment 3, a relevance manipulation was introduced; participants pressed a key upon detecting lines of a particular orientation, ignoring the other possible orientation. Visibility and localization were enhanced for relevant orientations. Predictive cues modulated visibility, orientation recognition sensitivity, and response latencies, but not localization-an objective measure sensitive to partial breakthrough. Thus, while a consistent surround can strongly enhance detection during passive observation, predictive cueing primarily affects post-detection factors such as response readiness and recognition confidence. Relevance and predictability did not interact, suggesting that the contributions of these two processes to detection are mostly orthogonal.
Collapse
Affiliation(s)
- Roy Moyal
- Department of Psychology & Cognitive Science Program, Cornell University, Ithaca, NY, United States of America.
| | - Conrad Bhamani
- Department of Psychology & Cognitive Science Program, Cornell University, Ithaca, NY, United States of America
| | - Shimon Edelman
- Department of Psychology & Cognitive Science Program, Cornell University, Ithaca, NY, United States of America
| |
Collapse
|
28
|
Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. [PMID: 37621984 PMCID: PMC10445666 DOI: 10.1093/nc/niad018] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 07/03/2023] [Accepted: 07/14/2023] [Indexed: 08/26/2023] Open
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
Collapse
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
| | - Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia
| | - Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
| |
Collapse
|
29
|
Xu Z, Zhai Y, Kang Y. Mutual information measure of visual perception based on noisy spiking neural networks. Front Neurosci 2023; 17:1155362. [PMID: 37655008 PMCID: PMC10467273 DOI: 10.3389/fnins.2023.1155362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 06/06/2023] [Indexed: 09/02/2023] Open
Abstract
Note that images of low-illumination are weak aperiodic signals, while mutual information can be used as an effective measure for the shared information between the input stimulus and the output response of nonlinear systems, thus it is possible to develop novel visual perception algorithm based on the principle of aperiodic stochastic resonance within the frame of information theory. To confirm this, we reveal this phenomenon using the integrate-and-fire neural networks of neurons with noisy binary random signal as input first. And then, we propose an improved visual perception algorithm with the image mutual information as assessment index. The numerical experiences show that the target image can be picked up with more easiness by the maximal mutual information than by the minimum of natural image quality evaluation (NIQE), which is one of the most frequently used indexes. Moreover, the advantage of choosing quantile as spike threshold has also been confirmed. The improvement of this research should provide large convenience for potential applications including video tracking in environments of low illumination.
Collapse
Affiliation(s)
| | | | - Yanmei Kang
- School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China
| |
Collapse
|
30
|
Leoni J, Strada SC, Tanelli M, Proverbio AM. MIRACLE: MInd ReAding CLassification Engine. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3212-3222. [PMID: 37535483 DOI: 10.1109/tnsre.2023.3301507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/05/2023]
Abstract
Brain-computer interfaces (BCIs) have revolutionized the way humans interact with machines, particularly for patients with severe motor impairments. EEG-based BCIs have limited functionality due to the restricted pool of stimuli that they can distinguish, while those elaborating event-related potentials up to now employ paradigms that require the patient's perception of the eliciting stimulus. In this work, we propose MIRACLE: a novel BCI system that combines functional data analysis and machine-learning techniques to decode patients' minds from the elicited potentials. MIRACLE relies on a hierarchical ensemble classifier recognizing 10 different semantic categories of imagined stimuli. We validated MIRACLE on an extensive dataset collected from 20 volunteers, with both imagined and perceived stimuli, to compare the system performance on the two. Furthermore, we quantify the importance of each EEG channel in the decision-making process of the classifier, which can help reduce the number of electrodes required for data acquisition, enhancing patients' comfort.
Collapse
|
31
|
Liao MR, Grindell JD, Anderson BA. A comparison of mental imagery and perceptual cueing across domains of attention. Atten Percept Psychophys 2023; 85:1834-1845. [PMID: 37349626 DOI: 10.3758/s13414-023-02747-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/10/2023] [Indexed: 06/24/2023]
Abstract
Mental imagery and perceptual cues can influence subsequent visual search performance, but examination of this influence has been limited to low-level features like colors and shapes. The present study investigated how the two types of cues influence low-level visual search, visual search with realistic objects, and executive attention. On each trial, participants were either presented with a colored square or tasked with using mental imagery to generate a colored square that could match the target (valid trial) or distractor (invalid trial) in the search array that followed (Experiments 1 and 3). In a separate experiment, the colored square displayed or generated was replaced with a realistic object in a specific category that could appear as a target or distractor in the search array (Experiment 2). Although the displayed object was in the same category as an item in the search display, they were never a perfect match (e.g., jam drop cookie instead of chocolate chip). Our findings revealed that the facilitation of performance on valid trials compared with invalid trials was greater for perceptual cues than imagery cues for low-level features (Experiment 1), whereas the influence of these two types of cues was comparable in the context of realistic objects (Experiment 2) The influence of mental imagery appears not to extend to the resolution of conflict generated by color-word Stroop stimuli (Experiment 3). The present findings extend our understanding of how mental imagery influences the allocation of attention.
Collapse
Affiliation(s)
- Ming-Ray Liao
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA.
| | - James D Grindell
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA
| | - Brian A Anderson
- Department of Psychological and Brain Sciences, Texas A&M University, 4235 TAMU, College Station, TX, 77843-4235, USA
| |
Collapse
|
32
|
Janik RA. Aesthetics and neural network image representations. Sci Rep 2023; 13:11428. [PMID: 37454170 DOI: 10.1038/s41598-023-38443-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Accepted: 07/08/2023] [Indexed: 07/18/2023] Open
Abstract
We analyze the spaces of images encoded by generative neural networks of the BigGAN architecture. We find that generic multiplicative perturbations of neural network parameters away from the photo-realistic point often lead to networks generating images which appear as "artistic renditions" of the corresponding objects. This demonstrates an emergence of aesthetic properties directly from the structure of the photo-realistic visual environment as encoded in its neural network parametrization. Moreover, modifying a deep semantic part of the neural network leads to the appearance of symbolic visual representations. None of the considered networks had any access to images of human-made art.
Collapse
Affiliation(s)
- Romuald A Janik
- Institute of Theoretical Physics and Mark Kac Center for Complex Systems Research, Jagiellonian University, ul. Łojasiewicza 11, 30-348, Kraków, Poland.
| |
Collapse
|
33
|
Kwon S, Kim J, Kim T. Neuropsychological Activations and Networks While Performing Visual and Kinesthetic Motor Imagery. Brain Sci 2023; 13:983. [PMID: 37508915 PMCID: PMC10377687 DOI: 10.3390/brainsci13070983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 06/18/2023] [Accepted: 06/19/2023] [Indexed: 07/30/2023] Open
Abstract
This study aimed to answer the questions 'What are the neural networks and mechanisms involved in visual and kinesthetic motor imagery?', and 'Is part of cognitive processing included during visual and kinesthetic motor imagery?' by investigating the neurophysiological networks and activations during visual and kinesthetic motor imagery using motor imagery tasks (golf putting). The experiment was conducted with 19 healthy adults. Functional magnetic resonance imaging (fMRI) was used to examine neural activations and networks during visual and kinesthetic motor imagery using golf putting tasks. The findings of the analysis on cerebral activation patterns based on the two distinct types of motor imagery indicate that the posterior lobe, occipital lobe, and limbic lobe exhibited activation, and the right hemisphere was activated during the process of visual motor imagery. The activation of the temporal lobe and the parietal lobe were observed during the process of kinesthetic motor imagery. This study revealed that visual motor imagery elicited stronger activation in the right frontal lobe, whereas kinesthetic motor imagery resulted in greater activation in the left frontal lobe. It seems that kinesthetic motor imagery activates the primary somatosensory cortex (BA 2), the secondary somatosensory cortex (BA 5 and 7), and the temporal lobe areas and induces human sensibility. The present investigation evinced that the neural network and the regions of the brain that are activated exhibit variability contingent on the category of motor imagery.
Collapse
Affiliation(s)
- Sechang Kwon
- Department of Humanities & Arts, Korea Science Academy of KAIST, 105-47, Baegyanggwanmun-ro, Busanjin-gu, Busan 47162, Republic of Korea
- Global Institute for Talented Education, Korea Advanced Institute of Science and Technology (KAIST), 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
| | - Jingu Kim
- Department of Physical Education, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
| | - Teri Kim
- Institute of Sports Science, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
| |
Collapse
|
34
|
Wilson H, Golbabaee M, Proulx MJ, Charles S, O'Neill E. EEG-based BCI Dataset of Semantic Concepts for Imagination and Perception Tasks. Sci Data 2023; 10:386. [PMID: 37322034 PMCID: PMC10272218 DOI: 10.1038/s41597-023-02287-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 06/02/2023] [Indexed: 06/17/2023] Open
Abstract
Electroencephalography (EEG) is a widely-used neuroimaging technique in Brain Computer Interfaces (BCIs) due to its non-invasive nature, accessibility and high temporal resolution. A range of input representations has been explored for BCIs. The same semantic meaning can be conveyed in different representations, such as visual (orthographic and pictorial) and auditory (spoken words). These stimuli representations can be either imagined or perceived by the BCI user. In particular, there is a scarcity of existing open source EEG datasets for imagined visual content, and to our knowledge there are no open source EEG datasets for semantics captured through multiple sensory modalities for both perceived and imagined content. Here we present an open source multisensory imagination and perception dataset, with twelve participants, acquired with a 124 EEG channel system. The aim is for the dataset to be open for purposes such as BCI related decoding and for better understanding the neural mechanisms behind perception, imagination and across the sensory modalities when the semantic category is held constant.
Collapse
Affiliation(s)
- Holly Wilson
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK.
| | - Mohammad Golbabaee
- Department of Engineering Mathematics, University of Bristol, Bristol, BS8 1TW, UK
| | | | - Stephen Charles
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
| | - Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK.
| |
Collapse
|
35
|
Ramon C, Graichen U, Gargiulo P, Zanow F, Knösche TR, Haueisen J. Spatiotemporal phase slip patterns for visual evoked potentials, covert object naming tasks, and insight moments extracted from 256 channel EEG recordings. Front Integr Neurosci 2023; 17:1087976. [PMID: 37384237 PMCID: PMC10293627 DOI: 10.3389/fnint.2023.1087976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 05/19/2023] [Indexed: 06/30/2023] Open
Abstract
Phase slips arise from state transitions of the coordinated activity of cortical neurons which can be extracted from the EEG data. The phase slip rates (PSRs) were studied from the high-density (256 channel) EEG data, sampled at 16.384 kHz, of five adult subjects during covert visual object naming tasks. Artifact-free data from 29 trials were averaged for each subject. The analysis was performed to look for phase slips in the theta (4-7 Hz), alpha (7-12 Hz), beta (12-30 Hz), and low gamma (30-49 Hz) bands. The phase was calculated with the Hilbert transform, then unwrapped and detrended to look for phase slip rates in a 1.0 ms wide stepping window with a step size of 0.06 ms. The spatiotemporal plots of the PSRs were made by using a montage layout of 256 equidistant electrode positions. The spatiotemporal profiles of EEG and PSRs during the stimulus and the first second of the post-stimulus period were examined in detail to study the visual evoked potentials and different stages of visual object recognition in the visual, language, and memory areas. It was found that the activity areas of PSRs were different as compared with EEG activity areas during the stimulus and post-stimulus periods. Different stages of the insight moments during the covert object naming tasks were examined from PSRs and it was found to be about 512 ± 21 ms for the 'Eureka' moment. Overall, these results indicate that information about the cortical phase transitions can be derived from the measured EEG data and can be used in a complementary fashion to study the cognitive behavior of the brain.
Collapse
Affiliation(s)
- Ceon Ramon
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Regional Epilepsy Center, Harborview Medical Center, University of Washington, Seattle, WA, United States
| | - Uwe Graichen
- Department of Biostatistics and Data Science, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
| | - Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Reykjavik University, Reykjavik, Iceland
- Department of Science, Landspitali University Hospital, Reykjavik, Iceland
| | | | - Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Neurosciences, Leipzig, Germany
| | - Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
| |
Collapse
|
36
|
Stephan-Otto C, Núñez C, Lombardini F, Cambra-Martí MR, Ochoa S, Senior C, Brébion G. Neurocognitive bases of self-monitoring of inner speech in hallucination prone individuals. Sci Rep 2023; 13:6251. [PMID: 37069194 PMCID: PMC10110610 DOI: 10.1038/s41598-023-32042-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 03/20/2023] [Indexed: 04/19/2023] Open
Abstract
Verbal hallucinations in schizophrenia patients might be seen as internal verbal productions mistaken for perceptions as a result of over-salient inner speech and/or defective self-monitoring processes. Similar cognitive mechanisms might underpin verbal hallucination proneness in the general population. We investigated, in a non-clinical sample, the cerebral activity associated with verbal hallucinatory predisposition during false recognition of familiar words -assumed to stem from poor monitoring of inner speech-vs. uncommon words. Thirty-seven healthy participants underwent a verbal recognition task. High- and low-frequency words were presented outside the scanner. In the scanner, the participants were then required to recognize the target words among equivalent distractors. Results showed that verbal hallucination proneness was associated with higher rates of false recognition of high-frequency words. It was further associated with activation of language and decisional brain areas during false recognitions of low-, but not high-, frequency words, and with activation of a recollective brain area during correct recognitions of low-, but not high-, frequency words. The increased tendency to report familiar words as targets, along with a lack of activation of the language, recollective, and decisional brain areas necessary for their judgement, suggests failure in the self-monitoring of inner speech in verbal hallucination-prone individuals.
Collapse
Affiliation(s)
- Christian Stephan-Otto
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
| | - Christian Núñez
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
| | | | | | - Susana Ochoa
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
| | - Carl Senior
- School of Life & Health Sciences, Aston University, Birmingham, UK.
- University of Gibraltar, Gibraltar, UK.
| | - Gildas Brébion
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain.
- Parc Sanitari Sant Joan de Déu, Sant Boi de Llobregat, Spain.
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain.
| |
Collapse
|
37
|
Dijkstra N, Fleming SM. Subjective signal strength distinguishes reality from imagination. Nat Commun 2023; 14:1627. [PMID: 36959279 PMCID: PMC10036541 DOI: 10.1038/s41467-023-37322-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 03/09/2023] [Indexed: 03/25/2023] Open
Abstract
Humans are voracious imaginers, with internal simulations supporting memory, planning and decision-making. Because the neural mechanisms supporting imagery overlap with those supporting perception, a foundational question is how reality and imagination are kept apart. One possibility is that the intention to imagine is used to identify and discount self-generated signals during imagery. Alternatively, because internally generated signals are generally weaker, sensory strength is used to index reality. Traditional psychology experiments struggle to investigate this issue as subjects can rapidly learn that real stimuli are in play. Here, we combined one-trial-per-participant psychophysics with computational modelling and neuroimaging to show that imagined and perceived signals are in fact intermixed, with judgments of reality being determined by whether this intermixed signal is strong enough to cross a reality threshold. A consequence of this account is that when virtual or imagined signals are strong enough, they become subjectively indistinguishable from reality.
Collapse
Affiliation(s)
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, London, UK.
| | - Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, University College London, London, UK
- Department of Experimental Psychology, University College London, London, UK
| |
Collapse
|
38
|
Proverbio AM, Tacchini M, Jiang K. What do you have in mind? ERP markers of visual and auditory imagery. Brain Cogn 2023; 166:105954. [PMID: 36657242 DOI: 10.1016/j.bandc.2023.105954] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 01/06/2023] [Accepted: 01/07/2023] [Indexed: 01/19/2023]
Abstract
This study aimed to investigate the psychophysiological markers of imagery processes through EEG/ERP recordings. Visual and auditory stimuli representing 10 different semantic categories were shown to 30 healthy participants. After a given interval and prompted by a light signal, participants were asked to activate a mental image corresponding to the semantic category for recording synchronized electrical potentials. Unprecedented electrophysiological markers of imagination were recorded in the absence of sensory stimulation. The following peaks were identified at specific scalp sites and latencies, during imagination of infants (centroparietal positivity, CPP, and late CPP), human faces (anterior negativity, AN), animals (anterior positivity, AP), music (P300-like), speech (N400-like), affective vocalizations (P2-like) and sensory (visual vs auditory) modality (PN300). Overall, perception and imagery conditions shared some common electro/cortical markers, but during imagery the category-dependent modulation of ERPs was long latency and more anterior, with respect to the perceptual condition. These ERP markers might be precious tools for BCI systems (pattern recognition, classification, or A.I. algorithms) applied to patients affected by consciousness disorders (e.g., in a vegetative or comatose state) or locked-in-patients (e.g., spinal or SLA patients).
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy.
| | - Marta Tacchini
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy
| | - Kaijun Jiang
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy; Department of Psychology, University of Jyväskylä, Finland
| |
Collapse
|
39
|
Aseyev N. Perception of color in primates: A conceptual color neurons hypothesis. Biosystems 2023; 225:104867. [PMID: 36792004 DOI: 10.1016/j.biosystems.2023.104867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 02/12/2023] [Accepted: 02/12/2023] [Indexed: 02/16/2023]
Abstract
Perception of color by humans and other primates is a complex problem, studied by neurophysiology, psychophysiology, psycholinguistics, and even philosophy. Being mostly trichromats, simian primates have three types of opsin proteins, expressed in cone neurons in the eye, which allow for the sensing of color as the physical wavelength of light. Further, in neural networks of the retina, the coding principle changes from three types of sensor proteins to two opponent channels: activity of one type of neuron encode the evolutionarily ancient blue-yellow axis of color stimuli, and another more recent evolutionary channel, encoding the axis of red-green color stimuli. Both color channels are distinctive in neural organization at all levels from the eye to the neocortex, where it is thought that the perception of color (as philosophical qualia) emerges from the activity of some neuron ensembles. Here, using data from neurophysiology as a starting point, we propose a hypothesis on how the perception of color can be encoded in the activity of certain neurons in the neocortex. These conceptual neurons, herein referred to as 'color neurons', code only the hue of the color of visual stimulus, similar to place cells and number neurons, already described in primate brains. A case study with preliminary, but direct, evidence for existing conceptual color neurons in the human brain was published in 2008. We predict that the upcoming studies in non-human primates will be more extensive and provide a more detailed description of conceptual color neurons.
Collapse
Affiliation(s)
- Nikolay Aseyev
- Institute Higher Nervous Activity and Neurophysiology, RAS, Moscow, 117485, Butlerova, 5A, Russian Federation.
| |
Collapse
|
40
|
Blazhenkova O, Kanero J, Duman I, Umitli O. Read and Imagine: Visual Imagery Experience Evoked by First versus Second Language. Psychol Rep 2023:332941231158059. [PMID: 36799268 DOI: 10.1177/00332941231158059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
Abstract
This research examined visual imagery evoked during reading in relation to language. Following the previous reports that bilinguals experience less vivid imagery in their second language (L2) than first language (L1), we studied how visual imagery is affected by the language in use, characteristics of text, and readers' individual differences. In L1 and L2, 382 bilinguals read object texts describing pictorial properties of objects such as color and shape, spatial texts describing spatial properties such as spatial relations and locations, and excerpts from novels. They rated imagery vividness after each segment and the whole text, and rated the specific imagery characteristics (e.g., color, spatial relations). Regardless of the types of text or the timing of rating, the vividness of imagery was higher in L1 than in L2. However, English proficiency also predicted vividness in L2. Further, vividness in the object and spatial trials were predicted by the individual's object and spatial imagery skills. The effect of language on imagery depends on the text nature and difficulty, when and how vividness is measured, and individual differences.
Collapse
Affiliation(s)
- Olesya Blazhenkova
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Türkiye
| | - Junko Kanero
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Türkiye
| | - Irem Duman
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Türkiye
| | - Ozgenur Umitli
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Türkiye
| |
Collapse
|
41
|
Zhou L, Wu B, Deng Y, Liu M. Brain activation and individual differences of emotional perception and imagery in healthy adults: A functional near-infrared spectroscopy (fNIRS) study. Neurosci Lett 2023; 797:137072. [PMID: 36642240 DOI: 10.1016/j.neulet.2023.137072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 10/28/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
The current study investigated brain activation and individual differences in the perception and imagery of sad pictures versus happy and neutral pictures. Sixty-eight healthy adults were instructed to view and visualize sad, happy, and neutral pictures during 64-channel functional near-infrared spectroscopy (fNIRS) recording. The results indicated that emotional perception evoked increased occipital activation, while emotional imagery involved increased activation in the bilateral prefrontal and parietal cortex. Sad pictures evoked lower activation in the occipital and prefrontal cortex than happy and neutral pictures did. For women, imagery activation was greater than perception activation in the right parietal cortex. Additionally, participants' self-rated imagery vividness was positively correlated with occipital activation during happy imagery, and trait rumination was negatively correlated with occipital activation during perception. The findings suggest that emotional perception may involve bottom-up sensory input, while emotional imagery may involve top-down cognitive processes. Healthy individuals appear to engage fewer cognitive resources for sad perception and imagery. Moreover, these observations could help establish fNIRS assessment as an objective tool for monitoring emotional status on an individual trait basis.
Collapse
Affiliation(s)
- Li Zhou
- Department of Psychology, Jiangxi Normal University, Nanchang 330022, China; Center of Mental Health Education and Research, Jiangxi Normal University, Nanchang 330022, China
| | - Biyun Wu
- Department of Psychology, Jiangxi Normal University, Nanchang 330022, China; Center of Mental Health Education and Research, Jiangxi Normal University, Nanchang 330022, China
| | - Yuanyuan Deng
- Department of Psychology, Jiangxi Normal University, Nanchang 330022, China; Center of Mental Health Education and Research, Jiangxi Normal University, Nanchang 330022, China
| | - Mingfan Liu
- Department of Psychology, Jiangxi Normal University, Nanchang 330022, China; Center of Mental Health Education and Research, Jiangxi Normal University, Nanchang 330022, China.
| |
Collapse
|
42
|
Comparative effects of hypnotic suggestion and imagery instruction on bodily awareness. Conscious Cogn 2023; 108:103473. [PMID: 36706563 DOI: 10.1016/j.concog.2023.103473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 10/31/2022] [Accepted: 01/12/2023] [Indexed: 01/26/2023]
Abstract
Bodily awareness is informed by both sensory data and prior knowledge. Although misleading sensory signals have been repeatedly shown to affect bodily awareness, only scant attention has been given to the influence of cognitive variables. Hypnotic suggestion has recently been shown to impact visuospatial and sensorimotor representations of body-part size, although the mechanisms subserving this effect are yet to be identified. Mental imagery might play a causal or facilitative role in this effect, as it has been shown to influence body awareness in previous studies. Nonetheless, current views ascribe only an epiphenomenal role to imagery in the implementation of hypnotic suggestions. This study compared the effects of hypnotic suggestion and imagery instruction for influencing the visuospatial and sensorimotor aspects of body-size representation. Both experimental manipulations produced significant increases (elongation) in both representations compared to baseline, although the effects were larger in the hypnotic suggestion condition. The effects of both manipulations were highly correlated across participants, suggesting overlapping mechanisms. Self-reports suggested that the use of voluntary imagery did not significantly contribute to the efficacy of either manipulation. Rather, top-down effects on body representations seem to be partly driven by response expectancies, spontaneous imagery, and hypnotic suggestibility in both conditions. These results are in line with current theories of suggestion and raise fundamental questions regarding the mechanisms driving the influence of cognition on body representations.
Collapse
|
43
|
Blank H, Alink A, Büchel C. Multivariate functional neuroimaging analyses reveal that strength-dependent face expectations are represented in higher-level face-identity areas. Commun Biol 2023; 6:135. [PMID: 36725984 PMCID: PMC9892564 DOI: 10.1038/s42003-023-04508-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 01/19/2023] [Indexed: 02/03/2023] Open
Abstract
Perception is an active inference in which prior expectations are combined with sensory input. It is still unclear how the strength of prior expectations is represented in the human brain. The strength, or precision, of a prior could be represented with its content, potentially in higher-level sensory areas. We used multivariate analyses of functional magnetic resonance imaging (fMRI) data to test whether expectation strength is represented together with the expected face in high-level face-sensitive regions. Participants were trained to associate images of scenes with subsequently presented images of different faces. Each scene predicted three faces, each with either low, intermediate, or high probability. We found that anticipation enhances the similarity of response patterns in the face-sensitive anterior temporal lobe to response patterns specifically associated with the image of the expected face. In contrast, during face presentation, activity increased for unexpected faces in a typical prediction error network, containing areas such as the caudate and the insula. Our findings show that strength-dependent face expectations are represented in higher-level face-identity areas, supporting hierarchical theories of predictive processing according to which higher-level sensory regions represent weighted priors.
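The core analysis here rests on comparing a multivoxel anticipation-period pattern against face-specific response templates. A minimal sketch of that pattern-similarity logic is given below; this is not the authors' pipeline, and the data shapes, noise level, and use of Pearson correlation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical response templates for three faces (e.g. from a localizer).
templates = {face: rng.standard_normal(n_voxels) for face in ("A", "B", "C")}

# Simulated anticipation-period pattern: the expected face's template plus noise.
anticipation = templates["A"] + 0.5 * rng.standard_normal(n_voxels)

def pattern_similarity(x, y):
    """Pearson correlation between two activity patterns."""
    return float(np.corrcoef(x, y)[0, 1])

# The anticipation pattern should resemble the expected face's template most.
similarities = {face: pattern_similarity(anticipation, t)
                for face, t in templates.items()}
expected = max(similarities, key=similarities.get)
```

On this toy data, the similarity to template "A" dominates, mirroring the reported finding that anticipation makes activity patterns resemble the expected face's pattern.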
Collapse
Affiliation(s)
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Arjen Alink
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Christian Büchel
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| |
Collapse
|
44
|
Dupont W, Papaxanthis C, Madden-Lombardi C, Lebon F. Imagining and reading actions: Towards similar motor representations. Heliyon 2023; 9:e13426. [PMID: 36816230 PMCID: PMC9932708 DOI: 10.1016/j.heliyon.2023.e13426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 12/27/2022] [Accepted: 01/30/2023] [Indexed: 02/04/2023] Open
Abstract
While action language and motor imagery both engage the motor system, determining whether these two processes indeed share the same motor representations would contribute to better understanding their underlying mechanisms. We conducted two experiments probing the mutual influence of these two processes. In Exp.1, hand-action verbs were presented subliminally, and participants (n = 36) selected the verb they thought they perceived from two alternatives. When congruent actions were imagined prior to this task, accuracy significantly increased, i.e. participants were better able to "see" the subliminal verbs. In Exp.2, participants (n = 19) imagined hand flexion or extension, while corticospinal excitability was measured via transcranial magnetic stimulation. Corticospinal excitability was modulated by action verbs subliminally presented prior to imagery. Specifically, the typical increase observed during imagery was suppressed after presentation of incongruent action verbs. This mutual influence of action language and motor imagery, both at behavioral and neurophysiological levels, suggests overlapping motor representations.
Collapse
Affiliation(s)
- Dupont W
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
| | - Papaxanthis C
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
| | - Madden-Lombardi C
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Centre National de la Recherche Scientifique (CNRS), France
| | - Lebon F
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Institut Universitaire de France (IUF), Paris, France
| |
Collapse
|
45
|
Bazgir B, Shamseddini A, Hogg JA, Ghadiri F, Bahmani M, Diekfuss JA. Is cognitive control of perception and action via attentional focus moderated by motor imagery? BMC Psychol 2023; 11:12. [PMID: 36647147 PMCID: PMC9841651 DOI: 10.1186/s40359-023-01047-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 01/10/2023] [Indexed: 01/18/2023] Open
Abstract
Motor imagery (MI) has emerged as an individual factor that may modulate the effects of attentional focus on motor skill performance. In this study, we investigated whether global MI, as well as its components (i.e., kinesthetic MI, internal visual MI, and external visual MI), moderates the effect of attentional focus on performance in a group of ninety-two young adult novice air-pistol shooters (age: M = 21.87, SD = 2.54). After completing the movement imagery questionnaire-3 (MIQ-3), participants completed a pistol shooting experiment under three attentional focus conditions: (1) a no-instruction control condition, (2) an internal focus instruction condition, and (3) an external focus instruction condition. Shot accuracy, performance time, and aiming trace speed (i.e., stability of hold, or weapon stability) were measured as the performance variables. Results revealed that shot accuracy was significantly poorer in the internal focus condition than in the control condition. In addition, performance time was significantly higher in the external focus condition than in both the control and internal conditions. However, neither global MI nor its subscales moderated the effects of attentional focus on performance. This study supports the importance of attentional focus for perceptual and motor performance, yet global MI and its modalities/perspectives did not moderate pistol shooting performance. These findings suggest that perception and action are cognitively controlled by attentional mechanisms, but not by motor imagery. Future research with complementary assessment modalities is warranted to extend the present findings.
Collapse
Affiliation(s)
- Behzad Bazgir
- Exercise Physiology Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
| | - Alireza Shamseddini
- Exercise Physiology Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
| | - Jennifer A. Hogg
- Department of Health and Human Performance, The University of Tennessee Chattanooga, Chattanooga, TN, USA
| | - Farhad Ghadiri
- Department of Motor Behavior, Kharazmi University, Tehran, Iran
| | - Moslem Bahmani
- Exercise Physiology Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran; Department of Motor Behavior, Kharazmi University, Tehran, Iran
| | - Jed A. Diekfuss
- Emory Sports Performance And Research Center (SPARC), Flowery Branch, GA, USA; Emory Sports Medicine Center, Atlanta, GA, USA; Department of Orthopaedics, Emory University School of Medicine, Atlanta, GA, USA
| |
Collapse
|
46
|
Ritz L, Segobin S, Laniepce A, Lannuzel C, Boudehent C, Vabret F, Urso L, Pitel AL, Beaunieux H. Structural brain substrates of the deficits observed on the BEARNI test in alcohol use disorder and Korsakoff's syndrome. J Neurosci Res 2023; 101:130-142. [PMID: 36200527 DOI: 10.1002/jnr.25132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 09/15/2022] [Accepted: 09/22/2022] [Indexed: 11/10/2022]
Abstract
Chronic and excessive alcohol consumption can result in alcohol use disorder (AUD) without neurological complications and, when combined with thiamine deficiency, in Korsakoff's syndrome (KS). These two clinical forms are accompanied by widespread structural brain damage in both the fronto-cerebellar circuit (FCC) and the Papez circuit (PC), as well as in the parietal cortex, resulting in cognitive and motor deficits. BEARNI is a screening tool specifically designed to detect neuropsychological impairments in AUD. However, the sensitivity of this tool to the structural brain damage of AUD and KS patients remains unknown. Eighteen KS patients, 47 AUD patients and 27 healthy controls (HC) underwent the BEARNI test and a 3T MRI examination. Multiple regression analyses conducted between gray matter (GM) density and performance on each BEARNI subtest revealed correlations with regions included in the FCC, PC, thalamus and posterior cortex (precuneus and calcarine regions). All these brain regions were altered in KS compared to HC, in agreement with the cognitive deficits observed in the corresponding BEARNI subtests. The comparison of GM density between KS and AUD in several nodes of the FCC and calcarine regions revealed that they were atrophied to the same extent, suggesting that BEARNI is sensitive to the severity of alcohol-related GM abnormalities. Within the PC, the density of the cingulate cortex and thalamus, which correlated with the memory and fluency subscores, was smaller in KS than in AUD, suggesting that BEARNI is sensitive to the specific brain abnormalities occurring in KS.
Collapse
Affiliation(s)
- Ludivine Ritz
- Laboratoire de Psychologie Caen Normandie (LPCN, EA 7452), Pôle Santé, Maladies, Handicaps - MRSH (USR 3486, CNRS-UNICAEN), Normandie Université, UNICAEN, Caen, France
| | - Shailendra Segobin
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France
| | - Alice Laniepce
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France
| | - Coralie Lannuzel
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France; Service d'Addictologie, Centre Hospitalier Universitaire de Caen, Caen, France
| | - Céline Boudehent
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France; Service d'Addictologie, Centre Hospitalier Universitaire de Caen, Caen, France
| | - François Vabret
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France; Service d'Addictologie, Centre Hospitalier Universitaire de Caen, Caen, France
| | - Laurent Urso
- Service d'Addictologie, Centre Hospitalier Roubaix, Roubaix, France
| | - Anne Lise Pitel
- EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, PSL Research University, Normandie Université, Caen, France; INSERM, PhIND "Physiopathology and Imaging of Neurological Disorders", Institut Blood and Brain @ Caen-Normandie, Cyceron, Normandie Université, UNICAEN, Caen, France; Institut Universitaire de France (IUF), Paris, France
| | - Hélène Beaunieux
- Laboratoire de Psychologie Caen Normandie (LPCN, EA 7452), Pôle Santé, Maladies, Handicaps - MRSH (USR 3486, CNRS-UNICAEN), Normandie Université, UNICAEN, Caen, France
| |
Collapse
|
47
|
Corriveau A, Kidder A, Teichmann L, Wardle SG, Baker CI. Sustained neural representations of personally familiar people and places during cued recall. Cortex 2023; 158:71-82. [PMID: 36459788 PMCID: PMC9840701 DOI: 10.1016/j.cortex.2022.08.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/28/2022] [Accepted: 08/29/2022] [Indexed: 01/18/2023]
Abstract
The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.
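The time-resolved MVPA described above (training a classifier on sensor patterns at each time point to track when recalled content becomes decodable) can be illustrated with a toy decoder. The simulated data, channel count, and logistic-regression classifier below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 80, 30, 50
labels = np.repeat([0, 1], n_trials // 2)  # e.g. recalled person vs. place

# Simulated sensor data; a class difference is injected from time index 20 on,
# mimicking a representation that emerges some time after the recall cue.
X = rng.standard_normal((n_trials, n_channels, n_times))
X[labels == 1, :5, 20:] += 0.8

# Decode the recalled category at each time point with 5-fold cross-validation.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
```

Accuracy hovers near chance before the injected effect and rises afterwards, which is the signature used to estimate when a self-generated representation develops.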
Collapse
Affiliation(s)
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychology, The University of Chicago, Chicago, IL 60637, USA.
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| |
Collapse
|
48
|
Mental imagery of nature induces positive psychological effects. Curr Psychol 2022. [DOI: 10.1007/s12144-022-04088-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Exposure to natural environments promotes positive psychological effects. Experimental studies on this issue typically have not been able to distinguish the contributions of top-down processes from stimulus-driven bottom-up processing. We tested in an online study whether mental imagery (top-down processing) of restorative natural environments would produce positive psychological effects, as compared with restorative built and non-restorative urban environments. The participants (n = 70) from two countries (Finland and Norway) imagined being present in different environments for 30 s, after which they rated their subjective experiences relating to vividness of imagery, relaxation, emotional arousal, valence (positivity vs. negativity) of emotions, and mental effort. In addition, a psychometric scale measuring vividness of imagination, a scale measuring nature connectedness, and a questionnaire measuring preference for the imagined environments were filled in. Imagery of natural environments elicited stronger positive emotional valence and more relaxation than imagery of built and urban environments. Nature connectedness and preference moderated these effects, but they did not fully explain the affective benefits of nature. Scores on the psychometric imagery scale were associated in a consistent way with the subjective ratings in the imagery task, suggesting that the participants performed attentively and reported their subjective experiences honestly. We conclude that top-down factors play a key role in the psychological effects of nature. A practical implication of the findings is that inclusion of natural elements in imagery-based interventions may help to increase positive affective states.
Collapse
|
49
|
Evidence for predictions established by phantom sound. Neuroimage 2022; 264:119766. [PMID: 36435344 DOI: 10.1016/j.neuroimage.2022.119766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 08/24/2022] [Accepted: 11/22/2022] [Indexed: 11/24/2022] Open
Abstract
Predictions, the bridge between the internal and external worlds, are established by prior experience and updated by sensory stimuli. Responses to expected but omitted stimuli, known as omission responses, can break the one-to-one stimulus-response mapping and expose predictions established by the preceding stimulus build-up. While research into exogenous predictions (driven by external stimuli) is often reported, research into endogenous predictions (driven by internal percepts) is rare in the literature. Here, we report evidence for endogenous predictions established by the Zwicker tone illusion, a phantom pure-tone-like auditory percept following notched noise. We found that MMN, P300, and theta oscillations could be recorded using an omission paradigm in subjects who can perceive the Zwicker tone illusion, but not in those who cannot. The MMN and P300 responses relied on attention, but theta oscillations did not. In-depth analysis shows that the increase in single-trial theta power, including total and induced theta, with the endogenous prediction is lateralized to left frontal brain areas. Our study shows that the brain automatically analyzes internal percepts, progressively establishes predictions, and yields prediction errors in the left frontal region when a violation occurs.
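The distinction between total and induced theta power mentioned above can be sketched numerically: total power averages single-trial power spectra (so non-phase-locked activity survives), while evoked power is computed on the trial average (so it cancels); induced power is their difference. The simulated 6 Hz bursts, wavelet parameters, and trial counts below are illustrative assumptions, not the study's analysis settings.

```python
import numpy as np

fs = 250.0
n_trials, n_samples = 40, 500
rng = np.random.default_rng(2)
t = np.arange(n_samples) / fs

# Simulated single-trial EEG: a 6 Hz burst with random phase per trial
# (non-phase-locked, i.e. induced activity) on top of noise.
trials = np.array([
    np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
    + 0.5 * rng.standard_normal(n_samples)
    for _ in range(n_trials)
])

# Complex Morlet wavelet centered on 6 Hz (theta band).
f0, n_cycles = 6.0, 5
wt = np.arange(-0.5, 0.5, 1 / fs)
sigma = n_cycles / (2 * np.pi * f0)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2 * sigma**2))

def theta_power(signal):
    """Power envelope at 6 Hz via convolution with the Morlet wavelet."""
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

total = np.mean([theta_power(tr) for tr in trials], axis=0)  # avg of single-trial power
evoked = theta_power(trials.mean(axis=0))                    # power of the trial average
induced = total - evoked
```

Because the simulated bursts have random phase, they largely cancel in the trial average: evoked power stays small while total (and hence induced) power remains high, which is why single-trial measures can reveal activity that averaging would hide.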
Collapse
|
50
|
Park HD, Piton T, Kannape OA, Duncan NW, Lee KY, Lane TJ, Blanke O. Breathing is coupled with voluntary initiation of mental imagery. Neuroimage 2022; 264:119685. [PMID: 36252914 DOI: 10.1016/j.neuroimage.2022.119685] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 10/03/2022] [Accepted: 10/13/2022] [Indexed: 11/09/2022] Open
Abstract
Previous research has suggested that bodily signals from internal organs are associated with diverse cortical and subcortical processes involved in sensory-motor functions, beyond homeostatic reflexes. For instance, a recent study demonstrated that the preparation and execution of voluntary actions, as well as their underlying neural activity, are coupled with the breathing cycle. In the current study, we investigated whether such breathing-action coupling is limited to voluntary motor action or whether it is also present for mental actions not involving any overt bodily movement. To answer this question, we recorded electroencephalography (EEG), electromyography (EMG), and respiratory signals while participants performed a voluntary action paradigm comprising self-initiated motor execution (ME), motor imagery (MI), and visual imagery (VI) tasks. We observed that the voluntary initiation of ME, MI, and VI is similarly coupled with the respiration phase. In addition, EEG analysis revealed the existence of readiness potential (RP) waveforms in all three tasks (i.e., ME, MI, VI), as well as a coupling between the RP amplitude and the respiratory phase. Our findings show that the voluntary initiation of both imagined and overt action is coupled with respiration, and further suggest that the breathing system is involved in preparatory processes of voluntary action by contributing to the temporal decision of when to initiate the action plan, regardless of whether this culminates in overt movement.
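A common way to test the kind of action-respiration phase coupling described here is to extract the instantaneous breathing phase with a Hilbert transform and summarize action-onset phases with a resultant-vector length (as in a Rayleigh test). The idealized sine-wave breathing trace and onset times below are illustrative assumptions, not the study's data or exact method.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                              # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)            # 60 s recording
resp = np.sin(2 * np.pi * 0.25 * t)     # idealized 0.25 Hz breathing trace

# Instantaneous respiration phase in [-pi, pi] via the analytic signal.
phase = np.angle(hilbert(resp))

# Simulated action-onset samples, here deliberately placed one breathing
# cycle (4 s = 400 samples) apart, i.e. perfectly phase-locked.
onsets = np.array([100, 500, 900, 1300, 1700])
onset_phases = phase[onsets]

# Resultant-vector length: 1 = perfect phase locking, 0 = uniform phases.
R = np.abs(np.mean(np.exp(1j * onset_phases)))
```

With onsets locked to the breathing cycle, R approaches 1; onsets scattered uniformly over the cycle would drive R toward 0, so R quantifies how strongly initiation is coupled to respiration.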
Collapse
Affiliation(s)
- Hyeong-Dong Park
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan; Brain and Consciousness Research Centre, Shuang-Ho Hospital, New Taipei City, Taiwan.
| | - Timothy Piton
- Laboratory of Cognitive Neuroscience, Neuro-X Institute and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
| | - Oliver A Kannape
- Laboratory of Cognitive Neuroscience, Neuro-X Institute and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
| | - Niall W Duncan
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan; Brain and Consciousness Research Centre, Shuang-Ho Hospital, New Taipei City, Taiwan
| | - Kang-Yun Lee
- Division of Pulmonary Medicine, Department of Internal Medicine, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan
| | - Timothy J Lane
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan; Brain and Consciousness Research Centre, Shuang-Ho Hospital, New Taipei City, Taiwan; Institute of European and American Studies, Academia Sinica, Taipei, Taiwan
| | - Olaf Blanke
- Laboratory of Cognitive Neuroscience, Neuro-X Institute and Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Department of Clinical Neurosciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|