1
Delhaye E, Besson G, Bahri MA, Bastin C. Object fine-grained discrimination as a sensitive cognitive marker of transentorhinal integrity. Commun Biol 2025; 8:800. PMID: 40415135. DOI: 10.1038/s42003-025-08201-w.
Abstract
The transentorhinal cortex (tErC) is one of the first regions affected by Alzheimer's disease (AD), often showing changes before clinical symptoms appear. Understanding its role in cognition is key to detecting early cognitive impairments in AD. This study tested the hypothesis that the tErC supports fine-grained representations of unique individual objects, in a manner sensitive to the granularity of the discrimination demanded, influencing both perceptual and mnemonic functions. We examined the tErC's role in object versus scene discrimination, using objective (based on a pretrained convolutional neural network, CNN) and subjective (human-rated) measures of visual similarity. Our results show that the structural integrity of the tErC is specifically related to sensitivity to visual similarity for objects, but not for scenes. Importantly, this relationship depends on how visual similarity is measured: it appears only when using CNN visual similarity measures in perceptual discrimination, and solely when using subjective similarity ratings in mnemonic discrimination. Furthermore, in mnemonic discrimination, object sensitivity to visual similarity was specifically associated with the integrity of tErC-BA36 connectivity, only when similarity was computed from subjective ratings. Altogether, these findings suggest that discrimination sensitivity to object visual similarity may represent a specific marker of tErC integrity, once the type of similarity measure is taken into account.
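The objective, CNN-based similarity measure mentioned in this abstract can be illustrated with a short sketch: extract feature vectors for two object images from a pretrained network and take their cosine similarity. This is a minimal sketch under assumed tools (a torchvision ResNet-18 backbone and placeholder image files), not the authors' actual network or pipeline.

```python
import torch
from torchvision import models
from PIL import Image

# Minimal sketch of a CNN-derived visual similarity between two object images.
# ResNet-18 and the image paths are illustrative assumptions, not the network
# or stimuli used in the study.
weights = models.ResNet18_Weights.DEFAULT
cnn = models.resnet18(weights=weights)
cnn.fc = torch.nn.Identity()        # drop the classifier head, keep 512-d features
cnn.eval()
preprocess = weights.transforms()   # resize / crop / normalize for this backbone

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0)

sim = torch.nn.functional.cosine_similarity(
    embed("object_a.png"), embed("object_b.png"), dim=0
)
print(f"CNN visual similarity: {sim.item():.3f}")
```

In the same spirit, the subjective measure would simply replace this score with averaged human similarity ratings for the same image pair.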
Affiliation(s)
- Emma Delhaye
- GIGA Research, CRC Human Imaging, University of Liège, Liège, Belgium.
- PsyNCog Research Unit, Faculty of Psychology, University of Liège, Liège, Belgium.
- CICPSI, Faculty of Psychology, University of Lisbon, Lisbon, Portugal.
- Gabriel Besson
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Mohamed Ali Bahri
- GIGA Research, CRC Human Imaging, University of Liège, Liège, Belgium
- Christine Bastin
- GIGA Research, CRC Human Imaging, University of Liège, Liège, Belgium
- PsyNCog Research Unit, Faculty of Psychology, University of Liège, Liège, Belgium
2
Choi AJ, Hefley BS, Strobel HA, Moss SM, Hoying JB, Nicholas SE, Moshayedi S, Kim J, Karamichos D. Fabrication of a 3D Corneal Model Using Collagen Bioink and Human Corneal Stromal Cells. J Funct Biomater 2025; 16:118. PMID: 40278226. PMCID: PMC12028034. DOI: 10.3390/jfb16040118.
Abstract
Corneal transplantation remains a critical treatment option for individuals with corneal disorders, but it faces challenges such as rejection, high associated medical costs, and donor scarcity. A promising alternative for corneal replacement involves fabricating artificial cornea from a patient's own cells. Our study aimed to leverage bioprinting to develop a corneal model using human corneal stromal cells embedded in a collagen-based bioink. We generated both cellular and acellular collagen I (COL I) constructs. Cellular constructs were cultured for up to 4 weeks, and gene expression analysis was performed to assess extracellular matrix (ECM) remodeling and fibrotic markers. Our results demonstrated a significant decrease in the expression of COL I, collagen III (COL III), vimentin (VIM), and vinculin (VCL), indicating a dynamic remodeling process towards a more physiologically relevant corneal ECM. Overall, our study provides a foundational framework for developing customizable, corneal replacements using bioprinting technology. Further research is necessary to optimize the bioink composition and evaluate the functional and biomechanical properties of these bioengineered corneas.
Affiliation(s)
- Alexander J. Choi
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Brenna S. Hefley
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Hannah A. Strobel
- Advanced Solutions Life Sciences, 500 N Commercial St., Manchester, NH 03101, USA; (H.A.S.); (S.M.M.); (J.B.H.)
- Sarah M. Moss
- Advanced Solutions Life Sciences, 500 N Commercial St., Manchester, NH 03101, USA; (H.A.S.); (S.M.M.); (J.B.H.)
- James B. Hoying
- Advanced Solutions Life Sciences, 500 N Commercial St., Manchester, NH 03101, USA; (H.A.S.); (S.M.M.); (J.B.H.)
- Sarah E. Nicholas
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Shadi Moshayedi
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Jayoung Kim
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Dimitrios Karamichos
- North Texas Eye Research Institute, University of North Texas Health Science Center, 3430 Camp Bowie Blvd, Fort Worth, TX 76107, USA; (A.J.C.); (B.S.H.); (S.E.N.); (S.M.); (J.K.)
- Department of Pharmaceutical Sciences, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
- Department of Pharmacology and Neuroscience, University of North Texas Health Science Center, 3500 Camp Bowie Blvd, Fort Worth, TX 76107, USA
3
Badwal MW, Bergmann J, Roth JHR, Doeller CF, Hebart MN. The Scope and Limits of Fine-Grained Image and Category Information in the Ventral Visual Pathway. J Neurosci 2025; 45:e0936242024. PMID: 39505406. PMCID: PMC11735656. DOI: 10.1523/jneurosci.0936-24.2024.
Abstract
Humans can easily abstract incoming visual information into discrete semantic categories. Previous research employing functional MRI (fMRI) in humans has identified cortical organizing principles that allow not only for coarse-scale distinctions such as animate versus inanimate objects but also more fine-grained distinctions at the level of individual objects. This suggests that fMRI carries rather fine-grained information about individual objects. However, most previous work investigating fine-grained category representations either additionally included coarse-scale category comparisons of objects, which confounds fine-grained and coarse-scale distinctions, or only used a single exemplar of each object, which confounds visual and semantic information. To address these challenges, here we used multisession human fMRI (female and male) paired with a broad yet homogenous stimulus class of 48 terrestrial mammals, with two exemplars per mammal. Multivariate decoding and representational similarity analysis revealed high image-specific reliability in low- and high-level visual regions, indicating stable representational patterns at the image level. In contrast, analyses across exemplars of the same animal yielded only small effects in the lateral occipital complex (LOC), indicating rather subtle category effects in this region. Variance partitioning with a deep neural network and shape model showed that across-exemplar effects in the early visual cortex were largely explained by low-level visual appearance, while representations in LOC appeared to also contain higher category-specific information. These results suggest that representations typically measured with fMRI are dominated by image-specific visual or coarse-grained category information but indicate that commonly employed fMRI protocols may reveal subtle yet reliable distinctions between individual objects.
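The core computation behind the representational similarity analyses referred to here can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from neural patterns, build one from a model (e.g., DNN features), and correlate their upper triangles. All data and dimensions below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# RSA sketch with simulated data: 48 stimuli x 200 voxels and 10 model
# features are arbitrary placeholder dimensions.
rng = np.random.default_rng(0)
neural_patterns = rng.standard_normal((48, 200))   # one pattern per stimulus
model_features = rng.standard_normal((48, 10))     # e.g., DNN or shape-model features

# Condensed RDMs: correlation distance between every pair of stimuli.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Second-order (Spearman) correlation between the two RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA correlation: rho = {rho:.3f} (p = {p:.3f})")
```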
Affiliation(s)
- Markus W Badwal
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Vision & Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Department of Neurosurgery, University of Leipzig Medical Center, Leipzig 04103, Germany
- Johanna Bergmann
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Johannes H R Roth
- Vision & Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Department of Medicine, Justus Liebig University, Giessen 35390, Germany
- Christian F Doeller
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim 7030, Norway
- Martin N Hebart
- Vision & Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Department of Medicine, Justus Liebig University, Giessen 35390, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, Marburg 35032, Germany
4
Greene MR, Rohan AM. The brain prioritizes the basic level of object category abstraction. Sci Rep 2025; 15:31. PMID: 39747114. PMCID: PMC11695711. DOI: 10.1038/s41598-024-80546-4.
Abstract
The same object can be described at multiple levels of abstraction ("parka", "coat", "clothing"), yet human observers consistently name objects at a mid-level of specificity known as the basic level. Little is known about the temporal dynamics involved in retrieving neural representations that prioritize the basic level, nor how these dynamics change with evolving task demands. In this study, observers viewed 1080 objects arranged in a three-tier category taxonomy while 64-channel EEG was recorded. Observers performed a categorical one-back task in different recording sessions on the basic or subordinate levels. We used time-resolved multiple regression to assess the utility of superordinate-, basic-, and subordinate-level categories across the scalp. We found robust use of basic-level category information starting at about 50 ms after stimulus onset and moving from posterior electrodes (149 ms) through lateral (261 ms) to anterior sites (332 ms). Task differences were not evident in the first 200 ms of processing but were observed between 200-300 ms after stimulus presentation. Together, this work demonstrates that the object category representations prioritize the basic level and do so relatively early, congruent with results that show that basic-level categorization is an automatic and obligatory process.
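The time-resolved multiple regression logic can be sketched as one regression per time sample, predicting the EEG signal across trials from regressors coding superordinate-, basic-, and subordinate-level category structure, and tracking the variance explained over time. All data and dimensions below are simulated placeholders; the actual study fit such models across the whole electrode array.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Time-resolved regression sketch: one model per time sample for a single
# electrode, with simulated data (1080 trials, 250 samples, 3 regressors).
rng = np.random.default_rng(1)
n_trials, n_times = 1080, 250
X = rng.standard_normal((n_trials, 3))          # superordinate, basic, subordinate codes
eeg = rng.standard_normal((n_trials, n_times))  # single-electrode amplitude per trial

r_squared = np.empty(n_times)
for t in range(n_times):
    fit = LinearRegression().fit(X, eeg[:, t])
    r_squared[t] = fit.score(X, eeg[:, t])      # variance explained at this sample

peak = int(np.argmax(r_squared))
print(f"Category information peaks at sample {peak} (R^2 = {r_squared[peak]:.3f})")
```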
Affiliation(s)
- Michelle R Greene
- Bates College Program in Neuroscience, Bates College, Lewiston, ME, USA.
- Department of Psychology, Barnard College, Columbia University, 3009 Broadway, New York, NY 10027, USA.
- Alyssa Magill Rohan
- Bates College Program in Neuroscience, Bates College, Lewiston, ME, USA
- Boston Children's Hospital, Boston, USA
5
Torres RE, Duprey MS, Campbell KL, Emrich SM. Not all objects are created equal: The object benefit in visual working memory is supported by greater recollection-like memory, but only for memorable objects. Mem Cognit 2024. PMID: 39467965. DOI: 10.3758/s13421-024-01655-z.
Abstract
Visual working memory is thought to have a fixed capacity limit. However, recent evidence suggests that a greater number of real-world objects than simple features (i.e., colors) can be maintained, an effect termed the object benefit. Here, we examined whether this object benefit in visual working memory is due to qualitatively different memory processes employed for meaningful stimuli compared to simple features. In online samples of young adults, real-world objects were better remembered than colors, had higher measures of recollection, and showed a greater proportion of high-confidence responses (Exp. 1). Objects were also remembered better than their scrambled counterparts (Exp. 2), suggesting that this benefit is related to semantic information, rather than visual complexity. Critically, the specific objects that were likely to be remembered with high confidence were highly correlated across experiments, consistent with the idea that some objects are more memorable than others. Visual working memory performance for the least-memorable objects was worse than that of colors and scrambled objects. These findings suggest that real-world objects give rise to recollective, or at least high-confidence, responses at retrieval that may depend on activation of semantic features, but that this effect is limited to certain objects.
Affiliation(s)
- Rosa E Torres
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Mallory S Duprey
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Karen L Campbell
- Department of Psychology, Brock University, St. Catharines, ON, Canada
- Stephen M Emrich
- Department of Psychology, Brock University, St. Catharines, ON, Canada.
6
Zhao M, Xin Y, Deng H, Zuo Z, Wang X, Bi Y, Liu N. Object color knowledge representation occurs in the macaque brain despite the absence of a developed language system. PLoS Biol 2024; 22:e3002863. PMID: 39466847. PMCID: PMC11542842. DOI: 10.1371/journal.pbio.3002863.
Abstract
Animals guide their behaviors through internal representations of the world in the brain. We aimed to understand how the macaque brain stores such general world knowledge, focusing on object color knowledge. Three functional magnetic resonance imaging (fMRI) experiments were conducted in macaque monkeys: viewing chromatic and achromatic gratings, viewing grayscale images of their familiar fruits and vegetables (e.g., grayscale strawberry), and viewing true- and false-colored objects (e.g., red strawberry and green strawberry). We observed robust object knowledge representations in the color patches, especially the one located around TEO: activity patterns could classify grayscale pictures of objects based on their memory color, and response patterns in these regions could translate between chromatic grating viewing and grayscale object viewing (e.g., red grating and grayscale images of strawberry), such that classifiers trained on chromatic grating viewing could successfully classify grayscale object images according to their memory colors. Our results showed direct positive evidence of object color memory in macaque monkeys. These results point to perceptually grounded knowledge representation as a conservative memory mechanism and open a new avenue for studying this particular (semantic) memory representation with macaque models.
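The cross-decoding step described here (classifiers trained on chromatic-grating responses and tested on grayscale-object responses grouped by memory color) can be sketched with a linear classifier; the data below are simulated and the dimensions are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Cross-condition decoding sketch with simulated multivoxel patterns.
# Labels code color category (0 = red, 1 = green); ~0.5 accuracy is expected
# for random data like this.
rng = np.random.default_rng(2)
n_train, n_test, n_vox = 80, 40, 150
X_gratings = rng.standard_normal((n_train, n_vox))   # grating-viewing patterns
y_gratings = rng.integers(0, 2, n_train)             # grating color
X_objects = rng.standard_normal((n_test, n_vox))     # grayscale-object patterns
y_objects = rng.integers(0, 2, n_test)               # object memory color

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_gratings, y_gratings)                      # train on one condition
print(f"Cross-decoding accuracy: {clf.score(X_objects, y_objects):.2f}")  # test on the other
```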
Affiliation(s)
- Minghui Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Yumeng Xin
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Haoyun Deng
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Ning Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
7
Lifanov-Carr J, Griffiths BJ, Linde-Domingo J, Ferreira CS, Wilson M, Mayhew SD, Charest I, Wimber M. Reconstructing Spatiotemporal Trajectories of Visual Object Memories in the Human Brain. eNeuro 2024; 11:ENEURO.0091-24.2024. PMID: 39242212. PMCID: PMC11439564. DOI: 10.1523/eneuro.0091-24.2024.
Abstract
How the human brain reconstructs, step-by-step, the core elements of past experiences is still unclear. Here, we map the spatiotemporal trajectories along which visual object memories are reconstructed during associative recall. Specifically, we inquire whether retrieval reinstates feature representations in a copy-like but reversed direction with respect to the initial perceptual experience, or alternatively, whether this reconstruction involves format transformations and regions beyond initial perception. Participants from two cohorts studied new associations between verbs and randomly paired object images, and subsequently recalled the objects when presented with the corresponding verb cue. We first analyze multivariate fMRI patterns to map where in the brain high- and low-level object features can be decoded during perception and retrieval, showing that retrieval is dominated by conceptual features, represented in comparatively late visual and parietal areas. A separately acquired EEG dataset is then used to track the temporal evolution of the reactivated patterns using similarity-based EEG-fMRI fusion. This fusion suggests that memory reconstruction proceeds from anterior frontotemporal to posterior occipital and parietal regions, in line with a conceptual-to-perceptual gradient but only partly following the same trajectories as during perception. Specifically, a linear regression statistically confirms that the sequential activation of ventral visual stream regions is reversed between image perception and retrieval. The fusion analysis also suggests an information relay to frontoparietal areas late during retrieval. Together, the results shed light on the temporal dynamics of memory recall and the transformations that the information undergoes between the initial experience and its later reconstruction from memory.
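Similarity-based EEG-fMRI fusion, as used here, boils down to correlating a time-resolved EEG RDM with the RDM of each fMRI region, which yields one fusion time course per region. The sketch below uses simulated data and two hypothetical regions of interest; it is not the authors' analysis code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# EEG-fMRI fusion sketch with simulated data; stimuli, sensors, voxels,
# time samples, and the two ROI names are placeholders.
rng = np.random.default_rng(3)
n_stim, n_sensors, n_times, n_vox = 60, 64, 100, 300
eeg = rng.standard_normal((n_stim, n_sensors, n_times))
rois = {"EVC": rng.standard_normal((n_stim, n_vox)),
        "ATL": rng.standard_normal((n_stim, n_vox))}

fmri_rdms = {name: pdist(pattern, metric="correlation") for name, pattern in rois.items()}
fusion = {name: np.empty(n_times) for name in rois}

for t in range(n_times):
    eeg_rdm = pdist(eeg[:, :, t], metric="correlation")  # RDM over sensor patterns at time t
    for name, fmri_rdm in fmri_rdms.items():
        fusion[name][t], _ = spearmanr(eeg_rdm, fmri_rdm)

for name, series in fusion.items():
    print(f"{name}: peak EEG-fMRI correspondence at sample {int(np.argmax(series))}")
```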
Affiliation(s)
- Julia Lifanov-Carr
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Benjamin J Griffiths
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Juan Linde-Domingo
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Department of Experimental Psychology, Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, 18011 Granada, Spain
- Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Catarina S Ferreira
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Martin Wilson
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Stephen D Mayhew
- Institute of Health and Neurodevelopment (IHN), School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Quebec H2V 2S9, Canada
- Maria Wimber
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- School of Psychology & Neuroscience and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, Glasgow G12 8QB, United Kingdom
8
Dirani J, Pylkkänen L. MEG Evidence That Modality-Independent Conceptual Representations Contain Semantic and Visual Features. J Neurosci 2024; 44:e0326242024. PMID: 38806251. PMCID: PMC11223456. DOI: 10.1523/jneurosci.0326-24.2024.
Abstract
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate both with semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
Affiliation(s)
- Julien Dirani
- Departments of Psychology, New York University, New York, New York 10003
- Liina Pylkkänen
- Departments of Psychology, New York University, New York, New York 10003
- Linguistics, New York University, New York, New York 10003
- NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
9
Matoba K, Matsumoto R, Shimotake A, Nakae T, Imamura H, Togo M, Yamao Y, Usami K, Kikuchi T, Yoshida K, Matsuhashi M, Kunieda T, Miyamoto S, Takahashi R, Ikeda A. Basal temporal language area revisited in Japanese language with a language function density map. Cereb Cortex 2024; 34:bhae218. PMID: 38858838. DOI: 10.1093/cercor/bhae218.
Abstract
We revisited the anatomo-functional characteristics of the basal temporal language area (BTLA), first described by Lüders et al. (1986), using electrical cortical stimulation (ECS) in the context of Japanese language and semantic networks. We recruited 11 patients with focal epilepsy who underwent chronic subdural electrode implantation and ECS mapping with multiple language tasks for presurgical evaluation. A semiquantitative language function density map delineated the anatomo-functional characteristics of the BTLA (66 electrodes, mean 3.8 cm from the temporal tip). The ECS-induced impairment probability was higher in the following tasks, listed in a descending order: spoken-word picture matching, picture naming, Kanji word reading, paragraph reading, spoken-verbal command, and Kana word reading. The anterior fusiform gyrus (FG), adjacent anterior inferior temporal gyrus (ITG), and the anterior end where FG and ITG fuse, were characterized by stimulation-induced impairment during visual and auditory tasks requiring verbal output or not, whereas the middle FG was characterized mainly by visual input. The parahippocampal gyrus was the least impaired of the three gyri in the basal temporal area. We propose that the BTLA has a functional gradient, with the anterior part involved in amodal semantic processing and the posterior part, especially the middle FG in unimodal semantic processing.
Affiliation(s)
- Kento Matoba
- Division of Neurology, Kobe University Graduate School of Medicine, 7-5-2, Kusunoki-cho, Chuo-ku, Kobe, Hyogo 650-0017, Japan
- Department of Neurology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Riki Matsumoto
- Division of Neurology, Kobe University Graduate School of Medicine, 7-5-2, Kusunoki-cho, Chuo-ku, Kobe, Hyogo 650-0017, Japan
- Department of Neurology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Akihiro Shimotake
- Department of Neurology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Takuro Nakae
- Department of Neurosurgery, Shiga General Hospital, 5-4-30 Moriyama, Moriyama, Shiga 524-0022, Japan
- Hisaji Imamura
- Department of Neurology, Fukui Red Cross Hospital, 2-4-1, Tsukimi, Fukui, 918-8011, Japan
- Masaya Togo
- Division of Neurology, Kobe University Graduate School of Medicine, 7-5-2, Kusunoki-cho, Chuo-ku, Kobe, Hyogo 650-0017, Japan
- Yukihiro Yamao
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Kiyohide Usami
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Takayuki Kikuchi
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Kazumichi Yoshida
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Masao Matsuhashi
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Human Brain Research Center, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Takeharu Kunieda
- Department of Neurosurgery, Ehime University Graduate School of Medicine, 454 Shitsukawa, Toon, Ehime, Japan
- Susumu Miyamoto
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Ryosuke Takahashi
- Department of Neurology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Akio Ikeda
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
10
Marques dos Santos JP, Marques dos Santos JD. Explainable artificial intelligence (xAI) in neuromarketing/consumer neuroscience: an fMRI study on brand perception. Front Hum Neurosci 2024; 18:1305164. PMID: 38584851. PMCID: PMC10995351. DOI: 10.3389/fnhum.2024.1305164.
Abstract
Introduction: The research in consumer neuroscience has identified computational methods, particularly artificial intelligence (AI) and machine learning, as a significant frontier for advancement. Previously, we utilized functional magnetic resonance imaging (fMRI) and artificial neural networks (ANNs) to model brain processes related to brand preferences in a paradigm free of motor actions. In the current study, we revisit these data, introducing recent advances in explainable artificial intelligence (xAI) to gain insights into this domain. By integrating fMRI data analysis, machine learning, and xAI, our study aims to identify functional brain networks that support brand perception and, ultimately, brain networks that distinguish between preferred and indifferent brands, focusing on the early processing stages.
Methods: We applied independent component analysis (ICA) to overcome the high dimensionality of the fMRI data, which raises hurdles in AI applications, and extracted pertinent features from the returned independent components (ICs). An ANN was then trained on these data, followed by pruning and retraining. We then applied explanation techniques, based on path-weights and Shapley values, to make the network more transparent, explainable, and interpretable, and to obtain insights into the underlying brain processes.
Results: The fully connected ANN model obtained an accuracy of 54.6%, which dropped to 50.4% after pruning; retraining, however, allowed the pruned network to surpass the fully connected one, achieving an accuracy of 55.9%. The path-weight and Shapley-based analyses indicate that brand perception follows the expected initial participation of the primary visual system, and that other brain areas, such as the cuneal and lateral occipital cortices, participate in early processing and discriminate between preferred and indifferent brands.
Discussion: The most important finding is that a split between the processing of preferred and indifferent brands may occur during early processing stages, still within the visual system. However, we found no evidence of a "decision pipeline" that would determine whether a brand is preferred or indifferent; rather, the results suggest a "tagging"-like process in parallel flows in the extrastriate cortex. Analysis of the model's hidden layer showed that network training dynamics aggregate specific processes within hidden nodes, with some nodes contributing to both global brand appraisal and specific brand category classification, shedding light on the neural substrates of decision-making in response to brand stimuli.
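The overall pipeline (ICA to reduce the fMRI data, an ANN trained on component-derived features, then an explanation step to see which features drive the classification) can be sketched with scikit-learn. Permutation importance is used below as a simple stand-in for the path-weight and Shapley analyses described in the paper, and all data are simulated.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Sketch of an ICA -> ANN -> explanation pipeline on simulated data.
# Sizes are placeholders; permutation importance merely stands in for the
# path-weight / Shapley explanations used in the study.
rng = np.random.default_rng(4)
n_samples, n_voxels, n_components = 120, 500, 20
fmri = rng.standard_normal((n_samples, n_voxels))
labels = rng.integers(0, 2, n_samples)              # preferred vs indifferent brand

ica = FastICA(n_components=n_components, random_state=0)
features = ica.fit_transform(fmri)                  # per-sample component loadings

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(features, labels)

imp = permutation_importance(ann, features, labels, n_repeats=20, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]
print("Most informative components:", top)
```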
Affiliation(s)
- José Paulo Marques dos Santos
- Department of Business Administration, University of Maia, Maia, Portugal
- Unit of Experimental Biology, Faculty of Medicine, University of Porto, Porto, Portugal
- LIACC – Artificial Intelligence and Computer Science Laboratory, University of Porto, Porto, Portugal
- NECE-UBI, Research Centre for Business Sciences, University of Beira Interior, Covilhã, Portugal
- José Diogo Marques dos Santos
- Faculty of Engineering, University of Porto, Porto, Portugal
- Abel Salazar Biomedical Sciences Institute, University of Porto, Porto, Portugal
11
Shoham A, Grosbard ID, Patashnik O, Cohen-Or D, Yovel G. Using deep neural networks to disentangle visual and semantic information in human perception and memory. Nat Hum Behav 2024. PMID: 38332339. DOI: 10.1038/s41562-024-01816-9.
Abstract
Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks that are trained either on images or on text or by pairing images and text enable us now to disentangle human mental representations into their visual, visual-semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual-semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
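The disentangling logic can be sketched as a regression of human pairwise (dis)similarities on predictors derived from an image-trained and a text-trained network; the fitted weights (or a variance-partitioning step) then index the visual and semantic contributions. The embeddings below are random placeholders standing in for actual DNN outputs, not the networks used in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.linear_model import LinearRegression

# Sketch: estimate visual vs semantic contributions to human similarity
# judgements. Embeddings are random placeholders, not real DNN features.
rng = np.random.default_rng(7)
n_items = 40
visual_emb = rng.standard_normal((n_items, 128))     # stand-in for image-trained DNN
semantic_emb = rng.standard_normal((n_items, 128))   # stand-in for text-trained DNN

visual_rdm = pdist(visual_emb, metric="cosine")
semantic_rdm = pdist(semantic_emb, metric="cosine")

# Simulated "human" dissimilarities: a weighted mix of the two sources plus noise.
human_rdm = 0.7 * visual_rdm + 0.3 * semantic_rdm + 0.1 * rng.standard_normal(visual_rdm.size)

X = np.column_stack([visual_rdm, semantic_rdm])
fit = LinearRegression().fit(X, human_rdm)
print("Estimated visual and semantic weights:", np.round(fit.coef_, 2))  # ~[0.7, 0.3]
```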
Affiliation(s)
- Adva Shoham
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel.
- Idan Daniel Grosbard
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- The Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Or Patashnik
- The Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Daniel Cohen-Or
- The Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Galit Yovel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel.
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
12
Zhang Y, Wu W, Mirman D, Hoffman P. Representation of event and object concepts in ventral anterior temporal lobe and angular gyrus. Cereb Cortex 2024; 34:bhad519. PMID: 38185997. PMCID: PMC10839851. DOI: 10.1093/cercor/bhad519.
Abstract
Semantic knowledge includes understanding of objects and their features and also understanding of the characteristics of events. The hub-and-spoke theory holds that these conceptual representations rely on multiple information sources that are integrated in a central hub in the ventral anterior temporal lobes. The dual-hub theory expands this framework with the claim that the ventral anterior temporal lobe hub is specialized for object representation, while a second hub in angular gyrus is specialized for event representation. To test these ideas, we used representational similarity analysis, univariate and psychophysiological interaction analyses of fMRI data collected while participants processed object and event concepts (e.g. "an apple," "a wedding") presented as images and written words. Representational similarity analysis showed that angular gyrus encoded event concept similarity more than object similarity, although the left angular gyrus also encoded object similarity. Bilateral ventral anterior temporal lobes encoded both object and event concept structure, and left ventral anterior temporal lobe exhibited stronger coding for events. Psychophysiological interaction analysis revealed greater connectivity between left ventral anterior temporal lobe and right pMTG, and between right angular gyrus and bilateral ITG and middle occipital gyrus, for event concepts compared to object concepts. These findings support the specialization of angular gyrus for event semantics, though with some involvement in object coding, but do not support ventral anterior temporal lobe specialization for object concepts.
Affiliation(s)
- Yueyang Zhang
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
- Wei Wu
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
- Daniel Mirman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
- Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, United Kingdom
13
Read J, Delhaye E, Sougné J. Computational models can distinguish the contribution from different mechanisms to familiarity recognition. Hippocampus 2024; 34:36-50. PMID: 37985213. DOI: 10.1002/hipo.23588.
Abstract
Familiarity is the strange feeling of knowing that something has already been seen in our past. Over the past decades, several attempts have been made to model familiarity using artificial neural networks. Recently, two learning algorithms successfully reproduced the functioning of the perirhinal cortex, a key structure involved in familiarity: Hebbian and anti-Hebbian learning. However, the performance of these two learning rules differs markedly, raising the question of their complementarity. In this work, we designed two distinct computational models that combine deep learning with either a Hebbian or an anti-Hebbian learning rule to reproduce familiarity for natural images: the Hebbian model and the anti-Hebbian model, respectively. We compared the performance of both models across different simulations to highlight the inner functioning of the two learning rules. We showed that the anti-Hebbian model fits human behavioral data, whereas the Hebbian model fails to fit the data under large training set sizes. In addition, we observed that only the Hebbian model is highly sensitive to homogeneity between images. Taken together, we interpreted these results in light of the distinction between absolute and relative familiarity. With our framework, we propose a novel way to distinguish the contribution of these familiarity mechanisms to the overall feeling of familiarity. By viewing them as complementary, our two models allow us to make new testable predictions that could shed light on the familiarity phenomenon.
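The two learning rules can be contrasted in a toy simulation: a Hebbian network strengthens connections for studied patterns, so familiar inputs yield a high quadratic read-out, whereas an anti-Hebbian network weakens them, so familiarity shows up as a reduced read-out. The sketch below uses random binary patterns and illustrates only the update rules, not the authors' deep-learning-based models.

```python
import numpy as np

# Toy contrast of Hebbian vs anti-Hebbian familiarity learning on random
# binary patterns (sizes are arbitrary). The paper's models pair such rules
# with deep-network features of natural images.
rng = np.random.default_rng(5)
n_units, n_studied, lr = 200, 30, 0.01
studied = rng.choice([-1.0, 1.0], size=(n_studied, n_units))
novel = rng.choice([-1.0, 1.0], size=(n_studied, n_units))

W_hebb = np.zeros((n_units, n_units))
W_anti = np.zeros((n_units, n_units))
for x in studied:
    W_hebb += lr * np.outer(x, x)   # Hebbian: strengthen co-active units
    W_anti -= lr * np.outer(x, x)   # anti-Hebbian: weaken them instead

def familiarity(W, x):
    return float(x @ W @ x)         # quadratic "energy" read-out

hebb_gap = np.mean([familiarity(W_hebb, x) for x in studied]) - \
           np.mean([familiarity(W_hebb, x) for x in novel])
anti_gap = np.mean([familiarity(W_anti, x) for x in studied]) - \
           np.mean([familiarity(W_anti, x) for x in novel])
print(f"Old-new gap, Hebbian: {hebb_gap:.1f}; anti-Hebbian: {anti_gap:.1f}")
```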
Affiliation(s)
- John Read
- GIGA Centre de Recherche du Cyclotron In Vivo Imaging, University of Liège, Liège, Belgium
- Emma Delhaye
- GIGA Centre de Recherche du Cyclotron In Vivo Imaging, University of Liège, Liège, Belgium
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Liège, Belgium
- Jacques Sougné
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Liège, Belgium
- UDI-FPLSE, University of Liège, Liège, Belgium
14
von Seth J, Nicholls VI, Tyler LK, Clarke A. Recurrent connectivity supports higher-level visual and semantic object representations in the brain. Commun Biol 2023; 6:1207. PMID: 38012301. PMCID: PMC10682037. DOI: 10.1038/s42003-023-05565-9.
Abstract
Visual object recognition has been traditionally conceptualised as a predominantly feedforward process through the ventral visual pathway. While feedforward artificial neural networks (ANNs) can achieve human-level classification on some image-labelling tasks, it's unclear whether computational models of vision alone can accurately capture the evolving spatiotemporal neural dynamics. Here, we probe these dynamics using a combination of representational similarity and connectivity analyses of fMRI and MEG data recorded during the recognition of familiar, unambiguous objects. Modelling the visual and semantic properties of our stimuli using an artificial neural network as well as a semantic feature model, we find that unique aspects of the neural architecture and connectivity dynamics relate to visual and semantic object properties. Critically, we show that recurrent processing between the anterior and posterior ventral temporal cortex relates to higher-level visual properties prior to semantic object properties, in addition to semantic-related feedback from the frontal lobe to the ventral temporal lobe between 250 and 500 ms after stimulus onset. These results demonstrate the distinct contributions made by semantic object properties in explaining neural activity and connectivity, highlighting it as a core part of object recognition not fully accounted for by current biologically inspired neural networks.
Affiliation(s)
- Jacqueline von Seth
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Lorraine K Tyler
- Department of Psychology, University of Cambridge, Cambridge, UK
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge and MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK.
15
Naspi L, Stensholt C, Karlsson AE, Monge ZA, Cabeza R. Effects of Aging on Successful Object Encoding: Enhanced Semantic Representations Compensate for Impaired Visual Representations. J Neurosci 2023; 43:7337-7350. PMID: 37673674. PMCID: PMC10621770. DOI: 10.1523/jneurosci.2265-22.2023.
Abstract
Although episodic memory and visual processing decline substantially with healthy aging, semantic knowledge is generally spared. There is evidence that older adults' spared semantic knowledge can support episodic memory. Here, we used functional magnetic resonance imaging (fMRI) combined with representational similarity analyses (RSAs) to examine how novel visual and preexisting semantic representations at encoding predict subjective memory vividness at retrieval. Eighteen young and seventeen older adults (female and male participants) encoded images of objects during fMRI scanning and recalled these images while rating the vividness of their memories. After scanning, participants discriminated between studied images and similar lures. RSAs based on a deep convolutional neural network and normative concept feature data were used to link patterns of neural activity during encoding to visual and semantic representations. Relative to young adults, the specificity of activation patterns for visual features was reduced in older adults, consistent with dedifferentiation. However, the specificity of activation patterns for semantic features was enhanced in older adults, consistent with hyperdifferentiation. Despite dedifferentiation, visual representations in early visual cortex (EVC) predicted high memory vividness in both age groups. In contrast, semantic representations in lingual gyrus (LG) and fusiform gyrus (FG) were associated with high memory vividness only in the older adults. Intriguingly, the data suggest that older adults with lower specificity of visual representations in combination with higher specificity of semantic representations tended to rate their memories as more vivid. Our findings suggest that memory vividness in aging relies more on semantic representations over anterior regions, potentially compensating for age-related dedifferentiation of visual information in posterior regions.
Significance Statement: Normal aging is associated with impaired memory for events, while semantic knowledge might even improve. We investigated the effects of aging on the specificity of visual and semantic information in the brain when viewing common objects, and how this information enables subsequent memory vividness for these objects. Using functional magnetic resonance imaging (fMRI) combined with modeling of the stimuli, we found that visual information was represented with less specificity in older than in young adults while still supporting memory vividness. In contrast, semantic information supported memory vividness only in older adults, especially in those individuals with the lowest specificity of visual information. These findings provide evidence for a spared semantic memory system increasingly recruited to compensate for degraded visual representations in older age.
Affiliation(s)
- Loris Naspi
- Department of Psychology, Humboldt University of Berlin, Berlin 10117, Germany
- Charlotte Stensholt
- Department of Psychology, Humboldt University of Berlin, Berlin 10117, Germany
- Anna E Karlsson
- Department of Psychology, Humboldt University of Berlin, Berlin 10117, Germany
- Zachary A Monge
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708
- Roberto Cabeza
- Department of Psychology, Humboldt University of Berlin, Berlin 10117, Germany
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708
16
Abstract
Perception and memory are traditionally thought of as separate cognitive functions, supported by distinct brain regions. The canonical perspective is that perceptual processing of visual information is supported by the ventral visual stream, whereas long-term declarative memory is supported by the medial temporal lobe. However, this modular framework cannot account for the increasingly large body of evidence that reveals a role for early visual areas in long-term recognition memory and a role for medial temporal lobe structures in high-level perceptual processing. In this article, we review relevant research conducted in humans, nonhuman primates, and rodents. We conclude that the evidence is largely inconsistent with theoretical proposals that draw sharp functional boundaries between perceptual and memory systems in the brain. Instead, the weight of the empirical findings is best captured by a representational-hierarchical model that emphasizes differences in content, rather than in cognitive processes within the ventral visual stream and medial temporal lobe.
Affiliation(s)
- Chris B Martin
- Department of Psychology, Florida State University, Tallahassee, Florida, USA
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
17
Dirani J, Pylkkänen L. The time course of cross-modal representations of conceptual categories. Neuroimage 2023; 277:120254. PMID: 37391047. DOI: 10.1016/j.neuroimage.2023.120254.
Abstract
To what extent does language production activate cross-modal conceptual representations? In picture naming, we view specific exemplars of concepts and then name them with a label, like "dog". In overt reading, the written word does not express a specific exemplar. Here we used a decoding approach with magnetoencephalography (MEG) to address whether picture naming and overt word reading involve shared representations of superordinate categories (e.g., animal). This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that does not require explicit categorization judgment and that controls for word form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data of one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of cross-modal semantic category representations for both pictures and words later than their respective modality-specific representations. Cross-modal representations were activated at 150 ms and lasted until around 450 ms. The time course of lexical activation was also assessed revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category in pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of cross-modal semantic categories in picture naming and word reading. These results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning.
Affiliation(s)
- Julien Dirani
- Department of Psychology, New York University, New York, NY, 10003, USA.
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY, 10003, USA; Department of Linguistics, New York University, New York, NY, 10003, USA; NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi, 129188, UAE
18
Bastin C, Delhaye E. Targeting the function of the transentorhinal cortex to identify early cognitive markers of Alzheimer's disease. Cogn Affect Behav Neurosci 2023. PMID: 37024735. DOI: 10.3758/s13415-023-01093-5.
Abstract
Initial neuropathology of early Alzheimer's disease accumulates in the transentorhinal cortex. We review empirical data suggesting that tasks assessing cognitive functions supported by the transenthorinal cortex are impaired as early as the preclinical stages of Alzheimer's disease. These tasks span across various domains, including episodic memory, semantic memory, language, and perception. We propose that all tasks sensitive to Alzheimer-related transentorhinal neuropathology commonly rely on representations of entities supporting the processing and discrimination of items having perceptually and conceptually overlapping features. In the future, we suggest a screening tool that is sensitive and specific to very early Alzheimer's disease to probe memory and perceptual discrimination of highly similar entities.
Affiliation(s)
- Christine Bastin
- GIGA-Cyclotron Research Centre-In Vivo Imaging, University of Liège, Allée du 6 Août, B30, 4000, Liège, Belgium.
- Emma Delhaye
- GIGA-Cyclotron Research Centre-In Vivo Imaging, University of Liège, Allée du 6 Août, B30, 4000, Liège, Belgium
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
19
Semantic cognition in healthy ageing: Neural signatures of representation and control mechanisms in naming typical and atypical objects. Neuropsychologia 2023; 184:108545. PMID: 36934809. DOI: 10.1016/j.neuropsychologia.2023.108545.
Abstract
Effective use of conceptual knowledge engages semantic representation and control processes to access information in a goal-driven manner. Neuropsychological findings from patients presenting either degraded knowledge (e.g., semantic dementia) or disrupted control (e.g., semantic aphasia) converge with neuroimaging evidence from young adults, and delineate the neural segregation of representation and control mechanisms. However, there is still scarce research on the neurofunctional underpinnings of such mechanisms in healthy ageing. To address this, we conducted an fMRI study in which young and older adults performed a covert naming task of typical and atypical objects. Three main age-related differences were found. As shown by age group and typicality interactions, older adults exhibited overactivation during naming of atypical (e.g., avocado) relative to typical concepts in brain regions associated with semantic representation, including anterior and medial portions of the left temporal lobe (respectively, ATL and MTG). This provides evidence for the reorganization of neural activity in these brain regions contingent on the enrichment of semantic repositories in older ages. The medial orbitofrontal gyrus was also overactivated, indicating that the processing of atypical concepts (relative to typical items) taxes additional control resources in the elderly. Increased activation in the inferior frontal gyrus (IFG) was observed in naming typical items (relative to atypical ones), but only for young adults. This suggests that naming typical items (e.g., strawberry) places greater demands on control processes at younger ages, presumably due to the semantic competition set by other items that share multiple features with the target (e.g., raspberry, blackberry, cherry). Together, these results reveal the dynamic nature of semantic control interacting with conceptual representations as people grow older, by indicating that distinct neural bases uphold semantic performance from young to older ages. These findings may be explained by neural compensation mechanisms coming into play to support neurocognitive changes in healthy ageing.
20
Sulpizio S, Arcara G, Lago S, Marelli M, Amenta S. Very early and late form-to-meaning computations during visual word recognition as revealed by electrophysiology. Cortex 2022; 157:167-193. PMID: 36327746. DOI: 10.1016/j.cortex.2022.07.016.
Abstract
We used a large-scale data-driven approach to investigate the role of word form in accessing semantics. By using distributional semantic methods and taking advantage of an ERP lexical decision mega-study, we investigated the precise time course of semantic access from printed words as driven by orthography-semantics consistency (OSC) and phonology-semantics consistency (PSC). Generalized Additive Models revealed very early and late OSC-by-PSC interactions, visible at 100 and 400 msec, respectively. This pattern suggests that, during visual word recognition: a) meaning is accessed by means of two distinct and interactive paths (the orthography-to-meaning path and the orthography-to-phonology-to-meaning path), which mutually contribute to recognition from the earliest stages; b) the system may exploit a dual mechanism for semantic access, with early and late effects associated with a fast, coarse-grained and a slow, fine-grained semantic analysis, respectively. The results also highlight the high sensitivity of the visual word recognition system to arbitrary form-meaning relations.
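The consistency measures above can be approximated with any vector-space model of word meaning. A minimal Python/numpy sketch follows; the toy vocabulary, the 4-dimensional vectors, and the one-letter-substitution neighbour rule are illustrative assumptions rather than the measure used in the study. It scores a word's orthography-semantics consistency as the mean semantic similarity between that word and its orthographic neighbours.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_neighbour(w1, w2):
        # crude orthographic-neighbour rule: same length, exactly one letter differs
        return len(w1) == len(w2) and sum(c1 != c2 for c1, c2 in zip(w1, w2)) == 1

    # toy semantic vectors (hypothetical; real studies use corpus-derived embeddings)
    vectors = {
        "cat": np.array([0.9, 0.1, 0.0, 0.2]),
        "cap": np.array([0.1, 0.8, 0.3, 0.0]),
        "car": np.array([0.2, 0.1, 0.9, 0.1]),
        "dog": np.array([0.8, 0.2, 0.1, 0.3]),
    }

    def osc(target, vocab):
        # mean semantic similarity between a word and its orthographic neighbours
        neighbours = [w for w in vocab if is_neighbour(target, w)]
        sims = [cosine(vocab[target], vocab[w]) for w in neighbours]
        return np.mean(sims) if sims else np.nan

    print(round(osc("cat", vectors), 3))  # high values = word form is a good cue to meaning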
Collapse
Affiliation(s)
| | | | - Sara Lago
- IRCCS San Camillo Hospital, Venice, Italy; Padova Neuroscience Center, University of Padova, Italy
| | | | | |
Collapse
|
21
|
Geerligs L, Gözükara D, Oetringer D, Campbell KL, van Gerven M, Güçlü U. A partially nested cortical hierarchy of neural states underlies event segmentation in the human brain. eLife 2022; 11:e77430. [PMID: 36111671 PMCID: PMC9531941 DOI: 10.7554/elife.77430] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 09/14/2022] [Indexed: 11/18/2022] Open
Abstract
A fundamental aspect of human experience is that it is segmented into discrete events. This may be underpinned by transitions between distinct neural states. Using an innovative data-driven state segmentation method, we investigate how neural states are organized across the cortical hierarchy and where in the cortex neural state boundaries and perceived event boundaries overlap. Our results show that neural state boundaries are organized in a temporal cortical hierarchy, with short states in primary sensory regions, and long states in lateral and medial prefrontal cortex. State boundaries are shared within and between groups of brain regions that resemble well-known functional networks. Perceived event boundaries overlap with neural state boundaries across large parts of the cortical hierarchy, particularly when those state boundaries demarcate a strong transition or are shared between brain regions. Taken together, these findings suggest that a partially nested cortical hierarchy of neural states forms the basis of event segmentation.
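The boundary idea can be illustrated with a far cruder procedure than the authors' data-driven state-segmentation algorithm: flag timepoints whose spatial activity pattern correlates poorly with the immediately preceding pattern. The simulated data, region size, and number of boundaries below are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # simulate one region's voxels-by-time data with three underlying neural states
    states = [rng.normal(size=50) for _ in range(3)]
    data = np.column_stack([states[t // 40] + 0.5 * rng.normal(size=50) for t in range(120)])

    # z-score each timepoint's spatial pattern, then correlate adjacent timepoints
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    adjacent_r = np.array([np.mean(z[:, t] * z[:, t - 1]) for t in range(1, data.shape[1])])

    # the two deepest dips in pattern similarity are the putative state boundaries
    boundaries = np.sort(np.argsort(adjacent_r)[:2] + 1)
    print(boundaries)  # expected near timepoints 40 and 80, where the simulated state changes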
Collapse
Affiliation(s)
- Linda Geerligs
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
| | - Dora Gözükara
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
| | - Djamari Oetringer
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
| | | | - Marcel van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
| | - Umut Güçlü
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
| |
Collapse
|
22
|
Abstract
The nematode worm Caenorhabditis elegans has a relatively simple nervous system that lends itself to the analysis of information transmission from sensory organ to muscle fiber. Consequently, this study presents an example neural circuit from the worm and a procedure for measuring its information optimality using a logic gate model. This approach is useful where its assumptions hold for a given neural circuit, and also for choosing between competing mathematical hypotheses about the circuit's function. In the latter case, the logic gate model can estimate computational complexity and identify which of the candidate models requires fewer computations. In addition, the concept of information optimality is generalized to other biological systems, along with an extended discussion of its role in genetic-based pathways of organisms.
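As a generic illustration of the logic-gate framing (a toy circuit invented here, not the C. elegans circuit analysed in the article), two candidate gate models of a two-sensor pathway can be checked against an observed input-output table and compared by the number of binary operations each requires.

    # two hypothetical gate models of a two-sensor pathway driving one muscle output
    def model_a(s1, s2):            # a single OR gate: 1 operation
        return s1 or s2

    def model_b(s1, s2):            # OR followed by a gating AND: 2 operations
        return (s1 or s2) and s1

    # observed input-output table for the (invented) circuit
    observed = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

    for name, model, n_ops in [("A", model_a, 1), ("B", model_b, 2)]:
        fits = all(model(*inputs) == output for inputs, output in observed.items())
        print(f"model {name}: fits data = {fits}, gate operations = {n_ops}")
    # when several models fit, the one needing fewer operations is preferred on complexity grounds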
Collapse
|
23
|
Balgova E, Diveica V, Walbrin J, Binney RJ. The role of the ventrolateral anterior temporal lobes in social cognition. Hum Brain Mapp 2022; 43:4589-4608. [PMID: 35716023 PMCID: PMC9491293 DOI: 10.1002/hbm.25976] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 05/27/2022] [Accepted: 05/28/2022] [Indexed: 11/24/2022] Open
Abstract
A key challenge for neurobiological models of social cognition is to elucidate whether brain regions are specialised for that domain. In recent years, discussion surrounding the role of anterior temporal regions epitomises such debates; some argue the anterior temporal lobe (ATL) is part of a domain‐specific network for social processing, while others claim it comprises a domain‐general hub for semantic representation. In the present study, we used ATL‐optimised fMRI to map the contribution of different ATL structures to a variety of paradigms frequently used to probe a crucial social ability, namely ‘theory of mind’ (ToM). Using multiple tasks enables a clearer attribution of activation to ToM as opposed to idiosyncratic features of stimuli. Further, we directly explored whether these same structures are also activated by a non‐social task probing semantic representations. We revealed that common to all of the tasks was activation of a key ventrolateral ATL region that is often invisible to standard fMRI. This constitutes novel evidence in support of the view that the ventrolateral ATL contributes to social cognition via a domain‐general role in semantic processing and against claims of a specialised social function.
Collapse
Affiliation(s)
- Eva Balgova
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
| | - Veronica Diveica
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
| | - Jon Walbrin
- Faculdade de Psicologia e de Ciências da Educação, Universidade de Coimbra, Portugal
| | - Richard J Binney
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
| |
Collapse
|
24
|
Abstract
Visual representations of bodies, in addition to those of faces, contribute to the recognition of con- and heterospecifics, to action recognition, and to nonverbal communication. Despite its importance, the neural basis of the visual analysis of bodies has been less studied than that of faces. In this article, I review what is known about the neural processing of bodies, focusing on the macaque temporal visual cortex. Early single-unit recording work suggested that the temporal visual cortex contains representations of body parts and bodies, with the dorsal bank of the superior temporal sulcus representing bodily actions. Subsequent functional magnetic resonance imaging studies in both humans and monkeys showed several temporal cortical regions that are strongly activated by bodies. Single-unit recordings in the macaque body patches suggest that these represent mainly body shape features. More anterior patches show a greater viewpoint-tolerant selectivity for body features, which may reflect a processing principle shared with other object categories, including faces.
Collapse
Affiliation(s)
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
| |
Collapse
|
25
|
Isella V, Rosazza C, Ferri F, Gazzotti M, Impagnatiello V, Mapelli C, Morzenti S, Crivellaro C, Appollonio IM, Ferrarese C. Learning From Mistakes: Cognitive and Metabolic Correlates of Errors on Picture Naming in the Alzheimer’s Disease Spectrum. J Alzheimers Dis 2022; 87:1033-1053. [DOI: 10.3233/jad-220053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Background: Subtypes of picture naming errors produced by patients with Alzheimer’s disease (AD) have seldom been investigated, yet they may clarify the cognitive and neural underpinnings of naming in the AD spectrum. Objective: To elucidate the neurocognitive bases of picture naming in AD through a qualitative analysis of errors. Methods: Over 1000 naming errors produced by 70 patients with amnestic, visuospatial, linguistic, or frontal AD were correlated with general cognitive tests and with the distribution of hypometabolism on FDG-PET. Results: Principal component analysis identified 1) a Visual processing factor clustering visuospatial tests and unrecognized stimuli, pure visual errors and visual-semantic errors, associated with right parieto-occipital hypometabolism; 2) a Concept-Lemma factor grouping language tests and anomias, circumlocutions, superordinates, and coordinates, correlated with left basal temporal hypometabolism; 3) a Lemma-Phonology factor including the digit span and phonological errors, linked with left temporo-parietal hypometabolism. Regression of brain metabolism on individual errors showed that errors due to impairment of basic and higher-order processing of object visual attributes, or of their interaction with semantics, were related to bilateral occipital and left occipito-temporal dysfunction. Omissions and superordinates were linked to degradation of broad and basic concepts in the left basal temporal cortex. Semantic-lexical errors derived from faulty semantically- and phonologically-driven lexical retrieval in the left superior and middle temporal gyri. Generation of nonwords was underpinned by phonological impairment within the left inferior parietal cortex. Conclusion: Analysis of individual naming errors allowed us to outline a comprehensive anatomo-functional model of picture naming in classical and atypical AD.
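The factor solution described in the Results can be reproduced in outline with an ordinary principal component analysis over per-patient scores; the measure names, loadings, and simulated numbers below are placeholders rather than the study's data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_patients = 70
    # hypothetical per-patient measures: visuospatial test, visual errors,
    # language test, anomias, digit span, phonological errors
    latent = rng.normal(size=(n_patients, 3)) * [1.5, 1.0, 0.7]   # three underlying factors
    loadings = np.array([[1, 0, 0], [1, 0, 0],      # visual-processing cluster
                         [0, 1, 0], [0, 1, 0],      # concept-lemma cluster
                         [0, 0, 1], [0, 0, 1]])     # lemma-phonology cluster
    scores = latent @ loadings.T + 0.3 * rng.normal(size=(n_patients, 6))

    pca = PCA(n_components=3).fit(StandardScaler().fit_transform(scores))
    print(pca.explained_variance_ratio_.round(2))
    print(pca.components_.round(2))   # shows which of the six measures load together on each component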
Collapse
Affiliation(s)
- Valeria Isella
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Cristina Rosazza
- Dipartimento di Studi Umanistici (DISTUM), Università degli Studi di Urbino Carlo Bo, Urbino, Italy
- Neuroradiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
| | - Francesca Ferri
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Maria Gazzotti
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
| | | | - Cristina Mapelli
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Sabrina Morzenti
- Medical Physics, S. Gerardo Hospital, Monza, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Cinzia Crivellaro
- Nuclear Medicine, S. Gerardo Hospital, Monza, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Ildebrando M. Appollonio
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| | - Carlo Ferrarese
- Department of Neurology, S. Gerardo Hospital, Monza, University of Milano - Bicocca, Italy
- NeuroMI, University of Milano - Bicocca, Italy
| |
Collapse
|
26
|
Abraham A. How We Tell Apart Fiction from Reality. Am J Psychol 2022. [DOI: 10.5406/19398298.135.1.01] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The human ability to tell apart reality from fiction is intriguing. Through a range of media, such as novels and movies, we are able to readily engage in fictional worlds and experience alternative realities. Yet even when we are completely immersed and emotionally engaged within these worlds, we have little difficulty in leaving the fictional landscapes and getting back to the day-to-day of our own world. How are we able to do this? How do we acquire our understanding of our real world? How is this similar to and different from the development of our knowledge of fictional worlds? In exploring these questions, this article makes the case for a novel multilevel explanation (called BLINCS) of our implicit understanding of the reality–fiction distinction, namely that it is derived from the fact that the worlds of fiction, relative to reality, are bounded, inference-light, curated, and sparse.
Collapse
|
27
|
Skocypec RM, Peterson MA. Semantic Expectation Effects on Object Detection: Using Figure Assignment to Elucidate Mechanisms. Vision (Basel) 2022; 6:vision6010019. [PMID: 35324604 PMCID: PMC8953613 DOI: 10.3390/vision6010019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 03/02/2022] [Accepted: 03/15/2022] [Indexed: 11/16/2022] Open
Abstract
Recent evidence suggesting that object detection is improved following valid rather than invalid labels implies that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only for 90 or 100 ms. Familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. Orientation dependency indicates that effects are mediated by activated object representations rather than features which are invariant over orientation. Following invalid SSC labels (Study 2), accurate detection RTs were longer than control for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics, it entails semantics.
Collapse
Affiliation(s)
- Rachel M. Skocypec
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
| | - Mary A. Peterson
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
| |
Collapse
|
28
|
Perceptual and Semantic Representations at Encoding Contribute to True and False Recognition of Objects. J Neurosci 2021; 41:8375-8389. [PMID: 34413205 DOI: 10.1523/jneurosci.0677-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 06/30/2021] [Accepted: 07/28/2021] [Indexed: 11/21/2022] Open
Abstract
When encoding new episodic memories, visual and semantic processing is proposed to make distinct contributions to accurate memory and memory distortions. Here, we used fMRI and preregistered representational similarity analysis to uncover the representations that predict true and false recognition of unfamiliar objects. Two semantic models captured coarse-grained taxonomic categories and specific object features, respectively, while two perceptual models embodied low-level visual properties. Twenty-eight female and male participants encoded images of objects during fMRI scanning, and later had to discriminate studied objects from similar lures and novel objects in a recognition memory test. Both perceptual and semantic models predicted true memory. When studied objects were later identified correctly, neural patterns corresponded to low-level visual representations of these object images in the early visual cortex, lingual, and fusiform gyri. In a similar fashion, alignment of neural patterns with fine-grained semantic feature representations in the fusiform gyrus also predicted true recognition. However, emphasis on coarser taxonomic representations predicted forgetting more anteriorly in the anterior ventral temporal cortex, left inferior frontal gyrus and, in an exploratory analysis, left perirhinal cortex. In contrast, false recognition of similar lure objects was associated with weaker visual analysis posteriorly in early visual and left occipitotemporal cortex. The results implicate multiple perceptual and semantic representations in successful memory encoding and suggest that fine-grained semantic as well as visual analysis contributes to accurate later recognition, while processing visual image detail is critical for avoiding false recognition errors. SIGNIFICANCE STATEMENT People are able to store detailed memories of many similar objects. We offer new insights into the encoding of these specific memories by combining fMRI with explicit models of how image properties and object knowledge are represented in the brain. When people processed fine-grained visual properties in occipital and posterior temporal cortex, they were more likely to recognize the objects later and less likely to falsely recognize similar objects. In contrast, while object-specific feature representations in fusiform gyrus predicted accurate memory, coarse-grained categorical representations in frontal and temporal regions predicted forgetting. The data provide the first direct tests of theoretical assumptions about encoding true and false memories, suggesting that semantic representations contribute to specific memories as well as errors.
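At its core, this kind of analysis compares a neural representational dissimilarity matrix (RDM) with competing model RDMs. A bare-bones sketch follows, with random arrays standing in for the encoding-phase fMRI patterns, the semantic feature norms, and the visual descriptors; with random inputs the correlations hover around zero, and a real analysis would substitute measured data.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_items = 40

    neural_patterns = rng.normal(size=(n_items, 200))     # one voxel pattern per encoded object
    semantic_features = rng.normal(size=(n_items, 64))    # e.g., object feature norms
    visual_features = rng.normal(size=(n_items, 128))     # e.g., early DNN-layer activations

    # representational dissimilarity matrices (condensed upper triangles)
    neural_rdm = pdist(neural_patterns, metric="correlation")
    model_rdms = {
        "semantic": pdist(semantic_features, metric="correlation"),
        "visual": pdist(visual_features, metric="correlation"),
    }

    # rank-correlate each model RDM with the neural RDM
    for name, rdm in model_rdms.items():
        rho, p = spearmanr(neural_rdm, rdm)
        print(f"{name} model: rho = {rho:.3f}, p = {p:.3f}")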
Collapse
|
29
|
Deng Y, Wang Y, Qiu C, Hu Z, Sun W, Gong Y, Zhao X, He W, Cao L. A Chinese Conceptual Semantic Feature Dataset (CCFD). Behav Res Methods 2021; 53:1697-1709. [PMID: 33532892 DOI: 10.3758/s13428-020-01525-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/12/2020] [Indexed: 11/08/2022]
Abstract
Memory and language are important high-level cognitive functions of humans, and the study of conceptual representation in the human brain is a key approach to revealing the principles of cognition. However, this research is often constrained by the availability of stimulus materials. Research on concept representation often relies on a standardized, large-scale database of conceptual semantic features. Although Western scholars have established a variety of English conceptual semantic feature datasets, there is still a lack of a comprehensive Chinese version. In the present study, a Chinese Conceptual semantic Feature Dataset (CCFD) was established, comprising 1,410 concepts together with their semantic features and pairwise inter-concept similarities. The concepts were manually grouped into 28 subordinate categories and seven superordinate categories. The results showed that concepts within the same category were more similar to each other, while concepts from different categories were more distant. The CCFD proposed in this study can provide stimulus materials and data support for related research fields. All the data and supplementary materials can be found at https://osf.io/ug5dt/.
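Inter-concept similarity in a feature dataset of this kind is typically computed from each concept's feature vector. The sketch below uses a handful of invented binary features rather than CCFD entries.

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # invented binary features; a real feature dataset lists verified properties per concept
    concepts = {
        "dog":    [1, 1, 0, 0, 0],   # has_fur, barks, has_wheels, is_edible, grows_on_trees
        "cat":    [1, 0, 0, 0, 0],
        "car":    [0, 0, 1, 0, 0],
        "apple":  [0, 0, 0, 1, 1],
        "banana": [0, 0, 0, 1, 1],
    }

    names = list(concepts)
    sim = cosine_similarity(np.array([concepts[c] for c in names]))

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            print(f"{names[i]:>6} - {names[j]:<6} {sim[i, j]:.2f}")
    # within-category pairs (dog-cat, apple-banana) score higher than cross-category pairs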
Collapse
Affiliation(s)
- Yaling Deng
- State Key Laboratory of Media Convergence and Communication, Communication University of China, No.1 of Dingfuzhuang East Street, Chaoyang District, Beijing, China.
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, 100024, China.
| | - Ye Wang
- State Key Laboratory of Media Convergence and Communication, Communication University of China, No.1 of Dingfuzhuang East Street, Chaoyang District, Beijing, China
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, 100024, China
| | - Chenyang Qiu
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, 100024, China
| | - Zhenchao Hu
- TV School, Communication University of China, Beijing, 100024, China
| | - Wenyang Sun
- Animation and Digital Arts school, Communication University of China, Beijing, 100024, China
| | - Yanzhu Gong
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, 100024, China
| | - Xue Zhao
- College of Humanities, Communication University of China, Beijing, 100024, China
| | - Wei He
- College of Humanities, Communication University of China, Beijing, 100024, China
| | - Lihong Cao
- State Key Laboratory of Media Convergence and Communication, Communication University of China, No.1 of Dingfuzhuang East Street, Chaoyang District, Beijing, China.
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, 100024, China.
- State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi, 214125, China.
| |
Collapse
|
30
|
Asyraff A, Lemarchand R, Tamm A, Hoffman P. Stimulus-independent neural coding of event semantics: Evidence from cross-sentence fMRI decoding. Neuroimage 2021; 236:118073. [PMID: 33878380 PMCID: PMC8270886 DOI: 10.1016/j.neuroimage.2021.118073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 04/06/2021] [Accepted: 04/11/2021] [Indexed: 11/25/2022] Open
Abstract
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
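The generalisation test amounts to holding out entire sentence versions during training, so a classifier can only succeed by learning the described events rather than the particular sentences. A schematic scikit-learn version with simulated voxel patterns follows; all dimensions, labels, and the noise level are arbitrary.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(2)
    n_events, n_versions, n_voxels = 4, 6, 150

    # each event has a voxel signature; each sentence version adds its own noise
    templates = rng.normal(size=(n_events, n_voxels))
    X, y, version = [], [], []
    for ev in range(n_events):
        for v in range(n_versions):
            X.append(templates[ev] + rng.normal(size=n_voxels))
            y.append(ev)
            version.append(v)                  # which sentence version described the event
    X, y, version = np.array(X), np.array(y), np.array(version)

    # hold out whole sentence versions, so test sentences are never seen during training
    cv = GroupKFold(n_splits=3)
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, groups=version, cv=cv)
    print(scores.mean())   # accuracy above the 0.25 chance level = event coding generalises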
Collapse
Affiliation(s)
- Aliff Asyraff
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Rafael Lemarchand
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Andres Tamm
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
| | - Paul Hoffman
- School of Philosophy, Psychology & Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK.
| |
Collapse
|
31
|
Fiorilli J, Bos JJ, Grande X, Lim J, Düzel E, Pennartz CMA. Reconciling the object and spatial processing views of the perirhinal cortex through task-relevant unitization. Hippocampus 2021; 31:737-755. [PMID: 33523577 PMCID: PMC8359385 DOI: 10.1002/hipo.23304] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 11/27/2020] [Accepted: 01/02/2021] [Indexed: 12/21/2022]
Abstract
The perirhinal cortex is situated on the border between sensory association cortex and the hippocampal formation. It serves an important function as a transition area between the sensory neocortex and the medial temporal lobe. While the perirhinal cortex has traditionally been associated with object coding and the "what" pathway of the temporal lobe, current evidence suggests a broader function of the perirhinal cortex in solving feature ambiguity and processing complex stimuli. Besides fulfilling functions in object coding, recent neurophysiological findings in freely moving rodents indicate that the perirhinal cortex also contributes to spatial and contextual processing beyond individual sensory modalities. Here, we address how these two opposing views on the perirhinal cortex (the object-centered and the spatial-contextual processing hypotheses) may be reconciled. The perirhinal cortex is consistently recruited when different features can be merged perceptually or conceptually into a single entity. Features that are unitized in these entities include object information from multiple sensory domains, reward associations, semantic features and spatial/contextual associations. We propose that the same perirhinal network circuits can be flexibly deployed for multiple cognitive functions, such that the perirhinal cortex performs similar unitization operations on different types of information, depending on behavioral demands and ranging from the object-related domain to spatial, contextual and semantic information.
Collapse
Affiliation(s)
- Julien Fiorilli
- Cognitive and Systems Neuroscience Group, SILS Center for Neuroscience, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
| | - Jeroen J. Bos
- Cognitive and Systems Neuroscience Group, SILS Center for Neuroscience, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Donders Institute for Brain, Cognition and Behavior, Radboud University and Radboud University Medical Centre, Nijmegen, The Netherlands
| | - Xenia Grande
- Institute of Cognitive Neurology and Dementia Research, Otto‐von‐Guericke University Magdeburg, Magdeburg, Germany
- German Center for Neurodegenerative Diseases, Magdeburg, Germany
| | - Judith Lim
- Cognitive and Systems Neuroscience Group, SILS Center for Neuroscience, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
| | - Emrah Düzel
- Institute of Cognitive Neurology and Dementia Research, Otto‐von‐Guericke University Magdeburg, Magdeburg, Germany
- German Center for Neurodegenerative Diseases, Magdeburg, Germany
- Institute of Cognitive Neuroscience, University College London, London, UK
| | - Cyriel M. A. Pennartz
- Cognitive and Systems Neuroscience Group, SILS Center for Neuroscience, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
32
|
Borghesani V, Dale CL, Lukic S, Hinkley LBN, Lauricella M, Shwe W, Mizuiri D, Honma S, Miller Z, Miller B, Houde JF, Gorno-Tempini ML, Nagarajan SS. Neural dynamics of semantic categorization in semantic variant of primary progressive aphasia. eLife 2021; 10:e63905. [PMID: 34155973 PMCID: PMC8241439 DOI: 10.7554/elife.63905] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Accepted: 06/21/2021] [Indexed: 12/28/2022] Open
Abstract
Semantic representations are processed along a posterior-to-anterior gradient reflecting a shift from perceptual (e.g., it has eight legs) to conceptual (e.g., venomous spiders are rare) information. One critical region is the anterior temporal lobe (ATL): patients with semantic variant primary progressive aphasia (svPPA), a clinical syndrome associated with ATL neurodegeneration, manifest a deep loss of semantic knowledge. We test the hypothesis that svPPA patients perform semantic tasks by over-recruiting areas implicated in perceptual processing. We compared MEG recordings of svPPA patients and healthy controls during a categorization task. While behavioral performance did not differ, svPPA patients showed indications of greater activation over bilateral occipital cortices and superior temporal gyrus, and inconsistent engagement of frontal regions. These findings suggest a pervasive reorganization of brain networks in response to ATL neurodegeneration: the loss of this critical hub leads to a dysregulated (semantic) control system, and defective semantic representations are seemingly compensated via enhanced perceptual processing.
Collapse
Affiliation(s)
- V Borghesani
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - CL Dale
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, United States
| | - S Lukic
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - LBN Hinkley
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, United States
| | - M Lauricella
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - W Shwe
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - D Mizuiri
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, United States
| | - S Honma
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, United States
| | - Z Miller
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - B Miller
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
| | - JF Houde
- Department of Otolaryngology, University of California, San Francisco, San Francisco, United States
| | - ML Gorno-Tempini
- Memory and Aging Center, Department of Neurology, University of California, San Francisco, San Francisco, United States
- Department of Neurology, Dyslexia Center, University of California, San Francisco, San Francisco, United States
| | - SS Nagarajan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology, University of California, San Francisco, San Francisco, United States
| |
Collapse
|
33
|
The visual and semantic features that predict object memory: Concept property norms for 1,000 object images. Mem Cognit 2021; 49:712-731. [PMID: 33469881 PMCID: PMC8081674 DOI: 10.3758/s13421-020-01130-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/14/2020] [Indexed: 11/08/2022]
Abstract
Humans have a remarkable fidelity for visual long-term memory, and yet the composition of these memories is a longstanding debate in cognitive psychology. While much of the work on long-term memory has focused on processes associated with successful encoding and retrieval, more recent work on visual object recognition has developed a focus on the memorability of specific visual stimuli. Such work is engendering a view of object representation as a hierarchical movement from low-level visual representations to higher level categorical organization of conceptual representations. However, studies on object recognition often fail to account for how these high- and low-level features interact to promote distinct forms of memory. Here, we use both visual and semantic factors to investigate their relative contributions to two different forms of memory of everyday objects. We first collected normative visual and semantic feature information on 1,000 object images. We then conducted a memory study where we presented these same images during encoding (picture target) on Day 1, and then either a Lexical (lexical cue) or Visual (picture cue) memory test on Day 2. Our findings indicate that: (1) higher level visual factors (via DNNs) and semantic factors (via feature-based statistics) make independent contributions to object memory, (2) semantic information contributes to both true and false memory performance, and (3) factors that predict object memory depend on the type of memory being tested. These findings help to provide a more complete picture of what factors influence object memorability. These data are available online upon publication as a public resource.
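Testing whether visual (DNN-derived) and semantic (feature-norm) factors make independent contributions can be framed as comparing nested item-level regression models. The sketch below uses simulated predictors and hit rates, not the normative data collected here.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_objects = 1000

    dnn_score = rng.normal(size=(n_objects, 1))        # e.g., distinctiveness in a DNN layer
    semantic_score = rng.normal(size=(n_objects, 1))   # e.g., feature sharedness from norms
    # simulated memorability with independent visual and semantic contributions
    hit_rate = (0.3 * dnn_score + 0.4 * semantic_score
                + 0.5 * rng.normal(size=(n_objects, 1))).ravel()

    def cv_r2(X):
        return cross_val_score(LinearRegression(), X, hit_rate, cv=5, scoring="r2").mean()

    print("visual only  :", round(cv_r2(dnn_score), 3))
    print("semantic only:", round(cv_r2(semantic_score), 3))
    print("combined     :", round(cv_r2(np.hstack([dnn_score, semantic_score])), 3))
    # a gain for the combined model over either alone indicates independent contributions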
Collapse
|
34
|
Friedman R. Themes of advanced information processing in the primate brain. AIMS Neurosci 2020; 7:373-388. [PMID: 33263076 PMCID: PMC7701368 DOI: 10.3934/neuroscience.2020023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 10/09/2020] [Indexed: 11/30/2022] Open
Abstract
Here is a review of several empirical examples of information processing that occur in the primate cerebral cortex. These include visual processing, object identification and perception, information encoding, and memory. There is also a discussion of higher-scale neural organization, largely theoretical, which suggests hypotheses about how the brain internally represents objects. Altogether, these examples support general attributes of the mechanisms of brain computation, such as efficiency, resiliency, data compression, and modularization of neural functions and their pathways. Moreover, the specific neural encoding schemes are expected to be stochastic, abstract, and not easily decoded by theoretical or empirical approaches.
Collapse
Affiliation(s)
- Robert Friedman
- Department of Biological Sciences, University of South Carolina, Columbia 29208, USA
| |
Collapse
|
35
|
Theta rhythm supports hippocampus-dependent integrative encoding in schematic/semantic memory networks. Neuroimage 2020; 226:117558. [PMID: 33246130 DOI: 10.1016/j.neuroimage.2020.117558] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 11/06/2020] [Accepted: 11/08/2020] [Indexed: 11/23/2022] Open
Abstract
Integrating new information into existing schematic/semantic structures of knowledge is the basis of learning in our everyday life as it enables structured representation of information and goal-directed behaviour in an ever-changing environment. However, how schematic/semantic mnemonic structures aid the integration of novel elements remains poorly understood. Here, we showed that the ability to integrate novel picture information into learned structures of picture associations that overlapped through the same picture scene (i.e., simple network) or through a conceptually related picture scene (i.e., schematic/semantic network) is hippocampus-dependent, as patients with lesions of the medial temporal lobe (including the hippocampus) were impaired in inferring novel relations between pictures within these memory networks. We also found more persistent and widespread scalp EEG theta oscillations (3-5 Hz) when participants integrated novel pictures into schematic/semantic memory networks than when they integrated them into simple networks. On the other hand, greater neural similarity was observed between EEG patterns elicited by novel and related events within simple networks than between novel and related events within schematic/semantic memory networks. These findings have important implications for our understanding of the neural mechanisms that support the development and organization of structures of knowledge.
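The theta effect rests on standard band-limited power estimation. A generic scipy sketch on a synthetic single-channel signal follows; the sampling rate, band edges, and burst timing are arbitrary choices, not the study's EEG pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250                                    # sampling rate (Hz)
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(4)

    # synthetic single-channel EEG: a 4 Hz theta burst appears in the second half
    eeg = rng.normal(size=t.size)
    eeg[t >= 2] += 2.0 * np.sin(2 * np.pi * 4 * t[t >= 2])

    # band-pass 3-5 Hz, then take the squared Hilbert envelope as instantaneous theta power
    b, a = butter(4, [3, 5], btype="bandpass", fs=fs)
    theta_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

    print(theta_power[t < 2].mean(), theta_power[t >= 2].mean())   # power rises with the burst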
Collapse
|
36
|
Abstract
Meaning has traditionally been regarded as a problem for philosophers and psychologists. Advances in cognitive science since the early 1960s, however, broadened discussions of meaning, or more technically, the semantics of perceptions, representations, and/or actions, into biology and computer science. Here, we review the notion of “meaning” as it applies to living systems, and argue that the question of how living systems create meaning unifies the biological and cognitive sciences across both organizational and temporal scales.
Collapse
|
37
|
Monk AM, Barnes GR, Maguire EA. The Effect of Object Type on Building Scene Imagery-an MEG Study. Front Hum Neurosci 2020; 14:592175. [PMID: 33240069 PMCID: PMC7683518 DOI: 10.3389/fnhum.2020.592175] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 10/09/2020] [Indexed: 12/28/2022] Open
Abstract
Previous studies have reported that some objects evoke a sense of local three-dimensional space (space-defining; SD), while others do not (space-ambiguous; SA), despite being imagined or viewed in isolation devoid of a background context. Moreover, people show a strong preference for SD objects when given a choice of objects with which to mentally construct scene imagery. When deconstructing scenes, people retain significantly more SD objects than SA objects. It, therefore, seems that SD objects might enjoy a privileged role in scene construction. In the current study, we leveraged the high temporal resolution of magnetoencephalography (MEG) to compare the neural responses to SD and SA objects while they were being used to build imagined scene representations, as this has not been examined before using neuroimaging. On each trial, participants gradually built a scene image from three successive auditorily-presented object descriptions and an imagined 3D space. We then examined the neural dynamics associated with the points during scene construction when either SD or SA objects were being imagined. We found that SD objects elicited theta changes relative to SA objects in two brain regions, the right ventromedial prefrontal cortex (vmPFC) and the right superior temporal gyrus (STG). Furthermore, using dynamic causal modeling, we observed that the vmPFC drove STG activity. These findings may indicate that SD objects serve to activate schematic and conceptual knowledge in vmPFC and STG upon which scene representations are then built.
Collapse
Affiliation(s)
- Anna M Monk
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Gareth R Barnes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| |
Collapse
|
38
|
Vision at a glance: The role of attention in processing object-to-object categorical relations. Atten Percept Psychophys 2020; 82:671-688. [PMID: 31907840 DOI: 10.3758/s13414-019-01940-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When viewing a scene at a glance, the visual and categorical relations between objects in the scene are extracted rapidly. In the present study, the involvement of spatial attention in the processing of such relations was investigated. Participants performed a category detection task (e.g., "is there an animal") on briefly flashed object pairs. In one condition, visual attention spanned both stimuli, and in another, attention was focused on a single object while its counterpart object served as a task-irrelevant distractor. The results showed that when participants attended to both objects, a categorical relation effect was obtained (Exp. 1). Namely, latencies were shorter to objects from the same category than to those from different superordinate categories (e.g., clothes, vehicles), even if categories were not prioritized by the task demands. Focusing attention on only one of two stimuli, however, largely eliminated this effect (Exp. 2). Some relational processing was seen when categories were narrowed to the basic level and were highly distinct from each other (Exp. 3), implying that categorical relational processing necessitates attention, unless the unattended input is highly predictable. Critically, when a prioritized (to-be-detected) object category, positioned in a distractor's location, differed from an attended object, a robust distraction effect was consistently observed, regardless of category homogeneity and/or of response conflict factors (Exp. 4). This finding suggests that object relations that involve stimuli that are highly relevant to the task settings may survive attentional deprivation at the distractor location. The involvement of spatial attention in object-to-object categorical processing is most critical in situations that include wide categories that are irrelevant to one's current goals.
Collapse
|
39
|
Yılmaz Ö, Çelik E, Çukur T. Informed feature regularization in voxelwise modeling for naturalistic fMRI experiments. Eur J Neurosci 2020; 52:3394-3410. [PMID: 32343012 PMCID: PMC9748846 DOI: 10.1111/ejn.14760] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 03/18/2020] [Accepted: 04/21/2020] [Indexed: 12/16/2022]
Abstract
Voxelwise modeling (VM) is a powerful framework for predicting single-voxel functional selectivity for the stimulus features present in complex natural stimuli. Yet, because VM disregards potential correlations across stimulus features or neighboring voxels, it may yield suboptimal sensitivity in measuring functional selectivity in the presence of high levels of measurement noise. Here, we introduce a novel voxelwise modeling approach that simultaneously utilizes stimulus correlations in model features and response correlations among voxel neighborhoods. The proposed method performs feature and spatial regularization while still generating single-voxel response predictions. We demonstrated the performance of our approach on a functional magnetic resonance imaging dataset from a natural vision experiment. Compared to VM, the proposed method yields clear improvements in prediction performance, together with increased feature coherence and spatial coherence of voxelwise models. Overall, the proposed method can offer improved sensitivity in modeling of single voxels in naturalistic functional magnetic resonance imaging experiments.
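Conventional voxelwise modeling, the baseline the paper improves on, is essentially a per-voxel regularized regression from stimulus features to responses, evaluated by held-out prediction accuracy. A minimal ridge-regression sketch with simulated data follows; the feature counts, noise level, and the single shared alpha are assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n_trs, n_features, n_voxels = 400, 20, 50

    features = rng.normal(size=(n_trs, n_features))            # stimulus feature time courses
    true_weights = rng.normal(size=(n_features, n_voxels))     # each voxel's feature tuning
    bold = features @ true_weights + 2.0 * rng.normal(size=(n_trs, n_voxels))

    X_tr, X_te, y_tr, y_te = train_test_split(features, bold, test_size=0.25, random_state=0)

    model = Ridge(alpha=10.0).fit(X_tr, y_tr)                  # one weight vector per voxel
    pred = model.predict(X_te)

    # per-voxel prediction accuracy = correlation between predicted and measured responses
    r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
    print(round(float(np.mean(r)), 2))

The paper's contribution, loosely, is to replace that single generic penalty with penalties informed by correlations among model features and among neighbouring voxels' responses.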
Collapse
Affiliation(s)
- Özgür Yılmaz
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey
| | - Emin Çelik
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
| | - Tolga Çukur
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey; Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
| |
Collapse
|
40
|
The Influence of Object-Color Knowledge on Emerging Object Representations in the Brain. J Neurosci 2020; 40:6779-6789. [PMID: 32703903 DOI: 10.1523/jneurosci.0158-20.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 07/13/2020] [Accepted: 07/15/2020] [Indexed: 11/21/2022] Open
Abstract
The ability to rapidly and accurately recognize complex objects is a crucial function of the human visual system. To recognize an object, we need to bind incoming visual features, such as color and form, together into cohesive neural representations and integrate these with our preexisting knowledge about the world. For some objects, typical color is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (MEG) data to examine how object-color knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-color combinations influences object representations, although not at the initial stages of object and color processing. We find evidence that color decoding peaks later for atypical object-color combinations compared with typical object-color combinations, illustrating the interplay between processing incoming object features and stored object knowledge. Together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge. SIGNIFICANCE STATEMENT To recognize objects, we have to be able to bind object features, such as color and shape, into one coherent representation and compare it with stored object knowledge. The MEG data presented here provide novel insights about the integration of incoming visual information with our knowledge about the world. Using color as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently colored objects (e.g., a yellow banana) relative to incongruently colored objects (e.g., a red banana). This effect of object-color knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
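Time-resolved decoding of this kind fits a separate classifier at every timepoint of the epoch (MNE-Python's SlidingEstimator wraps the same pattern, but a plain loop keeps the logic explicit). The simulated sensor data, effect size, and accuracy threshold below are arbitrary assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    n_trials, n_sensors, n_times = 200, 30, 60
    y = rng.integers(0, 2, size=n_trials)          # e.g., typical vs atypical object-colour pairing

    X = rng.normal(size=(n_trials, n_sensors, n_times))
    X[y == 1, :10, 30:] += 0.5                      # inject a decodable signal from timepoint 30 on

    accuracy = np.empty(n_times)
    for t in range(n_times):
        accuracy[t] = cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()

    print(int(np.argmax(accuracy > 0.6)))           # first clearly above-chance timepoint (should fall near 30 here)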
Collapse
|
41
|
Clarke A. Dynamic activity patterns in the anterior temporal lobe represents object semantics. Cogn Neurosci 2020; 11:111-121. [PMID: 32249714 PMCID: PMC7446031 DOI: 10.1080/17588928.2020.1742678] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Revised: 02/07/2020] [Indexed: 02/07/2023]
Abstract
The anterior temporal lobe (ATL) is considered a crucial area for the representation of transmodal concepts. Recent evidence suggests that specific regions within the ATL support the representation of individual object concepts, as shown by studies combining multivariate analysis methods and explicit measures of semantic knowledge. This research looks to further our understanding by probing conceptual representations at a spatially and temporally resolved neural scale. Representational similarity analysis was applied to human intracranial recordings from anatomically defined lateral to medial ATL sub-regions. Neural similarity patterns were tested against semantic similarity measures, where semantic similarity was defined by a hybrid corpus-based and feature-based approach. Analyses show that the perirhinal cortex, in the medial ATL, exhibited significant semantic effects around 200 to 400 ms, which were greater than those in more lateral ATL regions. Further, semantic effects were present in low-frequency (theta and alpha) oscillatory phase signals. These results provide converging evidence that more medial regions of the ATL support the representation of basic-level visual object concepts within the first 400 ms, and provide a bridge between prior fMRI and MEG work by offering detailed evidence for the presence of conceptual representations within the ATL.
Collapse
Affiliation(s)
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK
| |
Collapse
|
42
|
Roles of Category, Shape, and Spatial Frequency in Shaping Animal and Tool Selectivity in the Occipitotemporal Cortex. J Neurosci 2020; 40:5644-5657. [PMID: 32527983 PMCID: PMC7363473 DOI: 10.1523/jneurosci.3064-19.2020] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2019] [Revised: 05/29/2020] [Accepted: 06/02/2020] [Indexed: 11/21/2022] Open
Abstract
Does the nature of representation in the category-selective regions in the occipitotemporal cortex reflect visual or conceptual properties? Previous research showed that natural variability in visual features across categories, quantified by image gist statistics, is highly correlated with the different neural responses observed in the occipitotemporal cortex. Using fMRI, we examined whether category selectivity for animals and tools would remain, when image gist statistics were comparable across categories. Critically, we investigated how category, shape, and spatial frequency may contribute to the category selectivity in the animal- and tool-selective regions. Female and male human observers viewed low- or high-passed images of round or elongated animals and tools that shared comparable gist statistics in the main experiment, and animal and tool images of naturally varied gist statistics in a separate localizer. Univariate analysis revealed robust category-selective responses for images with comparable gist statistics across categories. Successful classification for category (animals/tools), shape (round/elongated), and spatial frequency (low/high) was also observed, with highest classification accuracy for category. Representational similarity analyses further revealed that the activation patterns in the animal-selective regions were most correlated with a model that represents only animal information, whereas the activation patterns in the tool-selective regions were most correlated with a model that represents only tool information, suggesting that these regions selectively represent information of only animals or tools. Together, in addition to visual features, the distinction between animal and tool representations in the occipitotemporal cortex is likely shaped by higher-level conceptual influences such as categorization or interpretation of visual inputs. SIGNIFICANCE STATEMENT Since different categories often vary systematically in both visual and conceptual features, it remains unclear what kinds of information determine category-selective responses in the occipitotemporal cortex. To minimize the influences of low- and mid-level visual features, here we used a diverse image set of animals and tools that shared comparable gist statistics. We manipulated category (animals/tools), shape (round/elongated), and spatial frequency (low/high), and found that the representational content of the animal- and tool-selective regions is primarily determined by their preferred categories only, regardless of shape or spatial frequency. Our results show that category-selective responses in the occipitotemporal cortex are influenced by higher-level processing such as categorization or interpretation of visual inputs, and highlight the specificity in these category-selective regions.
Collapse
|
43
|
Student Teachers’ and Teacher Educators’ Professional Vision: Findings from an Eye Tracking Study. Educ Psychol Rev 2020. [DOI: 10.1007/s10648-020-09535-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
44
|
Duff MC, Covington NV, Hilverman C, Cohen NJ. Semantic Memory and the Hippocampus: Revisiting, Reaffirming, and Extending the Reach of Their Critical Relationship. Front Hum Neurosci 2020; 13:471. [PMID: 32038203 PMCID: PMC6993580 DOI: 10.3389/fnhum.2019.00471] [Citation(s) in RCA: 95] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Accepted: 12/23/2019] [Indexed: 11/22/2022] Open
Abstract
Since Tulving proposed a distinction in memory between semantic and episodic memory, considerable effort has been directed towards understanding their similar and unique features. Of particular interest has been the extent to which semantic and episodic memory have a shared dependence on the hippocampus. In contrast to the definitive evidence for the link between hippocampus and episodic memory, the role of the hippocampus in semantic memory has been a topic of considerable debate. This debate stems, in part, from highly variable reports of new semantic memory learning in amnesia ranging from profound impairment to full preservation, and various degrees of deficit and ability in between. More recently, a number of significant advances in experimental methods have occurred, alongside new provocative data on the role of the hippocampus in semantic memory, making this an ideal moment to revisit this debate, to re-evaluate data, methods, and theories, and to synthesize new findings. In line with these advances, this review has two primary goals. First, we provide a historical lens with which to reevaluate and contextualize the literature on semantic memory and the hippocampus. The second goal of this review is to provide a synthesis of new findings on the role of the hippocampus and semantic memory. With the perspective of time and this critical review, we arrive at the interpretation that the hippocampus does indeed make necessary contributions to semantic memory. We argue that semantic memory, like episodic memory, is a highly flexible, (re)constructive, relational and multimodal system, and that there is value in developing methods and materials that fully capture this depth and richness to facilitate comparisons to episodic memory. Such efforts will be critical in addressing questions regarding the cognitive and neural (inter)dependencies among forms of memory, and the role that these forms of memory play in support of cognition more broadly. Such efforts also promise to advance our understanding of how words, concepts, and meaning, as well as episodes and events, are instantiated and maintained in memory and will yield new insights into our two most quintessentially human abilities: memory and language.
Collapse
Affiliation(s)
- Melissa C Duff
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
| | - Natalie V Covington
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
| | - Caitlin Hilverman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
| | - Neal J Cohen
- Department of Psychology, Beckman Institute, University of Illinois, Champaign, IL, United States
| |
Collapse
|
45
|
Bruffaerts R, Schaeverbeke J, De Weer AS, Nelissen N, Dries E, Van Bouwel K, Sieben A, Bergmans B, Swinnen C, Pijnenburg Y, Sunaert S, Vandenbulcke M, Vandenberghe R. Multivariate analysis reveals anatomical correlates of naming errors in primary progressive aphasia. Neurobiol Aging 2019; 88:71-82. [PMID: 31955981 DOI: 10.1016/j.neurobiolaging.2019.12.016] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Revised: 12/10/2019] [Accepted: 12/15/2019] [Indexed: 12/30/2022]
Abstract
Primary progressive aphasia (PPA) is an overarching term for a heterogeneous group of neurodegenerative diseases that affect language processing. Impaired picture naming has been linked to atrophy of the anterior temporal lobe in the semantic variant of PPA. Although atrophy of the anterior temporal lobe is thought to impair picture naming by undermining access to semantic knowledge, picture naming also entails object recognition and lexical retrieval. Using multivariate analysis, we investigated whether cortical atrophy relates to different types of naming errors generated during picture naming in 43 PPA patients (13 semantic, 9 logopenic, 11 nonfluent, and 10 mixed variant). Omissions were associated with atrophy of the anterior temporal lobes. Semantic errors, for example, mistaking a rhinoceros for a hippopotamus, were associated with atrophy of the left mid and posterior fusiform cortex and the posterior middle and inferior temporal gyrus. Semantic errors and atrophy in these regions occurred in each PPA subtype, without major between-subtype differences. We propose that pathological changes to neural mechanisms associated with semantic errors occur across the PPA spectrum.
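The abstract above does not specify the multivariate method; as a rough illustration of one common choice, the minimal Python sketch below relates regional atrophy measures to naming-error counts with partial least squares on invented data. The region count, error categories, and cross-validation scheme are assumptions for illustration, not the authors' pipeline.

# Hypothetical sketch: relate regional atrophy measures to naming-error profiles
# with partial least squares. Illustrative placeholders only; not the authors' analysis.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

n_patients, n_regions = 43, 68                    # e.g. cortical thickness per atlas region
atrophy = rng.normal(size=(n_patients, n_regions))
# columns: omissions, semantic errors, phonological errors (illustrative categories)
errors = rng.poisson(lam=3, size=(n_patients, 3)).astype(float)

pls = PLSRegression(n_components=2)
pred = cross_val_predict(pls, atrophy, errors, cv=5)

# correlation between observed and cross-validated predicted counts, one value per error type
for name, obs, prd in zip(["omissions", "semantic", "phonological"], errors.T, pred.T):
    r = np.corrcoef(obs, prd)[0, 1]
    print(f"{name}: cross-validated r = {r:.2f}")

# loadings on the atrophy side indicate which regions drive each latent component
pls.fit(atrophy, errors)
print("region loadings, component 1 (first 5 regions):", pls.x_loadings_[:5, 0])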
Collapse
Affiliation(s)
- Rose Bruffaerts
- Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Neurology Department, University Hospitals Leuven, Leuven, Belgium.
| | - Jolien Schaeverbeke
- Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - An-Sofie De Weer
- Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Natalie Nelissen
- Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Eva Dries
- Neurology Department, University Hospitals Leuven, Leuven, Belgium
| | - Karen Van Bouwel
- Neurology Department, University Hospitals Leuven, Leuven, Belgium
| | - Anne Sieben
- Neurology Department, University Hospital Ghent, Ghent, Belgium
| | - Bruno Bergmans
- Neurology Department, University Hospital Ghent, Ghent, Belgium; Neurology Department, AZ Sint-Jan Brugge-Oostende AV, Bruges, Belgium
| | | | - Yolande Pijnenburg
- Neurology Department, Alzheimer Center Amsterdam, Amsterdam Neuroscience, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands
| | - Stefan Sunaert
- Radiology Department, University Hospitals Leuven, Leuven, Belgium
| | | | - Rik Vandenberghe
- Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Neurology Department, University Hospitals Leuven, Leuven, Belgium
| |
Collapse
|
46
|
Lyu B, Choi HS, Marslen-Wilson WD, Clarke A, Randall B, Tyler LK. Neural dynamics of semantic composition. Proc Natl Acad Sci U S A 2019; 116:21318-21327. [PMID: 31570590 PMCID: PMC6800340 DOI: 10.1073/pnas.1903402116] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., "eat the apple"). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb's modification of the DO noun's activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
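As a rough illustration of how context-independent and contextually constrained semantic models can be compared against neural data, the following minimal Python sketch runs a representational similarity analysis on invented MEG-like data. The toy verb-modulation rule, dimensions, and Spearman model fits are illustrative assumptions, not the authors' computational semantic models.

# Hypothetical sketch: correlate a neural RDM (trial dissimilarities from MEG patterns)
# with RDMs built from context-free noun vectors versus noun vectors modulated by the
# preceding verb. All data and the modulation rule are invented placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_dims, n_sensors, n_times = 60, 50, 30, 100

noun_vecs = rng.normal(size=(n_items, n_dims))            # context-independent noun model
verb_vecs = rng.normal(size=(n_items, n_dims))            # preceding-verb vectors
constrained = noun_vecs * (1 + 0.5 * np.tanh(verb_vecs))  # toy contextual modulation

rdm_free = pdist(noun_vecs, metric="correlation")
rdm_constrained = pdist(constrained, metric="correlation")

meg = rng.normal(size=(n_items, n_sensors, n_times))      # trial x sensor x time

fit_free, fit_constrained = [], []
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t], metric="correlation")
    fit_free.append(spearmanr(neural_rdm, rdm_free)[0])
    fit_constrained.append(spearmanr(neural_rdm, rdm_constrained)[0])

t_best = int(np.argmax(fit_constrained))
print(f"best constrained-model fit at sample {t_best}: "
      f"rho={fit_constrained[t_best]:.3f} vs context-free rho={fit_free[t_best]:.3f}")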
Collapse
Affiliation(s)
- Bingjiang Lyu
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| | - Hun S Choi
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| | - William D Marslen-Wilson
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| | - Alex Clarke
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| | - Billi Randall
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| | - Lorraine K Tyler
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, CB2 3EB Cambridge, United Kingdom
| |
Collapse
|
47
|
Liuzzi AG, Bruffaerts R, Vandenberghe R. The medial temporal written word processing system. Cortex 2019; 119:287-300. [DOI: 10.1016/j.cortex.2019.05.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Revised: 03/14/2019] [Accepted: 05/08/2019] [Indexed: 10/26/2022]
|
48
|
Bruffaerts R, Tyler LK, Shafto M, Tsvetanov KA, Clarke A. Perceptual and conceptual processing of visual objects across the adult lifespan. Sci Rep 2019; 9:13771. [PMID: 31551468 PMCID: PMC6760174 DOI: 10.1038/s41598-019-50254-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Accepted: 09/02/2019] [Indexed: 12/24/2022] Open
Abstract
Making sense of the external world is vital for multiple domains of cognition, and so it is crucial that object recognition is maintained across the lifespan. We investigated age differences in perceptual and conceptual processing of visual objects in a population-derived sample of 85 healthy adults (24-87 years old) by relating measures of object processing to cognition across the lifespan. Magnetoencephalography (MEG) was recorded during a picture naming task to provide a direct measure of neural activity that is not confounded by age-related vascular changes. Multiple linear regression was used to estimate neural responsivity for each individual, namely the capacity to represent visual or semantic information relating to the pictures. We find that the capacity to represent semantic information is linked to higher naming accuracy, a measure of task-specific performance. In mature adults, the capacity to represent semantic information also correlated with higher levels of fluid intelligence, reflecting domain-general performance. In contrast, the latency of visual processing did not relate to measures of cognition. These results indicate that neural responsivity measures relate to naming accuracy and fluid intelligence. We propose that maintaining neural responsivity in older age confers benefits in task-related and domain-general cognitive processes, supporting the brain maintenance view of healthy cognitive ageing.
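As a rough illustration of a per-participant "neural responsivity" estimate of the kind described above, the following minimal Python sketch regresses a simulated MEG pattern at each time point on semantic feature vectors of the named pictures and summarises the fit. The feature matrix, dimensions, and the choice of cross-validated R^2 are illustrative assumptions, not the paper's exact estimator.

# Hypothetical sketch: responsivity as the best cross-validated fit, over time,
# of a linear model predicting the MEG sensor pattern from semantic features.
# Simulated placeholder data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features, n_sensors, n_times = 200, 20, 40, 80

semantic_features = rng.normal(size=(n_trials, n_features))  # one vector per picture
meg = rng.normal(size=(n_trials, n_sensors, n_times))        # one pattern per trial

def responsivity(meg_subject, features):
    """Mean cross-validated R^2 of predicting all sensors from the features,
    summarised as the best-fitting time point."""
    scores = np.zeros(meg_subject.shape[-1])
    for t in range(meg_subject.shape[-1]):
        r2 = cross_val_score(LinearRegression(), features,
                             meg_subject[:, :, t], cv=5, scoring="r2")
        scores[t] = r2.mean()
    return scores.max()

print(f"semantic responsivity (toy data): {responsivity(meg, semantic_features):.3f}")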
Collapse
Affiliation(s)
- Rose Bruffaerts
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Laboratory for Cognitive Neurology, Department of Neurosciences, University of Leuven, 3000, Leuven, Belgium
- Neurology Department, University Hospitals Leuven, 3000, Leuven, Belgium
| | - Lorraine K Tyler
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK.
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge and MRC Cognition and Brain Sciences Unit, Cambridge, CB2 7EF, UK.
| | - Meredith Shafto
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
| | - Kamen A Tsvetanov
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge and MRC Cognition and Brain Sciences Unit, Cambridge, CB2 7EF, UK
| | - Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
| |
Collapse
|
49
|
McCartney B, Martinez-del-Rincon J, Devereux B, Murphy B. A zero-shot learning approach to the development of brain-computer interfaces for image retrieval. PLoS One 2019; 14:e0214342. [PMID: 31525201 PMCID: PMC6746355 DOI: 10.1371/journal.pone.0214342] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Accepted: 08/29/2019] [Indexed: 11/18/2022] Open
Abstract
Brain decoding, the process of inferring a person's momentary cognitive state from their brain activity, has enormous potential in the field of human-computer interaction. In this study we propose a zero-shot EEG-to-image brain decoding approach which makes use of state-of-the-art EEG preprocessing and feature selection methods, and which maps EEG activity to biologically inspired computer vision and linguistic models. We apply this approach to solve the problem of identifying viewed images from recorded brain activity in a reliable and scalable way. We demonstrate competitive decoding accuracies across two EEG datasets, using a zero-shot learning framework more applicable to real-world image retrieval than traditional classification techniques.
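As a rough illustration of the zero-shot retrieval framework, the following minimal Python sketch learns a ridge mapping from EEG features to an image/semantic embedding on training items and then ranks unseen candidate images by similarity to the embedding predicted for a new trial. The dimensions, the ridge regressor, and the cosine ranking are illustrative choices on random placeholder data, not the paper's models or preprocessing.

# Hypothetical sketch: zero-shot EEG-to-image retrieval via a learned linear map
# into an embedding space, evaluated by ranking unseen candidates. Placeholder data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(3)
n_train, n_unseen, eeg_dim, emb_dim = 300, 20, 128, 64

eeg_train = rng.normal(size=(n_train, eeg_dim))
emb_train = rng.normal(size=(n_train, emb_dim))      # embeddings of the seen training images

mapper = Ridge(alpha=10.0).fit(eeg_train, emb_train) # EEG -> embedding mapping

# zero-shot step: candidate images come from classes never used during training
unseen_embeddings = rng.normal(size=(n_unseen, emb_dim))
true_index = 7                                       # index of the actually viewed image
eeg_test = rng.normal(size=(1, eeg_dim))             # EEG recorded while viewing it

predicted_emb = mapper.predict(eeg_test)
ranking = np.argsort(-cosine_similarity(predicted_emb, unseen_embeddings)[0])
print("rank of the true image among unseen candidates:",
      int(np.where(ranking == true_index)[0][0]) + 1)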
Collapse
Affiliation(s)
| | | | | | - Brian Murphy
- Queen’s University Belfast, United Kingdom
- BrainWaveBank Ltd. Belfast, United Kingdom
| |
Collapse
|
50
|
Abstract
The perirhinal cortex (PRC) serves as the gateway to the hippocampus for episodic memory formation and plays a part in retrieval through its backward connectivity to various neocortical areas. First, I present the evidence suggesting that PRC neurons encode both experientially acquired object features and their associative relations. Recent studies have revealed circuit mechanisms in the PRC for the retrieval of cue-associated information, and have demonstrated that, in monkeys, PRC neuron-encoded information can be behaviourally read out. These studies, among others, support the theory that the PRC converts visual representations of an object into those of its associated features and initiates backward-propagating, interareal signalling for retrieval of nested associations of object features that, combined, extensionally represent the object meaning. I propose that the PRC works as the ventromedial hub of a 'two-hub model' at an apex of the hierarchy of a distributed memory network and integrates signals encoded in other downstream cortical areas that support diverse aspects of knowledge about an object.
Collapse
|