1
Markov YA, Võ MLH. Scene consistency enhances state representations of real-world objects. Sci Rep 2025; 15:18581. PMID: 40425683. DOI: 10.1038/s41598-025-01662-3.
Abstract
Previous research has shown that the context in which objects are located significantly influences how efficiently they are categorized. However, less is known about whether scene consistency can also affect the processing of finer object features, such as the state of an object (e.g., the angle of a Swiss army knife or the fill level of a bottle). Therefore, across three experiments, we presented a subset of the JURICS stimulus set, in which each object exists in 20 continuously varying states (e.g., from fully closed to fully open), in scenes that were either contextually consistent or inconsistent. Participants were asked to report the specific state of each object using a continuous report task. Our results showed that scene consistency enhanced the precision of state judgments; that is, participants made significantly larger errors in reporting object states when objects were presented in inconsistent compared with consistent scenes. These findings suggest that scene context exerts its effect already at the level of fine-grained perceptual processing, affecting not only object categorization but also the accuracy of objects' perceived features.
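A minimal sketch of the kind of analysis this design implies, with simulated data standing in for the JURICS trials (state range, noise levels, and column names are illustrative assumptions, not the authors' code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
trials = pd.DataFrame({
    "true_state": rng.integers(1, 21, n),              # object state, 1-20
    "consistent": rng.integers(0, 2, n).astype(bool),  # scene consistency
})
# Simulated reports: noisier when the scene is inconsistent.
noise = np.where(trials["consistent"], 1.0, 2.5)
trials["report"] = np.clip(trials["true_state"] + rng.normal(0, noise), 1, 20)

# Mean absolute report error per condition; a larger error in inconsistent
# scenes would mirror the paper's main finding.
trials["error"] = (trials["report"] - trials["true_state"]).abs()
print(trials.groupby("consistent")["error"].mean())
```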
Affiliation(s)
- Yuri A Markov
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany.
- Melissa Lê-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
2
Matuszewski J, Bola Ł, Collignon O, Marchewka A. Similar Computational Hierarchies for Reading and Speech in the Occipital Cortex of Sighted and Blind: Converging Evidence from fMRI and Chronometric TMS. J Neurosci 2025; 45:e1153242024. PMID: 40032525. PMCID: PMC12079739. DOI: 10.1523/jneurosci.1153-24.2024.
Abstract
High-level perception results from interactions between hierarchical brain systems responsive to gradually increasing feature complexities. During reading, the initial evaluation of simple visual features in the early visual cortex (EVC) is followed by orthographic and lexical computations in the ventral occipitotemporal cortex (vOTC). While similar visual regions are engaged in tactile Braille reading in congenitally blind people, it is unclear whether the visual network maintains or reorganizes its hierarchy for reading in this population. Combining fMRI and chronometric transcranial magnetic stimulation (TMS), our study revealed a clear correspondence between sighted and blind individuals (both male and female) in how their occipital cortices functionally support reading and speech processing. Using fMRI, we first observed that vOTC, but not EVC, showed an enhanced response to lexical vs nonlexical information in both groups and sensory modalities. Using TMS, we further found that, in both groups, the processing of written words and pseudowords was disrupted by EVC stimulation at both early and late time windows. In contrast, vOTC stimulation disrupted the processing of these written stimuli only when applied at late time windows, again in both groups. In the speech domain, we observed TMS effects only for meaningful words and only in the blind participants. Overall, our results suggest that, while deprived visual areas might extend their functional responses to other sensory modalities, the computational gradients between early and higher-order occipital regions are retained, at least for reading.
Affiliation(s)
- Jacek Matuszewski
- Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland
- Łukasz Bola
- Institute of Psychology, Polish Academy of Sciences, Warsaw 00-378, Poland
- Olivier Collignon
- Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne 1011, Switzerland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland
3
Yeh LC, Gayet S, Kaiser D, Peelen MV. The neural time course of size constancy in natural scenes. Cortex 2025; 188:1-12. PMID: 40378531. DOI: 10.1016/j.cortex.2025.04.013.
Abstract
Accurate real-world size perception relies on size constancy, a mechanism that integrates an object's retinal size with distance information. The neural time course of extracting pictorial distance cues from scenes and integrating them with retinal size information - a process referred to as scene-based size constancy - remains unknown. In two experiments, participants viewed objects with either large or small retinal sizes, presented at near or far distances in outdoor scene photographs, while performing an unrelated one-back task. We applied multivariate pattern analysis (MVPA) to time-resolved EEG data to decode the retinal size of large versus small objects, depending on their distance (near versus far) in the scenes. The objects were either perceptually similar in size (large-near versus small-far) or perceptually dissimilar in size (large-far versus small-near), reflecting size constancy. We found that the retinal size of objects could be decoded from 80 ms after scene onset onwards. Distance information modulated size decoding at least 120 ms later: from 200 ms after scene onset when objects were fixated, and from 280 ms when objects were viewed in the periphery. These findings reveal the neural time course of size constancy based on pictorial distance cues in natural scenes.
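A minimal sketch of time-resolved MVPA in this spirit, assuming EEG epochs shaped (trials, channels, timepoints); the classifier choice, simulated data, and labels are illustrative assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 64, 100   # e.g., 1 s of data at 100 Hz
X = rng.normal(size=(n_trials, n_channels, n_times))  # EEG epochs
y = rng.integers(0, 2, n_trials)               # 0 = small, 1 = large retinal size

# Cross-validated decoding at each timepoint independently.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# Decoding onset is the first timepoint where accuracy reliably exceeds
# chance (0.5); condition contrasts (e.g., near vs far) compare timecourses.
print(accuracy.max())
```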
Affiliation(s)
- Lu-Chun Yeh
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Neural Computation Group, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, Germany
- Surya Gayet
- Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, the Netherlands
- Daniel Kaiser
- Neural Computation Group, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps University Marburg, Justus Liebig University Gießen, Technical University Darmstadt, Germany
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
4
Leticevscaia O, Brandman T, Peelen MV. Scene context and attention independently facilitate MEG decoding of object category. Vision Res 2024; 224:108484. PMID: 39260230. DOI: 10.1016/j.visres.2024.108484.
Abstract
Many of the objects we encounter in our everyday environments would be hard to recognize without any expectations about these objects. For example, a distant silhouette may be perceived as a car because we expect objects of that size, positioned on a road, to be cars. Reflecting the influence of such expectations on visual processing, neuroimaging studies have shown that when objects are poorly visible, expectations derived from scene context facilitate the representations of these objects in visual cortex from around 300 ms after scene onset. The current magnetoencephalography (MEG) study tested whether this facilitation occurs independently of attention and task relevance. Participants viewed degraded objects alone or within scene context while they either attended the scenes (attended condition) or the fixation cross (unattended condition), also temporally directing attention away from the scenes. Results showed that at 300 ms after stimulus onset, multivariate classifiers trained to distinguish clearly visible animate vs inanimate objects generalized to distinguish degraded objects in scenes better than degraded objects alone, despite the added clutter of the scene background. Attention also modulated object representations at this latency, with better category decoding in the attended than the unattended condition. The modulatory effects of context and attention were independent of each other. Finally, data from the current study and a previous study were combined (N = 51) to provide a more detailed temporal characterization of contextual facilitation. These results extend previous work by showing that facilitatory scene-object interactions are independent of the specific task performed on the visual input.
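A minimal sketch of the train-on-clear, test-on-degraded cross-decoding logic (random patterns stand in for the MEG data, so both scores sit near chance; in the real analysis, contextual facilitation appears as a scene-versus-alone difference):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_train, n_test, n_feat = 100, 60, 50
X_clear = rng.normal(size=(n_train, n_feat))   # sensor patterns, clear objects
y_clear = rng.integers(0, 2, n_train)          # 0 = inanimate, 1 = animate

clf = LinearSVC().fit(X_clear, y_clear)        # train on clearly visible objects

X_degraded_scene = rng.normal(size=(n_test, n_feat))
X_degraded_alone = rng.normal(size=(n_test, n_feat))
y_test = rng.integers(0, 2, n_test)

# Facilitation by scene context would show as better generalization to
# degraded objects in scenes than to degraded objects alone.
print("in scene:", clf.score(X_degraded_scene, y_test))
print("alone:   ", clf.score(X_degraded_alone, y_test))
```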
Affiliation(s)
- Olga Leticevscaia
- University of Reading, Centre for Integrative Neuroscience and Neurodynamics, United Kingdom
- Talia Brandman
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
5
Wu 吴奕忱 Y, Li 李晟 S. Complexity Matters: Normalization to Prototypical Viewpoint Induces Memory Distortion along the Vertical Axis of Scenes. J Neurosci 2024; 44:e1175232024. PMID: 38777600. PMCID: PMC11223457. DOI: 10.1523/jneurosci.1175-23.2024.
Abstract
Scene memory is prone to systematic distortions potentially arising from experience with the external world. Boundary transformation, a well-known memory distortion effect along the near-far axis of three-dimensional space, represents the observer's erroneous recall of scenes' viewing distance. Researchers argued that normalization to the prototypical viewpoint with the high-probability viewing distance influenced this phenomenon. Herein, we hypothesized that a prototypical viewpoint also exists in the vertical angle of view (AOV) dimension and could cause memory distortion along scenes' vertical axis. Human subjects of both sexes were recruited to test this hypothesis in two behavioral experiments, which revealed a systematic memory distortion in the vertical AOV in both the forced choice (n = 79) and free adjustment (n = 30) tasks. Regression analysis implied that the complexity information asymmetry along scenes' vertical axis and independent subjective AOV ratings from a large set of online participants (n = 1,208) could jointly predict AOV biases. Furthermore, in a functional magnetic resonance imaging experiment (n = 24), we demonstrated the involvement of areas in the ventral visual pathway (V3/V4, PPA, and OPA) in AOV bias judgment. Additionally, in a magnetoencephalography experiment (n = 20), we could significantly decode the subjects' AOV bias judgments ∼140 ms after scene onset, and decode the low-level visual complexity information within a similar temporal interval. These findings suggest that AOV bias is driven by the normalization process and associated with neural activity in the early stage of scene processing.
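A minimal sketch of a joint regression of this form, with simulated predictors and bias scores; the coefficient values and variable names are illustrative assumptions, not the authors' data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_scenes = 80
complexity_asym = rng.normal(size=n_scenes)   # complexity asymmetry, vertical axis
subjective_aov = rng.normal(size=n_scenes)    # mean subjective AOV rating per scene
aov_bias = (0.4 * complexity_asym + 0.3 * subjective_aov
            + rng.normal(0, 1, n_scenes))     # simulated memory bias

# OLS with both predictors entered jointly.
X = sm.add_constant(np.column_stack([complexity_asym, subjective_aov]))
fit = sm.OLS(aov_bias, X).fit()
print(fit.params, fit.rsquared)
```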
Affiliation(s)
- Yichen Wu 吴奕忱
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- National Key Laboratory of General Artificial Intelligence, Peking University, Beijing 100871, China
- Sheng Li 李晟
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- National Key Laboratory of General Artificial Intelligence, Peking University, Beijing 100871, China
6
Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024; 34:343-351.e5. PMID: 38181794. DOI: 10.1016/j.cub.2023.12.009.
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than categorization of non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA thus causally supports the efficient perception of social interactions.
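A minimal sketch of how the selective inversion effect can be quantified, with simulated accuracies chosen so that inversion costs more for facing dyads (all numbers are illustrative assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
trials = pd.DataFrame({
    "facing": np.repeat([True, False], 100),
    "inverted": np.tile(np.repeat([False, True], 50), 2),
})
# Simulated accuracy: inversion hurts facing dyads more than non-facing ones.
p_correct = np.select(
    [trials["facing"] & ~trials["inverted"],
     trials["facing"] & trials["inverted"],
     ~trials["facing"] & ~trials["inverted"]],
    [0.90, 0.72, 0.88],
    default=0.84,  # non-facing, inverted
)
trials["correct"] = rng.random(len(trials)) < p_correct

acc = trials.groupby(["facing", "inverted"])["correct"].mean().unstack()
inversion_cost = acc[False] - acc[True]  # upright minus inverted accuracy
print(inversion_cost)  # expect a larger cost for facing == True
```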
Affiliation(s)
- Marco Gandolfo
- Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands; Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK.
- Etienne Abassi
- Institut des Sciences Cognitives Marc Jeannerod, Lyon 69500, France
- Eva Balgova
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK; Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
- Institut des Sciences Cognitives Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK.
7
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. PMID: 37897882. DOI: 10.1016/j.cognition.2023.105648.
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA.
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA.
8
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004. PMCID: PMC7616164. DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
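A toy numerical illustration of such joint inference: ambiguous object evidence is sharpened by clearer scene evidence through a compatibility prior coupling the two (all probabilities are invented for illustration):

```python
import numpy as np

objects = ["car", "boat"]
scenes = ["road", "river"]

# Evidence from the image factorizes into an ambiguous object patch and a
# clearer scene background.
p_patch = np.array([0.55, 0.45])       # p(patch | car), p(patch | boat)
p_background = np.array([0.9, 0.1])    # p(background | road), p(background | river)
likelihood = np.outer(p_patch, p_background)

# Compatibility prior: cars co-occur with roads, boats with rivers.
prior = np.array([[0.45, 0.05],
                  [0.05, 0.45]])

# Joint posterior over (object, scene); each hypothesis acts as a prior
# on the other through the compatibility structure.
posterior = likelihood * prior
posterior /= posterior.sum()
print(dict(zip(objects, posterior.sum(axis=1))))  # object belief, sharpened by scene
print(dict(zip(scenes, posterior.sum(axis=0))))   # scene belief, sharpened by object
```

Here the weakly car-like patch (0.55) ends up assigned to "car" with high confidence once the road background and the compatibility prior are folded in, which is the sense in which "best guesses" about scenes act as priors for objects and vice versa.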
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
9
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. PMID: 36889254. DOI: 10.1146/annurev-vision-100120-025301.
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
10
Peelen MV, Downing PE. Testing cognitive theories with multivariate pattern analysis of neuroimaging data. Nat Hum Behav 2023; 7:1430-1441. PMID: 37591984. PMCID: PMC7616245. DOI: 10.1038/s41562-023-01680-z.
Abstract
Multivariate pattern analysis (MVPA) has emerged as a powerful method for the analysis of functional magnetic resonance imaging, electroencephalography and magnetoencephalography data. The new approaches to experimental design and hypothesis testing afforded by MVPA have made it possible to address theories that describe cognition at the functional level. Here we review a selection of studies that have used MVPA to test cognitive theories from a range of domains, including perception, attention, memory, navigation, emotion, social cognition and motor control. This broad view reveals properties of MVPA that make it suitable for understanding the 'how' of human cognition, such as the ability to test predictions expressed at the item or event level. It also reveals limitations and points to future directions.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
- Paul E Downing
- Cognitive Neuroscience Institute, Department of Psychology, Bangor University, Bangor, UK.
11
Brandman T, Peelen MV. Objects sharpen visual scene representations: evidence from MEG decoding. Cereb Cortex 2023; 33:9524-9531. PMID: 37365829. PMCID: PMC10431745. DOI: 10.1093/cercor/bhad222.
Abstract
Real-world scenes consist of objects, defined by local information, and scene background, defined by global information. Although objects and scenes are processed in separate pathways in visual cortex, their processing interacts. Specifically, previous studies have shown that scene context makes blurry objects look sharper, an effect that can be observed as a sharpening of object representations in visual cortex from around 300 ms after stimulus onset. Here, we use MEG to show that objects can also sharpen scene representations, with the same temporal profile. Photographs of indoor (closed) and outdoor (open) scenes were blurred such that they were difficult to categorize on their own but easily disambiguated by the inclusion of an object. Classifiers were trained to distinguish MEG response patterns to intact indoor and outdoor scenes, presented in an independent run, and tested on degraded scenes in the main experiment. Results revealed better decoding of scenes with objects than scenes alone and objects alone from 300 ms after stimulus onset. This effect was strongest over left posterior sensors. These findings show that the influence of objects on scene representations occurs at similar latencies as the influence of scenes on object representations, in line with a common predictive processing mechanism.
Affiliation(s)
- Talia Brandman
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
12
Pennock IML, Racey C, Allen EJ, Wu Y, Naselaris T, Kay KN, Franklin A, Bosten JM. Color-biased regions in the ventral visual pathway are food selective. Curr Biol 2023; 33:134-146.e4. PMID: 36574774. PMCID: PMC9976629. DOI: 10.1016/j.cub.2022.11.063.
Abstract
Color-biased regions have been found between face- and place-selective areas in the ventral visual pathway. To investigate the function of the color-biased regions in a pathway responsible for object recognition, we analyzed the Natural Scenes Dataset (NSD), a large 7T fMRI dataset from 8 participants who each viewed up to 30,000 trials of images of colored natural scenes over more than 30 scanning sessions. In a whole-brain analysis, we correlated the average color saturation of the images with voxel responses, revealing color-biased regions that diverge into two streams, beginning in V4 and extending medially and laterally relative to the fusiform face area in both hemispheres. We drew regions of interest (ROIs) for the two streams and found that the images for each ROI that evoked the largest responses had certain characteristics: they contained food, circular objects, warmer hues, and had higher color saturation. Further analyses showed that food images were the strongest predictor of activity in these regions, implying the existence of medial and lateral ventral food streams (VFSs). We found that color also contributed independently to voxel responses, suggesting that the medial and lateral VFSs use both color and form to represent food. Our findings illustrate how high-resolution datasets such as the NSD can be used to disentangle the multifaceted contributions of many visual features to the neural representations of natural scenes.
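A minimal sketch of the whole-brain correlation step, with simulated image saturations and voxel responses standing in for the NSD data:

```python
import numpy as np

rng = np.random.default_rng(5)
n_images, n_voxels = 1000, 500
saturation = rng.uniform(0, 1, n_images)           # mean saturation per image
responses = rng.normal(size=(n_images, n_voxels))  # voxel responses per image
responses[:, :50] += 0.5 * saturation[:, None]     # plant 50 color-biased voxels

# Pearson correlation of saturation with every voxel at once.
z_sat = (saturation - saturation.mean()) / saturation.std()
z_resp = (responses - responses.mean(axis=0)) / responses.std(axis=0)
r = z_resp.T @ z_sat / n_images

print(r[:5].round(2))     # planted color-biased voxels: elevated r
print(r[60:65].round(2))  # other voxels: near zero
```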
Affiliation(s)
- Ian M L Pennock
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK.
- Chris Racey
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK
- Emily J Allen
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA; Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Yihan Wu
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Thomas Naselaris
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Kendrick N Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Anna Franklin
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK
- Jenny M Bosten
- School of Psychology, University of Sussex, Falmer BN1 9QH, UK.
13
Helbing J, Draschkow D, L-H Võ M. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922. DOI: 10.1177/09567976221091838.
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
14
Chen L, Cichy RM, Kaiser D. Semantic Scene-Object Consistency Modulates N300/400 EEG Components, but Does Not Automatically Facilitate Object Representations. Cereb Cortex 2022; 32:3553-3567. PMID: 34891169. DOI: 10.1093/cercor/bhab433.
Abstract
During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/400 differences are indicative of changes in the cortical representation of objects. In two experiments, we recorded EEG signals, while participants viewed semantically consistent or inconsistent objects within a scene; in Experiment 1, these objects were task-irrelevant, while in Experiment 2, they were directly relevant for behavior. In both experiments, we found reliable and comparable N300/400 differences between consistent and inconsistent scene-object combinations. To probe the quality of object representations, we performed multivariate classification analyses, in which we decoded the category of the objects contained in the scene. In Experiment 1, in which the objects were not task-relevant, object category could be decoded from ~100 ms after the object presentation, but no difference in decoding performance was found between consistent and inconsistent objects. In contrast, when the objects were task-relevant in Experiment 2, we found enhanced decoding of semantically consistent, compared with semantically inconsistent, objects. These results show that differences in N300/400 components related to scene-object consistency do not index changes in cortical object representations but rather reflect a generic marker of semantic violations. Furthermore, our findings suggest that facilitatory effects between objects and scenes are task-dependent rather than automatic.
Affiliation(s)
- Lixiang Chen
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Radoslaw Martin Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg 35032, Germany