1. Ramey MM, Henderson JM, Yonelinas AP. Episodic memory and semantic knowledge interact to guide eye movements during visual search in scenes: Distinct effects of conscious and unconscious memory. Psychon Bull Rev 2025. PMID: 40399748. DOI: 10.3758/s13423-025-02686-6. Accepted 2025-02-25.
Abstract
Episodic memory and semantic knowledge can each exert strong influences on visual attention when we search through real-world scenes. However, there is debate surrounding how they interact when both are present; specifically, results conflict as to whether memory consistently improves visual search when semantic knowledge is available to guide search. These conflicting results could be driven by distinct effects of different types of episodic memory, but this possibility has not been examined. To test this, we tracked participants' eyes while they searched for objects in semantically congruent and incongruent locations within scenes during a study and a test phase. In the test phase, which contained studied and new scenes, participants gave confidence-based recognition memory judgments that indexed different types of episodic memory (i.e., recollection, familiarity, unconscious memory) for the background scenes, and then searched for the target. We found that semantic knowledge consistently influenced both early and late eye movements, but the influence of memory depended on the type of memory involved. Recollection improved first saccade accuracy, in terms of heading towards the target, in both congruent and incongruent scenes. In contrast, unconscious memory gradually improved scanpath efficiency over the course of search, but only when semantic knowledge was relatively ineffective (i.e., in incongruent scenes). Together, these findings indicate that episodic memory and semantic knowledge are rationally integrated to optimize attentional guidance, such that the most precise or effective forms of information available, which depend on the type of episodic memory present, are prioritized.
Affiliation(s)
- Michelle M Ramey
- Department of Psychological Science, University of Arkansas, Fayetteville, AR, USA.
- John M Henderson
- Department of Psychology, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
2. Krzyś KJ, Avitzur C, Williams CC, Castelhano MS. Object spatial certainty as a measure of spatial variability and its influence on attention. Sci Rep 2025; 15:11263. PMID: 40175457. PMCID: PMC11965496. DOI: 10.1038/s41598-025-93265-1. Received 2024-07-24; Accepted 2025-03-05.
Abstract
Some objects have specific places where you can expect to find them (e.g., a toothbrush), while others vary widely (e.g., a cat). Previous studies have pointed to the importance of the spatial associations between objects and scenes in informing search strategies. However, the assumption that each object has a specific location where it is typically found does not take into account the variability inherent in objects' spatial associations. In the current study, we proposed a new way of measuring this variability and investigated its effects on attention and visual search. First, we developed the Object Spatial Certainty Index by having participants rate where 150 objects were expected to be found in scenes; the index provides a relative measure that ranks these objects from the most spatially predictable (almost always found in one region of the scene, e.g., boots) to the least spatially predictable (equally likely to be in every region of the scene, e.g., a plant). In two experiments, we examined how these variations affected search by manipulating whether targets were High Certainty or Low Certainty. Our findings demonstrate that the variability of objects' spatial associations significantly affected how effectively scene context influenced search performance.
Affiliation(s)
- Karolina J Krzyś
- Department of Psychology, Queen's University, 62 Arch St., Kingston, ON, K7L 3N6, Canada
- Carmel Avitzur
- Department of Psychology, Queen's University, 62 Arch St., Kingston, ON, K7L 3N6, Canada
- Monica S Castelhano
- Department of Psychology, Queen's University, 62 Arch St., Kingston, ON, K7L 3N6, Canada.
3. Shakerian F, Kushki R, Pashkam MV, Dehaqani MRA, Esteky H. Heterogeneity in Category Recognition across the Visual Field. eNeuro 2025; 12:ENEURO.0331-24.2024. PMID: 39788731. PMCID: PMC11772044. DOI: 10.1523/eneuro.0331-24.2024. Received 2024-07-23; Revised 2024-12-01; Accepted 2024-12-02.
Abstract
Visual information emerging from extrafoveal locations is important for visual search, saccadic eye movement control, and spatial attention allocation. Our everyday sensory experience with visual object categories varies across different parts of the visual field, which may result in location-contingent variations in visual object recognition. We investigated this possibility using a two-alternative forced-choice object category recognition task with animal body and chair stimuli. Animal body and chair images with various levels of visual ambiguity were presented at the fovea and at different extrafoveal locations across the vertical and horizontal meridians. We found heterogeneous body and chair category recognition across the visual field. Specifically, while recognition performance for bodies and chairs presented at the fovea was similar, it varied across extrafoveal locations. The largest difference was observed when body and chair images were presented at the lower-left and upper-right visual fields, respectively. The lower/upper visual field bias for body/chair recognition was observed particularly at low/high stimulus signal levels. Finally, when subjects' performance was adjusted for a potential location-contingent decision bias in category recognition, by subtracting category detection in the full-noise condition, location-dependent category recognition was observed only for the body category. These results suggest a heterogeneous body recognition bias across the visual field, potentially due to more frequent exposure of the lower visual field to body stimuli.
Affiliation(s)
- Farideh Shakerian
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836613, Iran
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran 141554364, Iran
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran 1991633357, Iran
- Roxana Kushki
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836613, Iran
- Maryam Vaziri Pashkam
- Movement and Visual Perception Lab, Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware 19711
- Mohammad-Reza A Dehaqani
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran 141554364, Iran
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran
- Hossein Esteky
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran 1991633357, Iran
- Research Group for Brain and Cognitive Science, Shahid Beheshti Medical University, Tehran 1983969411, Iran
4. Fakche C, Hickey C, Jensen O. Fast Feature- and Category-Related Parafoveal Previewing Support Free Visual Exploration. J Neurosci 2024; 44:e0841242024. PMID: 39455256. PMCID: PMC11622175. DOI: 10.1523/jneurosci.0841-24.2024. Received 2024-05-07; Revised 2024-10-17; Accepted 2024-10-19.
Abstract
While humans typically saccade every ∼250 ms in natural settings, studies on vision tend to prevent or restrict eye movements. As it takes ∼50 ms to initiate and execute a saccade, this leaves only ∼200 ms to identify the fixated object and select the next saccade goal. How much detail can be derived about parafoveal objects in this short time interval, during which foveal processing and saccade planning both occur? Here, we had male and female human participants freely explore a set of natural images while we recorded magnetoencephalography and eye movements. Using multivariate pattern analysis, we demonstrate that future parafoveal images could be decoded at the feature and category level, with peak decoding at ∼110 and ∼165 ms, respectively, while the decoding of fixated objects at the feature and category level peaked at ∼100 and ∼145 ms. The decoding of features and categories was contingent on the objects being saccade goals. In sum, we provide insight into the neuronal mechanism of presaccadic attention by demonstrating that feature- and category-specific information about foveal and parafoveal objects can be extracted in succession within a ∼200 ms intersaccadic interval. These findings rule out strict serial or parallel processing accounts but are consistent with a pipeline mechanism in which foveal and parafoveal objects are processed in parallel but at different levels in the visual hierarchy.
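The time-resolved decoding logic described in this abstract can be sketched on synthetic data. Everything below is an illustrative assumption, not the authors' MEG pipeline: simulated sensor data carry a category signal peaking at ∼165 ms, and a simple cross-validated nearest-class-mean classifier (standing in for their multivariate pattern analysis) recovers a decoding peak near that latency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MEG" data: trials x sensors x time points, with a category
# signal injected around a chosen latency (~165 ms after fixation onset).
n_trials, n_sensors, n_times = 200, 32, 60          # 60 samples spanning 0-300 ms
times_ms = np.linspace(0, 300, n_times)
labels = rng.integers(0, 2, n_trials)               # two object categories
X = rng.normal(size=(n_trials, n_sensors, n_times))
peak = np.exp(-0.5 * ((times_ms - 165) / 25) ** 2)  # Gaussian signal time course
X += labels[:, None, None] * peak[None, None, :] * 0.8

def decode_accuracy(X_t, y, n_folds=5):
    """Cross-validated accuracy of a nearest-class-mean classifier at one time point."""
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        m0 = X_t[train][y[train] == 0].mean(axis=0)   # class means from training folds
        m1 = X_t[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X_t[test] - m0, axis=1)
        d1 = np.linalg.norm(X_t[test] - m1, axis=1)
        correct += np.sum((d1 < d0) == (y[test] == 1))
    return correct / len(y)

# Decode the category separately at every time point, then locate the peak.
acc = np.array([decode_accuracy(X[:, :, t], labels) for t in range(n_times)])
peak_time = times_ms[np.argmax(acc)]
print(f"peak decoding at ~{peak_time:.0f} ms, accuracy {acc.max():.2f}")
```

Decoding accuracy hovers at chance where no signal is present and peaks near the injected 165 ms latency, mirroring how a decoding time course localizes when category information becomes available.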
Affiliation(s)
- Camille Fakche
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Clayton Hickey
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
5. Ziemba CM, Goris RLT, Stine GM, Perez RK, Simoncelli EP, Movshon JA. Neuronal and Behavioral Responses to Naturalistic Texture Images in Macaque Monkeys. J Neurosci 2024; 44:e0349242024. PMID: 39197942. PMCID: PMC11484546. DOI: 10.1523/jneurosci.0349-24.2024. Received 2024-02-21; Revised 2024-06-19; Accepted 2024-08-10.
Abstract
The visual world is richly adorned with texture, which can serve to delineate important elements of natural scenes. In anesthetized macaque monkeys, selectivity for the statistical features of natural texture is weak in V1, but substantial in V2, suggesting that neuronal activity in V2 might directly support texture perception. To test this, we investigated the relation between single cell activity in macaque V1 and V2 and simultaneously measured behavioral judgments of texture. We generated stimuli along a continuum between naturalistic texture and phase-randomized noise and trained two macaque monkeys to judge whether a sample texture more closely resembled one or the other extreme. Analysis of responses revealed that individual V1 and V2 neurons carried much less information about texture naturalness than behavioral reports. However, the sensitivity of V2 neurons, especially those preferring naturalistic textures, was significantly closer to that of behavior compared with V1. The firing of both V1 and V2 neurons predicted perceptual choices in response to repeated presentations of the same ambiguous stimulus in one monkey, despite low individual neural sensitivity. However, neither population predicted choice in the second monkey. We conclude that neural responses supporting texture perception likely continue to develop downstream of V2. Further, combined with neural data recorded while the same two monkeys performed an orientation discrimination task, our results demonstrate that choice-correlated neural activity in early sensory cortex is unstable across observers and tasks, untethered from neuronal sensitivity, and therefore unlikely to directly reflect the formation of perceptual decisions.
Affiliation(s)
- Corey M Ziemba
- Center for Neural Science, New York University, New York, NY
- Robbe L T Goris
- Center for Neural Science, New York University, New York, NY
- Gabriel M Stine
- Center for Neural Science, New York University, New York, NY
- Richard K Perez
- Center for Neural Science, New York University, New York, NY
- Eero P Simoncelli
- Center for Neural Science, New York University, New York, NY
- Center for Computational Neuroscience, Flatiron Institute, New York, NY
6. He J, Skerswetat J, Bex PJ. Novel color vision assessment tool: AIM Color Detection and Discrimination. bioRxiv [Preprint] 2024:2024.09.26.615300. PMID: 39386421. PMCID: PMC11463461. DOI: 10.1101/2024.09.26.615300.
Abstract
Color vision assessment is essential in clinical practice, yet different tests exhibit distinct strengths and limitations. Here we apply a psychophysical paradigm, Angular Indication Measurement (AIM), to color detection and discrimination. AIM is designed to address some of the shortcomings of existing tests, such as prolonged testing time, limited accuracy and sensitivity, and the necessity for clinician oversight. AIM presents adaptively generated charts, each an N×M (here 4×4) grid of stimuli, and participants are instructed to indicate either the orientation of the gap in a cone-isolating Landolt C optotype or the orientation of the edge between two colors in an equiluminant color space. The contrasts or color differences of the stimuli are adaptively selected for each chart based on performance on prior AIM charts. In a group of 23 color-normal participants and 15 people with color vision deficiency (CVD), we validate AIM Color against the Hardy-Rand-Rittler (HRR) test, the Farnsworth-Munsell 100 hue test (FM100), and anomaloscope color-matching diagnosis, and use machine learning techniques to classify the type and severity of CVD. The results show that AIM has classification accuracy comparable to that of the anomaloscope; while HRR and FM100 are less accurate than AIM and the anomaloscope, HRR is very rapid. We conclude that AIM is a computer-based, self-administered, response-adaptive, and rapid tool with high test-retest repeatability that has the potential to be suitable for both clinical and research applications.
Affiliation(s)
- Jingyi He
- Department of Psychology, Northeastern University, USA
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, USA
- Jan Skerswetat
- Department of Psychology, Northeastern University, USA
- Department of Ophthalmology, University of California Irvine, Irvine, USA
- Peter J. Bex
- Department of Psychology, Northeastern University, USA
7. Leemans M, Damiano C, Wagemans J. Finding the meaning in meaning maps: Quantifying the roles of semantic and non-semantic scene information in guiding visual attention. Cognition 2024; 247:105788. PMID: 38579638. DOI: 10.1016/j.cognition.2024.105788. Received 2023-02-09; Revised 2024-03-16; Accepted 2024-03-30.
Abstract
In real-world vision, people prioritise the most informative scene regions via eye movements. According to the cognitive guidance theory of visual attention, viewers allocate visual attention to those parts of the scene that are expected to be the most informative, and the expected information of a scene region is coded in the semantic distribution of that scene. Meaning maps have been proposed to capture the spatial distribution of local scene semantics in order to test cognitive guidance theories of attention. Notwithstanding the success of meaning maps in predicting visual attention, the reason for that success has been contested, leading to at least two possible explanations. On the one hand, meaning maps might measure scene semantics. On the other hand, meaning maps might measure scene features that overlap with, but are distinct from, scene semantics. This study aims to disentangle these two sources of information by considering both conceptual information and non-semantic scene entropy simultaneously. We found that both semantic and non-semantic information is captured by meaning maps, but scene entropy accounted for more unique variance in the success of meaning maps than conceptual information did. Additionally, some explained variance was unaccounted for by either source of information. Thus, although meaning maps may index some aspect of semantic information, their success seems to be better explained by non-semantic information. We conclude that meaning maps may not yet be a good tool to test cognitive guidance theories of attention in general, since they capture non-semantic aspects of local semantic density and only a small portion of conceptual information. Rather, we suggest that researchers should better define the exact aspect of cognitive guidance theories they wish to test and then use the tool that best captures the desired semantic information. As it stands, the semantic information contained in meaning maps seems too ambiguous to draw strong conclusions about how and when semantic information guides visual attention.
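The unique-versus-shared variance logic in this abstract can be sketched with a commonality analysis on synthetic data. The variable names and effect sizes below are illustrative assumptions, not the authors' dataset: two correlated predictors (conceptual "semantic" ratings and non-semantic "entropy") jointly predict a fixation-density outcome, and nested regressions partition the explained variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-region data (assumed structure, not the authors' data):
# 'semantic' = conceptual informativeness, 'entropy' = non-semantic image
# entropy correlated with it, and a fixation-density outcome built from both.
n = 500
semantic = rng.normal(size=n)
entropy = 0.6 * semantic + rng.normal(size=n)        # correlated predictors
fixation = 0.2 * semantic + 0.7 * entropy + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([semantic, entropy]), fixation)
r2_sem = r_squared(semantic[:, None], fixation)
r2_ent = r_squared(entropy[:, None], fixation)

unique_semantic = r2_full - r2_ent   # variance only semantics explains
unique_entropy = r2_full - r2_sem    # variance only entropy explains
shared = r2_full - unique_semantic - unique_entropy

print(f"unique semantic: {unique_semantic:.3f}")
print(f"unique entropy:  {unique_entropy:.3f}")
print(f"shared:          {shared:.3f}")
```

With these assumed effect sizes, entropy's unique variance exceeds semantics' while a sizeable shared component remains, which is the qualitative pattern the abstract reports for meaning maps.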
Affiliation(s)
- Maarten Leemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven (KU Leuven), Belgium.
- Claudia Damiano
- Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven (KU Leuven), Belgium
- Johan Wagemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, University of Leuven (KU Leuven), Belgium
8. Wise T, Emery K, Radulescu A. Naturalistic reinforcement learning. Trends Cogn Sci 2024; 28:144-158. PMID: 37777463. PMCID: PMC10878983. DOI: 10.1016/j.tics.2023.08.016. Received 2023-05-16; Revised 2023-08-23; Accepted 2023-08-24.
Abstract
Humans possess a remarkable ability to make decisions within real-world environments that are expansive, complex, and multidimensional. Human cognitive computational neuroscience has sought to exploit reinforcement learning (RL) as a framework within which to explain human decision-making, often focusing on constrained, artificial experimental tasks. In this article, we review recent efforts that use naturalistic approaches to determine how humans make decisions in complex environments that better approximate the real world, providing a clearer picture of how humans navigate the challenges posed by real-world decisions. These studies purposely embed elements of naturalistic complexity within experimental paradigms, rather than focusing on simplification, generating insights into the processes that likely underpin humans' ability to navigate complex, multidimensional real-world environments so successfully.
Affiliation(s)
- Toby Wise
- Department of Neuroimaging, King's College London, London, UK.
- Kara Emery
- Center for Data Science, New York University, New York, NY, USA
- Angela Radulescu
- Center for Computational Psychiatry, Icahn School of Medicine at Mt. Sinai, New York, NY, USA
9. He J, Bex PJ, Skerswetat J. Rapid measurement and machine learning classification of color vision deficiency. medRxiv [Preprint] 2023:2023.06.14.23291402. PMID: 37398496. PMCID: PMC10312880. DOI: 10.1101/2023.06.14.23291402.
Abstract
Color vision deficiencies (CVDs) indicate potential genetic variations and can be important biomarkers of acquired impairment in many neuro-ophthalmic diseases. However, CVDs are typically measured with insensitive or inefficient tools that are designed to classify dichromacy subtypes rather than to track changes in sensitivity. We introduce FInD (Foraging Interactive D-prime), a novel computer-based, generalizable, rapid, self-administered vision assessment tool, and apply it to color vision testing. This signal detection theory-based adaptive paradigm computes test stimulus intensity from d-prime analysis. Stimuli were chromatic Gaussian blobs in dynamic luminance noise, and participants clicked on cells that contained chromatic blobs (detection) or blob pairs of differing colors (discrimination). The sensitivity and repeatability of the FInD Color tasks were compared against the HRR and FM100 hue tests in 19 color-normal and 18 color-atypical, age-matched observers; a Rayleigh color match was completed as well. Detection and discrimination thresholds were higher for atypical than for typical observers, with selective threshold elevations corresponding to unique CVD types. Classification of CVD type and severity via unsupervised machine learning confirmed functional subtypes. FInD tasks reliably detect CVD and may serve as valuable tools in basic and clinical color vision science.
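FInD's core statistic is d-prime from signal detection theory: the difference between the z-transformed hit and false-alarm rates. As a minimal illustration of that statistic (not the authors' implementation), the sketch below computes d' from raw trial counts, using a common log-linear correction so perfect rates do not yield infinite z-scores.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction (add 0.5 to counts, 1 to totals) to avoid infinities."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: a detection task with 45/50 hits and 5/50 false alarms.
print(round(d_prime(45, 5, 5, 45), 2))  # prints 2.48
```

An adaptive procedure like FInD's would then adjust stimulus intensity (e.g., chromatic contrast) until d' converges on a criterion level, which is what makes the threshold estimate efficient.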
Affiliation(s)
- Jingyi He
- Department of Psychology, Northeastern University, USA
- Peter J. Bex
- Department of Psychology, Northeastern University, USA
10. Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023; 76:632-648. PMID: 35510885. PMCID: PMC11132926. DOI: 10.1177/17470218221101334.
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes, and salience maps that represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, across both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely than slower initial saccades to be directed to higher-salience regions, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrate that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Praveena Singh
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA