1
McIntire G, Dopkins S. Super-optimality and relative distance coding in location memory. Mem Cognit 2024;52:1439-1450. PMID: 38519780. DOI: 10.3758/s13421-024-01553-4.
Abstract
The prevailing model of landmark integration in location memory is Maximum Likelihood Estimation, which assumes that each landmark implies a target location distribution that is narrower for more reliable landmarks. This model assumes a weighted linear combination of landmarks and predicts that, given optimal integration, the reliability with multiple landmarks is the sum of the reliabilities with the individual landmarks. Super-optimality is reliability with multiple landmarks that exceeds this optimal bound; it is demonstrated when performance surpasses the optimal performance predicted by aggregating the reliability values obtained with single landmarks. Past studies claiming super-optimality have provided arguably impure measures of single-landmark performance, because multiple landmarks were presented at study in conditions with a single landmark at test, disrupting encoding specificity and thereby underestimating predicted optimal performance. This study, unlike those prior studies, presented only a single landmark at study and the same landmark at test in single-landmark trials, demonstrating super-optimality conclusively. Given that super-optimal information integration occurs, emergent information, that is, information only available with multiple landmarks, must be used. With the target and landmarks all in a line, as throughout this study, relative distance is the only emergent information available. Use of relative distance was confirmed here by the finding that, when both landmarks are left of the target at study, the target is remembered further right of its true location the further left the left landmark is moved from study to test.
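The MLE benchmark described in this abstract can be sketched numerically. The following Python snippet is illustrative only (the function names and variance values are invented for the example): reliability is defined as inverse variance, the optimal combined reliability is the sum of single-cue reliabilities, and super-optimality means the observed multi-landmark reliability exceeds that bound.

```python
# Sketch of the MLE benchmark for super-optimality.
# Reliability = 1 / variance of localization error; under optimal linear
# cue combination, combined reliability is the sum of single-cue
# reliabilities. All numbers below are hypothetical.

def reliability(variance):
    """Reliability is the inverse of localization-error variance."""
    return 1.0 / variance

def optimal_combined_reliability(r_single):
    """MLE prediction: combined reliability = sum of single-cue reliabilities."""
    return sum(r_single)

def is_super_optimal(r_observed_combined, r_single):
    """Super-optimality: observed multi-landmark reliability exceeds the MLE bound."""
    return r_observed_combined > optimal_combined_reliability(r_single)

# Hypothetical single-landmark variances, and an observed two-landmark
# reliability larger than the optimal prediction allows.
r1 = reliability(4.0)   # 0.25
r2 = reliability(2.0)   # 0.50
r_optimal = optimal_combined_reliability([r1, r2])  # 0.75
r_observed = reliability(1.0)                       # 1.00

print(is_super_optimal(r_observed, [r1, r2]))  # True: exceeds the MLE bound
```

If performance with two landmarks were merely optimal, the observed reliability would equal (not exceed) `r_optimal`; the paper's argument is that any excess implies use of emergent information such as relative distance.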
Affiliation(s)
- Gordon McIntire
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA.
- Stephen Dopkins
- Department of Psychological and Brain Sciences, Cognitive Neuroscience Area, The George Washington University, 2013 H Street, Washington, DC, 20006, USA.
2
Lisi M, Cavanagh P. Different extrapolation of moving object locations in perception, smooth pursuit, and saccades. J Vis 2024;24:9. PMID: 38546586. PMCID: PMC10996402. DOI: 10.1167/jov.24.3.9.
Abstract
The ability to accurately perceive and track moving objects is crucial for many everyday activities. In this study, we use a "double-drift stimulus" to explore the processing of visual motion signals that underlie perception, pursuit, and saccade responses to a moving object. Participants were presented with peripheral moving apertures filled with noise that either drifted orthogonally to the aperture's direction or had no net motion. Participants were asked to saccade to and track these targets with their gaze as soon as they appeared and then to report their direction. In the trials with internal motion, the target disappeared at saccade onset so that the first 100 ms of the postsaccadic pursuit response was driven uniquely by peripheral information gathered before saccade onset. This provided independent measures of perceptual, pursuit, and saccadic responses to the double-drift stimulus on a trial-by-trial basis. Our analysis revealed systematic differences between saccadic responses, on one hand, and perceptual and pursuit responses, on the other. These differences are unlikely to be caused by differences in the processing of motion signals, because both saccades and pursuit seem to rely on shared target position and velocity information. We conclude that our results are instead due to a difference in how the processing mechanisms underlying perception, pursuit, and saccades combine motor signals with target position. These findings advance our understanding of the mechanisms underlying the dissociation between perception and eye movements in visual processing.
Affiliation(s)
- Matteo Lisi
- Department of Psychology, Royal Holloway, University of London, London, UK
- Patrick Cavanagh
- Department of Psychology, Glendon College, Toronto, Ontario, Canada
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
3
Soballa P, Frings C, Schmalbrock P, Merz S. Multisensory integration reduces landmark distortions for tactile but not visual targets. J Neurophysiol 2023;130:1403-1413. PMID: 37910559. DOI: 10.1152/jn.00282.2023.
Abstract
Target localization is influenced by the presence of additionally presented nontargets, termed landmarks. In both the visual and the tactile modality, these landmarks lead to systematic distortions of target localization, often resulting in a shift toward the landmark. This shift has been attributed to averaging of the spatial memory of both stimuli. Crucially, everyday experiences often rely on multiple modalities, and multisensory research suggests that inputs from different senses are optimally integrated, not averaged, for accurate perception, resulting in more reliable perception of cross-modal compared with uni-modal stimuli. As this could also lead to a reduced influence of the landmark, we tested whether landmark distortions are reduced when the landmark is presented in a different modality or whether they are unaffected by the modalities presented. In two experiments (each n = 30), tactile or visual targets were paired with tactile or visual landmarks. Experiment 1 showed that targets were shifted less toward landmarks from a different modality than from the same modality, an effect more pronounced for tactile than for visual targets. Experiment 2 aimed to replicate this pattern with increased visual uncertainty, to rule out that smaller localization shifts of visual targets due to low uncertainty had driven the results. Still, landmark modality influenced localization shifts for tactile but not visual targets. The data pattern for tactile targets is not in line with memory averaging but seems to reflect the effects of multisensory integration, whereas visual targets were less prone to landmark distortions and do not appear to benefit from multisensory integration.

NEW & NOTEWORTHY In the present study, we directly tested the predictions of two different accounts, namely, spatial memory averaging and multisensory integration, concerning the degree of landmark distortions of targets across modalities. We showed that landmark distortions were reduced across modalities compared with distortions within modalities, which is in line with multisensory integration. Crucially, this pattern was more pronounced for tactile than for visual targets.
Affiliation(s)
- Paula Soballa
- Department of Psychology, University of Trier, Germany
- Simon Merz
- Department of Psychology, University of Trier, Germany
4
Forster PP, Fiehler K, Karimpur H. Egocentric cues influence the allocentric spatial memory of object configurations for memory-guided actions. J Neurophysiol 2023;130:1142-1149. PMID: 37791381. DOI: 10.1152/jn.00149.2023.
Abstract
Allocentric and egocentric reference frames are used to code the spatial position of action targets relative to objects in the environment, i.e., landmarks (allocentric), or relative to the observer (egocentric). Previous research investigated reference frames in isolation, for example, by shifting landmarks relative to the target and asking participants to reach to the remembered target location. Systematic reaching errors were found in the direction of the landmark shift and used as a proxy for allocentric spatial coding. Here, we examined the interaction of allocentric and egocentric reference frames by shifting the landmarks as well as the observer. We asked participants to encode a three-dimensional configuration of balls and to reproduce this configuration from memory after a short delay followed by a landmark or an observer shift. We also manipulated the number of landmarks to test its effect on the use of allocentric and egocentric reference frames. We found that participants were less accurate when reproducing the configuration of balls after an observer shift, which was reflected in larger configurational errors. In addition, an increase in the number of landmarks led to a stronger reliance on allocentric cues and a weaker contribution of egocentric cues. In sum, our results highlight the important role of egocentric cues for allocentric spatial coding in the context of memory-guided actions.

NEW & NOTEWORTHY Objects in our environment are coded relative to each other (allocentrically) and are thought to serve as independent and reliable cues (landmarks) in the context of unreliable egocentric signals. Contrary to this assumption, we demonstrate that egocentric cues alter allocentric spatial memory, which could reflect recently discovered interactions between allocentric and egocentric neural processing pathways. Furthermore, additional landmarks lead to a higher contribution of allocentric and a lower contribution of egocentric cues.
Affiliation(s)
- Pierre-Pascal Forster
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
5
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023;30:1621-1642. PMID: 37038031. DOI: 10.3758/s13423-023-02254-w.
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrates optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
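The standard optimal cue-combination rule that this review evaluates can be written compactly. The Python snippet below is a generic illustration (the cue means and variances are made up, not taken from any study): each cue's estimate is weighted by its relative reliability, yielding the minimum-variance combined estimate.

```python
# Generic optimal (MLE) cue combination: independent Gaussian cues are
# averaged with weights proportional to inverse variance. Values are
# illustrative only.

def mle_combine(estimates, variances):
    """Reliability-weighted average and combined variance of independent cues."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_estimate = sum(w * x for w, x in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance

# E.g., a landmark cue (mean 10.0, variance 1.0) and a self-motion cue
# (mean 14.0, variance 3.0): the combined estimate lies nearer the more
# reliable landmark cue, and its variance is below either single cue's.
est, var = mle_combine([10.0, 14.0], [1.0, 3.0])
print(est, var)  # 11.0 0.75
```

Tests of "statistical optimality" in the navigation literature typically compare human multi-cue response distributions against the combined estimate and variance this rule predicts from single-cue conditions.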
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
6
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023;6:938. PMID: 37704829. PMCID: PMC10499799. DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields were characterized by recording neural responses to various target-landmark combinations, which we then tested against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
7
Park YM, Park J, Kim IY, Kang JK, Jang DP. Interhemispheric Theta Coherence in the Hippocampus for Successful Object-Location Memory in Human Intracranial Encephalography. Neurosci Lett 2022;786:136769. DOI: 10.1016/j.neulet.2022.136769.
8
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020;238:1813-1826. PMID: 32500297. PMCID: PMC7438369. DOI: 10.1007/s00221-020-05839-2.
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany.
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany.
- Johannes Kurz
- NemoLab - Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
9
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020;20:1. PMID: 32271893. PMCID: PMC7405696. DOI: 10.1167/jov.20.4.1.
Abstract
An essential difference between pictorial space, displayed as paintings, photographs, or computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about distance and direction of objects, in the latter but not in the former. Thus, egocentric information should be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of previous studies relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, also after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
10
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020;125:203-214. PMID: 32006875. DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study, we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification, a brief air-puff was applied to the participant's right eye, inducing an eye blink. During the blink, the target object disappeared from the scene, and in half of the trials the remaining objects, which functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints, irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of the landmark shift was stronger for memory-guided than for real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements, and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany.
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
11
Karimpur H, Morgenstern Y, Fiehler K. Facilitation of allocentric coding by virtue of object-semantics. Sci Rep 2019;9:6263. PMID: 31000759. PMCID: PMC6472393. DOI: 10.1038/s41598-019-42735-4.
Abstract
In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstrate that (i) we can quantify the semantic similarity of objects and that (ii) semantically similar objects can serve as a cluster of landmarks that are allocentrically coded. Participants arranged a set of objects based on their semantic similarity. These arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, that we used in a virtual reality experiment. Participants were asked to perform memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that the reaching endpoints systematically deviated in the direction of landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object-semantics facilitate allocentric coding by creating stable spatial configurations.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany.
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
12
Aagten-Murphy D, Bays PM. Independent working memory resources for egocentric and allocentric spatial information. PLoS Comput Biol 2019;15:e1006563. PMID: 30789899. PMCID: PMC6400418. DOI: 10.1371/journal.pcbi.1006563.
Abstract
Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the results did not require that the landmark be visible throughout the memory delay period, it was essential that it was visible both during encoding and response. We present a simple model that can accurately capture human performance by considering relative (allocentric) spatial information as an independent localization estimate which degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue information is likely to be critical for spatial localization in natural settings which contain an abundance of visual landmarks.
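The model summarized in this abstract can be caricatured in a few lines of Python. This is a minimal sketch under assumed parameter values (the linear distance-dependent variance function and all numbers are hypothetical, not from the paper): allocentric, landmark-relative position is an independent estimate whose variance grows with distance from the landmark, and it is optimally integrated with an egocentric estimate.

```python
# Sketch: distance-dependent allocentric cue optimally integrated with an
# egocentric estimate. Parameters are hypothetical.

def allocentric_variance(distance, base=1.0, slope=0.5):
    """Allocentric precision degrades (variance grows) with distance to the landmark."""
    return base + slope * distance

def integrated_variance(ego_var, allo_var):
    """Optimal integration of two independent estimates: inverse variances add."""
    return 1.0 / (1.0 / ego_var + 1.0 / allo_var)

ego_var = 2.0
near = integrated_variance(ego_var, allocentric_variance(1.0))   # item near the landmark
far = integrated_variance(ego_var, allocentric_variance(10.0))   # item far from the landmark

# Spatially selective benefit: integration helps most near the landmark,
# and never hurts relative to the egocentric estimate alone.
print(near < far < ego_var)  # True
```

This reproduces the qualitative signature reported in the abstract: a recall-precision improvement that is largest for items near a persistent landmark, with the egocentric estimate unaffected by whether allocentric information is also encoded.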
Affiliation(s)
- David Aagten-Murphy
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Paul M. Bays
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
13
Grasping occluded targets: investigating the influence of target visibility, allocentric cue presence, and direction of motion on gaze and grasp accuracy. Exp Brain Res 2017;235:2705-2716. PMID: 28597294. DOI: 10.1007/s00221-017-5004-6.
Abstract
Participants executed right-handed reach-to-grasp movements toward horizontally translating targets. Visual feedback of the target when reaching, as well as the presence of additional cues placed above and below the target's path, was manipulated. Comparison of average fixations at reach onset and at the time of the grasp suggested that participants accurately extrapolated the occluded target's motion prior to reach onset, but not after the reach had been initiated, resulting in inaccurate grasp placements. Final gaze and grasp positions were more accurate when reaching for leftward moving targets, suggesting individuals use different grasp strategies when reaching for targets traveling away from the reaching hand. Additional cue presence appeared to impair participants' ability to extrapolate the disappeared target's motion, and caused grasps for occluded targets to be less accurate. Novel information is provided about the eye-hand strategies used when reaching for moving targets in unpredictable visual conditions.
14
Klinghammer M, Blohm G, Fiehler K. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching. Front Neurosci 2017;11:204. PMID: 28450826. PMCID: PMC5390010. DOI: 10.3389/fnins.2017.00204.
Abstract
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability, as a function of task-relevance, on allocentric coding for memory-guided reaching. For that purpose, we presented participants with images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and, after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task-relevance. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and on the reliability of task-relevant allocentric information.
Affiliation(s)
- Gunnar Blohm: Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Katja Fiehler: Experimental Psychology, Justus-Liebig-University, Giessen, Germany
15
Klinghammer M, Schütz I, Blohm G, Fiehler K. Allocentric information is used for memory-guided reaching in depth: A virtual reality study. Vision Res 2016; 129:13-24. [PMID: 27789230] [DOI: 10.1016/j.visres.2016.10.004]
Abstract
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most studies have been limited to 2D space. Here, we studied allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared, but with one object missing (= reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing it. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and was independent of observer-target distance. Reaching endpoints also varied systematically with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth, with retinal disparity and vergence, as well as object size, providing important binocular and monocular depth cues.
Affiliation(s)
- Mathias Klinghammer: Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Immo Schütz: TU Chemnitz, Institut für Physik, Reichenhainer Str. 70, 09126 Chemnitz, Germany
- Gunnar Blohm: Queen's University, Centre for Neuroscience Studies, 18 Stuart Street, Kingston, Ontario K7L 3N6, Canada
- Katja Fiehler: Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
16
Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015; 9:648. [PMID: 26696861] [PMCID: PMC4673307] [DOI: 10.3389/fnhum.2015.00648]
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon: Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany