1. Prabhakar AT, Ninan GA, Roy A, Kumar S, Margabandhu K, Priyadarshini Michael J, Bal D, Mannam P, McKendrick AM, Carter O, Garrido MI. Self-motion induced environmental kinetopsia and pop-out illusion - Insight from a single case phenomenology. Neuropsychologia 2024; 196:108820. [PMID: 38336207] [DOI: 10.1016/j.neuropsychologia.2024.108820]
Abstract
Stable visual perception while we are moving depends on complex interactions between multiple brain regions. We report a patient with damage to the right occipital and temporal lobes who presented with a visual disturbance of inward movement of roadside buildings towards the centre of his visual field, which occurred only when he moved forward on his motorbike. We describe this phenomenon as "self-motion induced environmental kinetopsia". Additionally, he was identified to have another illusion, in which objects displayed on a screen appeared to pop out of the background. Here, we describe the clinical phenomena and the behavioural tasks specifically designed to document and measure this altered visual experience. Using the methods of lesion mapping and lesion network mapping, we were able to demonstrate disrupted functional connectivity in areas that process flow-parsing, such as V3A and V6, which may underpin self-motion induced environmental kinetopsia. Moreover, we suggest that altered connectivity to regions that process environmental frames of reference, such as the retrosplenial cortex (RSC), might explain the pop-out illusion. Our case adds novel and convergent lesion-based evidence for the role of these brain regions in visual processing.
Affiliation(s)
- Appawamy Thirumal Prabhakar
  Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India; Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia
- George Abraham Ninan
  Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India
- Anupama Roy
  Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Sharath Kumar
  Department of Neurological Sciences, Christian Medical College, Vellore, India
- Kavitha Margabandhu
  Department of Neurological Sciences, Christian Medical College, Vellore, India
- Jessica Priyadarshini Michael
  Cognitive neuroscience and Clinical Phenomenology Lab, Christian Medical College, Vellore, India; Department of Neurological Sciences, Christian Medical College, Vellore, India
- Deepti Bal
  Department of Neurological Sciences, Christian Medical College, Vellore, India
- Pavithra Mannam
  Department of Radiology, Christian Medical College, Vellore, India
- Allison M McKendrick
  Division of Optometry, School of Allied Health, University of Western Australia, Lions Eye Institute, Perth, Australia
- Olivia Carter
  Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia
- Marta I Garrido
  Melbourne School of Psychological Sciences, University of Melbourne, Vic, Australia; Graeme Clark Institute for Biomedical Engineering, University of Melbourne, Vic, Australia
2. The thickness of the ventral medial prefrontal cortex predicts the prior-entry effect for allocentric representation in near space. Sci Rep 2022; 12:5704. [PMID: 35383294] [PMCID: PMC8983760] [DOI: 10.1038/s41598-022-09837-y]
Abstract
Neuropsychological studies have demonstrated that the preferential processing of near-space and egocentric representation is associated with the self-prioritization effect (SPE). However, relatively little is known concerning whether the SPE preferentially engages the representation of egocentric frames or near-space processing in the interaction between spatial reference frames and spatial domains. The present study adopted a variant of the shape-label matching task (i.e., color-label) to establish an SPE, combined with a spatial reference frame judgment task, to examine how the SPE leads to preferential processing of near-space or egocentric representations. Surface-based morphometry analysis was also adopted to extract the cortical thickness of the ventral medial prefrontal cortex (vmPFC) to examine whether it could predict differences in the SPE at the behavioral level. The results showed a significant SPE, manifested as faster responses to the self-associated color than to the stranger-associated color. Additionally, the SPE showed a preference for near-space processing, followed by egocentric representation. More importantly, the thickness of the vmPFC predicted the difference in the SPE across reference frames, particularly in the left frontal pole cortex and bilateral rostral anterior cingulate cortex. These findings indicate that the SPE shows a prior-entry effect for information at the spatial level relative to the reference frame level, providing evidence for the structural significance of the self-processing region.
3. Huesmann K, Loffing F, Büsch D, Schorer J, Hagemann N. Varying Degrees of Perception-Action Coupling and Anticipation in Handball Goalkeeping. J Mot Behav 2021; 54:391-400. [PMID: 34663190] [DOI: 10.1080/00222895.2021.1984868]
Abstract
Anticipation in sports is commonly investigated using perception-action uncoupled methods, thus raising questions regarding transferability of findings to the field. The aim of this study was to investigate the influence of different degrees of perception-action coupling on anticipation in handball goalkeeping. Advanced, intermediate and novice handball goalkeepers watched videos of throws on the goal and were asked to anticipate throw direction via key press (perception-action artificial condition) and via natural movement response (perception-action simulated condition). Results reveal overall superior performance in the artificial compared to the simulated condition. Skill-based differences, however, were descriptively more pronounced in the simulated condition compared to the artificial condition. The findings further highlight the importance of more representative research methods to unravel perceptual-cognitive skill in sports.
Affiliation(s)
- Kim Huesmann
  Institute of Sport Science, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Florian Loffing
  Institute of Sport Science, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Dirk Büsch
  Institute of Sport Science, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Jörg Schorer
  Institute of Sport Science, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Norbert Hagemann
  Institute of Sports and Sport Science, University of Kassel, Kassel, Germany
4. Prabhakar TA. Pop-out illusion as the initial presentation of posterior cortical atrophy. Neurocase 2021; 27:266-269. [PMID: 34128452] [DOI: 10.1080/13554794.2021.1929333]
Abstract
Posterior cortical atrophy is a rare neurodegenerative disease that presents with progressive higher-order visual impairment. We describe a case of posterior cortical atrophy in a 62-year-old, right-handed man, who initially presented with difficulty in viewing the television screen, followed by reading difficulty a few months later, and who then developed features of Balint's syndrome over the course of 3 years. We report an illusion, which we noticed at the time of the initial presentation, in which the characters on the television appeared to be popping out of the screen. We describe this as the "pop-out illusion".
5. Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. [PMID: 33318073] [PMCID: PMC7877461] [DOI: 10.1523/eneuro.0446-20.2020]
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
6. Longo MR, Rajapakse SS, Alsmith AJT, Ferrè ER. Shared contributions of the head and torso to spatial reference frames across spatial judgments. Cognition 2020; 204:104349. [PMID: 32599311] [PMCID: PMC7520546] [DOI: 10.1016/j.cognition.2020.104349]
Abstract
Egocentric frames of reference take the body as the point of origin of a spatial coordinate system. Bodies, however, are not points but extended objects, with distinct parts that can move independently of one another. We recently developed a novel paradigm, the misalignment paradigm, to probe the use of different body parts in simple spatial judgments. In this study, we applied the misalignment paradigm in a perspective-taking task to investigate whether the weightings given to different body parts are shared across different spatial judgments involving different spatial axes. Participants saw bird's-eye images of a person with their head rotated 45° relative to the torso. On each trial, a ball appeared and participants judged either whether the ball was to the person's left or right, or whether the ball was in front of or behind them. By analysing the pattern of responses with respect to both head and torso, we quantified the contribution of each body part to the reference frames underlying each judgment. For both judgment types we found clear contributions of both head and torso, with more weight given on average to the torso. Individual differences in the use of the two body parts were correlated across judgment types, indicating a shared set of weightings across spatial axes and judgments. Moreover, retesting of participants several months later showed high stability of these weightings, suggesting that they are stable characteristics of individuals.
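The weighting analysis described above can be illustrated with a minimal sketch (the data and the function name are hypothetical; the paper's actual model fitting may differ): a single weight is fit so that judged reference directions are modelled as a mixture of head and torso orientations.

```python
def fit_body_part_weight(head_deg, torso_deg, judged_deg):
    """Fit a weight w in [0, 1] so that each judged reference direction
    is modelled as w * torso + (1 - w) * head. A coarse grid search is
    used; circular statistics are ignored in this small-angle sketch."""
    best_w, best_err = 0.0, float("inf")
    for i in range(1001):
        w = i / 1000
        err = sum((w * t + (1 - w) * h - j) ** 2
                  for h, t, j in zip(head_deg, torso_deg, judged_deg))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Hypothetical trials: torso at 0 deg, head rotated 45 deg, and judgments
# lying closer to the torso axis (as the study reports on average).
w_torso = fit_body_part_weight([45.0] * 5, [0.0] * 5,
                               [15.0, 12.0, 18.0, 14.0, 16.0])
# w_torso comes out near 2/3, i.e. roughly twice the weight on the torso.
```

The same fitted weight can then be compared across left/right and front/behind judgments, which is how a shared set of weightings would show up as a correlation across judgment types.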
Affiliation(s)
- Matthew R Longo
  Department of Psychological Sciences, Birkbeck, University of London, United Kingdom
- Sampath S Rajapakse
  Department of Psychological Sciences, Birkbeck, University of London, United Kingdom
- Elisa R Ferrè
  Department of Psychology, Royal Holloway, University of London, United Kingdom
7. Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. [PMID: 32390052] [DOI: 10.1093/cercor/bhaa090]
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
Affiliation(s)
- Vishal Bharmauria
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
  Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3; Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3
8. Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. [PMID: 32006875] [DOI: 10.1016/j.cortex.2019.12.010]
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
  Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
  Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
9. A pantomiming priming study on the grasp and functional use actions of tools. Exp Brain Res 2019; 237:2155-2165. [PMID: 31203403] [DOI: 10.1007/s00221-019-05581-4]
Abstract
It has previously been demonstrated that tool recognition is facilitated by the repeated visual presentation of object features affording actions, such as those related to grasping and functional use. It is unclear, however, whether this can also facilitate pantomiming. Participants were presented with an image of a prime followed by a target tool and were required to pantomime the appropriate action for each one. The grasp and functional use attributes of the target tool were either the same as or different to those of the prime. Contrary to expectations, participants were slower at pantomiming the target tool relative to the prime regardless of whether the grasp and function of the tool were the same or different, except when the prime and target tools consisted of identical images of the same exemplar. We also found a decrease in accuracy of performing functional use actions for the target tool relative to the prime when the two differed in functional use but not grasp. We reconcile differences between our findings and those of priming studies on tool recognition by appealing to differences in task demands and known differences in how the brain recognises tools and performs actions to make use of them.
10. Hilchey MD, Weidler BJ, Rajsic J, Pratt J. Does changing distractor environments eliminate spatiomotor biases? Visual Cognition 2018. [DOI: 10.1080/13506285.2018.1532939]
Affiliation(s)
- Blaire J. Weidler
  Department of Psychology, University of Toronto, Toronto, ON, Canada
- Jason Rajsic
  Department of Psychological Sciences, Vanderbilt University, Nashville, TN, USA
- Jay Pratt
  Department of Psychology, University of Toronto, Toronto, ON, Canada
11.
Abstract
Recent research has expanded the list of factors that control spatial attention. Besides current goals and perceptual salience, statistical learning, reward, motivation, and emotion also affect attention. But do these various factors influence spatial attention in the same manner, as suggested by the integrated framework of attention, or do they target different aspects of spatial attention? Here I present evidence that the control of attention may be implemented in two ways. Whereas current goals typically modulate where in space attention is prioritized, search habits affect how one moves attention in space. Using the location probability learning paradigm, I show that a search habit forms when people frequently find a visual search target in one region of space. Attentional cuing by probability learning differs from that by current goals. Probability cuing is implicit and persists long after the probability cue is no longer valid. Whereas explicit goal-driven attention codes space in an environment-centered reference frame, probability cuing is viewer-centered and is insensitive to secondary working memory load and aging. I propose a multi-level framework that separates the source of attentional control from its implementation. Like the integrated framework, the multi-level framework considers current goals, perceptual salience, and selection history as major sources of attentional control. However, these factors are implemented in two ways, controlling where spatial attention is allocated and how one shifts attention in space.
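As a rough illustration of the location probability learning paradigm mentioned above (the 50% proportion, quadrant coding, and function name here are assumptions for the sketch, not the exact published design), target locations can be sampled so that one "rich" quadrant holds the target far more often than the others:

```python
import random

def make_trials(n_trials, rich_quadrant=0, rich_p=0.5, seed=0):
    """Sample target quadrants for a location probability learning block.
    The target appears in the rich quadrant with probability rich_p, and
    with equal probability in each of the other three quadrants."""
    rng = random.Random(seed)
    sparse = [q for q in (0, 1, 2, 3) if q != rich_quadrant]
    trials = []
    for _ in range(n_trials):
        if rng.random() < rich_p:
            trials.append(rich_quadrant)
        else:
            trials.append(rng.choice(sparse))
    return trials

# Roughly half of all targets land in quadrant 0; the habit is measured
# as a search-time advantage for that quadrant that outlasts the bias.
trials = make_trials(1000)
rich_rate = trials.count(0) / len(trials)
```

Because the bias is defined over screen locations while the reported habit is viewer-centered, a follow-up block would typically rotate the display or the participant to dissociate the two reference frames.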
Affiliation(s)
- Yuhong V Jiang
  Department of Psychology, University of Minnesota, Minneapolis, MN, USA
12. Chen Y, Monaco S, Crawford JD. Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. Eur J Neurosci 2018. [PMID: 29512943] [DOI: 10.1111/ejn.13885]
Abstract
Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.
Affiliation(s)
- Ying Chen
  Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada
- Simona Monaco
  Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- J Douglas Crawford
  Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada; Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
13. Body position and motor imagery strategy effects on imagining gait in healthy adults: Results from a cross-sectional study. PLoS One 2018; 13:e0191513. [PMID: 29543816] [PMCID: PMC5854233] [DOI: 10.1371/journal.pone.0191513]
Abstract
Background: Assessment of changes in higher levels of gait control with aging is important to better understand age-related gait instability, with the perspective of improving the screening of individuals at risk for falls. The comparison between the actual Timed Up and Go test (aTUG) and its imagined version (iTUG) is a simple clinical way to assess age-related changes in gait control. The modulation of iTUG performance by body position and motor imagery (MI) strategy in normal aging has not yet been evaluated. This study aims 1) to compare the aTUG time with the iTUG time under different body positions (i.e., sitting, standing or supine) in healthy young and middle-aged, and older adults, and 2) to examine the associations of body positions and MI strategies (i.e., egocentric versus allocentric) with the time needed to complete the iTUG and the TUG delta time (i.e., the relative difference between aTUG and iTUG), while taking into consideration the clinical characteristics of participants.

Methods: A total of 60 healthy individuals (30 young and middle-aged participants, 26.6±7.4 years, and 30 older participants, 75.0±4.4 years) were recruited in this cross-sectional study. The iTUG was performed while sitting, standing and in the supine position. The times of the aTUG and of the iTUG under the three body positions, the TUG delta time, and the MI strategy (i.e., egocentric representation, defined as representation of the location of objects in space relative to the body axes of the self, versus allocentric representation, defined as encoding information about body movement with respect to other objects, the location of the body being defined relative to the location of other objects) were used as outcomes. Age, sex, height, weight, number of drugs taken daily, level of physical activity and prevalence of closed eyes while performing the iTUG were recorded.

Results: The aTUG time is significantly greater than the iTUG time while sitting and standing (P<0.001), except when older participants are standing. A significant difference is reported between the iTUG while sitting or standing and the iTUG while supine (P≤0.002), with longer times in the supine position. Multiple linear regressions confirm that the supine position is associated with a significantly increased iTUG time (P≤0.04) and decreased TUG delta time (P≤0.010), regardless of the adjustment. Older participants use allocentric MI while imagining the TUG more frequently than young and middle-aged participants, regardless of body position (P≤0.001). The allocentric MI strategy is associated with a significant decrease in iTUG time (P=0.037) only when adjusting for age. A significant increase in iTUG time is associated with age (P≤0.026).

Conclusions: The supine position while imagining the TUG yields a more accurate reproduction of the actual TUG performance. Age has a limited effect on iTUG performance but is associated with a change in MI from egocentric to allocentric representation that decreases iTUG performance, and thus increases the discrepancy with the aTUG.
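The TUG delta time above (the relative difference between aTUG and iTUG) can be sketched as a one-line computation; expressing it as a percentage of the actual time is an assumption of this sketch, since the abstract does not give the exact formula:

```python
def tug_delta_time(atug_s, itug_s):
    """Relative difference between actual (aTUG) and imagined (iTUG)
    Timed Up and Go times, as a percentage of the actual time.
    Positive values mean the imagined TUG was faster than the actual one."""
    if atug_s <= 0:
        raise ValueError("aTUG time must be positive")
    return 100.0 * (atug_s - itug_s) / atug_s

# Hypothetical example: a 10 s actual TUG imagined in 8 s gives a
# delta of 20%, i.e. imagination underestimates the actual duration.
delta = tug_delta_time(10.0, 8.0)
```

A larger absolute delta indicates a greater discrepancy between imagined and actual gait, which is the quantity the study relates to age and MI strategy.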
14. Why Do Irrelevant Alternatives Matter? An fMRI-TMS Study of Context-Dependent Preferences. J Neurosci 2017; 37:11647-11661. [PMID: 29109242] [DOI: 10.1523/jneurosci.2307-16.2017]
Abstract
Both humans and animals are known to exhibit a violation of rationality known as the "decoy effect": introducing an irrelevant option (a decoy) can influence choices among other (relevant) options. Exactly how and why decoys trigger this effect is not known. It may be an example of fast heuristic decision-making, which is adaptive in natural environments but may lead to biased choices in certain markets or experiments. We used fMRI and transcranial magnetic stimulation to investigate the neural underpinning of the decoy effect in subjects of both sexes. The left ventral striatum was more active when the chosen option dominated the decoy. This is consistent with the hypothesis that the presence of a decoy option influences the valuation of other options, making valuation context-dependent even when choices appear fully rational. Consistent with the idea that control is recruited to prevent heuristics from producing biased choices, the right inferior frontal gyrus, often implicated in inhibiting prepotent responses, connected more strongly with the striatum when subjects successfully overrode the decoy effect and made unbiased choices. This is further supported by our transcranial magnetic stimulation experiment: subjects whose right inferior frontal gyrus was temporarily disrupted made biased choices more often than a control group. Our results suggest that the neural basis of the decoy effect could be the context-dependent activation of the valuation area, while the differential connectivity from the frontal area may indicate how deliberate control monitors and corrects errors and biases in decision-making.

Significance Statement: Standard theories of rational decision-making assume context-independent valuations of available options. Motivated by the importance of this basic assumption, we used fMRI to study how the human brain assigns values to available options. We found activity in the valuation area to be consistent with the hypothesis that values depend on irrelevant aspects of the environment, even for subjects whose choices appear fully rational. Such context-dependent valuations may lead to biased decision-making. We further found differential connectivity from the frontal area to the valuation area depending on whether biases were successfully overcome. This suggests a mechanism for making rational choices despite the potential bias. Further support was obtained by a transcranial magnetic stimulation experiment, in which subjects whose frontal control was temporarily disrupted made biased choices more often than a control group.
|
15
|
Gaze-centered coding of proprioceptive reach targets after effector movement: Testing the impact of online information, time of movement, and target distance. PLoS One 2017; 12:e0180782. [PMID: 28678886 PMCID: PMC5498052 DOI: 10.1371/journal.pone.0180782] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Accepted: 06/21/2017] [Indexed: 11/19/2022] Open
Abstract
In previous research, we demonstrated that spatial coding of proprioceptive reach targets depends on the presence of an effector movement (Mueller & Fiehler, Neuropsychologia, 2014, 2016). In these studies, participants were asked to reach in darkness with their right hand to a proprioceptive target (tactile stimulation on the fingertip) while their gaze direction was varied. They either moved their left, stimulated hand towards a target location or kept it stationary at this location, where they received a touch on the fingertip to which they then reached with their right hand. When the stimulated hand was moved, reach errors varied as a function of gaze relative to target, whereas reach errors were independent of gaze when the hand was kept stationary. The present study further examines whether (a) the availability of proprioceptive online information, i.e., reaching to an online versus a remembered target, (b) the time of the effector movement, i.e., before or after target presentation, or (c) the target distance from the body influences gaze-centered coding of proprioceptive reach targets. We found gaze-dependent reach errors in the conditions which included a movement of the stimulated hand, irrespective of whether proprioceptive information was available online or remembered. This suggests that an effector movement leads to gaze-centered coding for both online and remembered proprioceptive reach targets. Moreover, moving the stimulated hand before or after target presentation did not affect gaze-dependent reach errors, indicating a continuous spatial update of positional signals of the stimulated hand rather than of the target location per se. However, reaching to a location close to the body rather than farther away (but still within reachable space) generally decreased the influence of a gaze-centered reference frame.
|
16
|
Abstract
Linkenauger, Witt, and Proffitt (Journal of Experimental Psychology: Human Perception and Performance, 37(5), 1432–1441, 2011, Experiment 2) reported that right-handers estimated objects as smaller if they intended to grasp them in their right rather than their left hand. Based on the action-specific account, they argued that this scaling effect occurred because participants believed their right hand could grasp larger objects. However, Collier and Lawson (Journal of Experimental Psychology: Human Perception and Performance, 43(4), 749–769, 2017) failed to replicate this effect. Here, we investigated whether this discrepancy in results arose from demand characteristics. We examined two forms of demand characteristics: altered responses following conscious hypothesis guessing (Experiments 1 and 2), and subtle influences of the experimental context (Experiment 3). We found no scaling effects when participants were given instructions which merely implied the expected outcome of the experiment (Experiment 1), but scaling effects were obtained when we used unrealistically explicit instructions which gave the exact prediction made by the action-specific account (Experiment 2). Scaling effects were also found using a context in which grasping capacity could seem relevant to size estimation, by asking participants about the perceived graspability of an object immediately before asking about its size on every trial, as was done in Experiment 2 of Linkenauger et al. (2011). These results suggest that demand characteristics due to context effects, rather than hypothesis guessing or, as proposed by the action-specific account, a change in the perceived size of objects, could explain the scaling effects reported in Experiment 2 of Linkenauger et al. (2011).
|
17
|
Online adjustments of leg movements in healthy young and old. Exp Brain Res 2017; 235:2329-2348. [DOI: 10.1007/s00221-017-4967-7] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Accepted: 04/24/2017] [Indexed: 12/22/2022]
|
18
|
Stuart S, Lord S, Hill E, Rochester L. Gait in Parkinson's disease: A visuo-cognitive challenge. Neurosci Biobehav Rev 2016; 62:76-88. [PMID: 26773722 DOI: 10.1016/j.neubiorev.2016.01.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2015] [Revised: 12/15/2015] [Accepted: 01/05/2016] [Indexed: 12/18/2022]
Abstract
Vision and cognition have both been related to gait impairment in Parkinson's disease (PD) through separate strands of research. The cumulative and interactive effect of the two (which we term visuo-cognition) has not previously been investigated, and little is known about the influence of cognition on vision with respect to gait. Understanding the role of vision, cognition and visuo-cognition in gait in PD is critical for data interpretation and for inferring and testing underlying mechanisms. The purpose of this comprehensive narrative review was to examine the interdependent and interactive roles of cognition and vision in gait in PD and in older adults. Evidence from a broad range of research disciplines was reviewed and summarised. A key finding was that attention appears to play a pivotal role in mediating gait, cognition and vision, and it should be explicitly considered in future research in this field.
Affiliation(s)
- Samuel Stuart
- Institute of Neuroscience/Newcastle University Institute of Ageing, Clinical Ageing Research Unit, Campus for Ageing and Vitality, Newcastle University, Newcastle upon Tyne, United Kingdom
- Sue Lord
- Institute of Neuroscience/Newcastle University Institute of Ageing, Clinical Ageing Research Unit, Campus for Ageing and Vitality, Newcastle University, Newcastle upon Tyne, United Kingdom
- Elizabeth Hill
- Institute of Neuroscience/Newcastle University Institute of Ageing, Clinical Ageing Research Unit, Campus for Ageing and Vitality, Newcastle University, Newcastle upon Tyne, United Kingdom
- Lynn Rochester
- Institute of Neuroscience/Newcastle University Institute of Ageing, Clinical Ageing Research Unit, Campus for Ageing and Vitality, Newcastle University, Newcastle upon Tyne, United Kingdom.
|
19
|
Jiang YV, Won BY. Spatial scale, rather than nature of task or locomotion, modulates the spatial reference frame of attention. J Exp Psychol Hum Percept Perform 2015; 41:866-78. [PMID: 25867510 DOI: 10.1037/xhp0000056] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visuospatial attention is strongly biased toward locations that had frequently contained a search target before. However, the function of this bias depends on the reference frame in which attended locations are coded. Previous research has shown a striking difference between tasks administered on a computer monitor and those administered in a large environment, with the former inducing viewer-centered learning and the latter environment-centered learning. Why does environment-centered learning fail on a computer? Here, we tested 3 possibilities: differences in spatial scale, the nature of the task, and locomotion may each influence the reference frame of attention. Participants searched for a target on a monitor placed flat on a stand. On each trial, they stood at a different location around the monitor. The target was frequently located in a fixed area of the monitor, but changes in participants' perspective rendered this area random relative to the participants. Under incidental learning conditions, participants failed to acquire environment-centered learning even when (a) the task and display resembled those of a large-scale task and (b) the search task required locomotion. The difficulty of inducing environment-centered learning on a computer underscores the egocentric nature of visual attention. It supports the idea that spatial scale modulates the reference frame of attention.
|
20
|
Foley RT, Whitwell RL, Goodale MA. The two-visual-systems hypothesis and the perspectival features of visual experience. Conscious Cogn 2015; 35:225-33. [PMID: 25818025 DOI: 10.1016/j.concog.2015.03.005] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2015] [Revised: 03/03/2015] [Accepted: 03/05/2015] [Indexed: 11/30/2022]
Abstract
Some critics of the two-visual-systems hypothesis (TVSH) argue that it is incompatible with the fundamentally egocentric nature of visual experience (what we call the 'perspectival account'). The TVSH proposes that the ventral stream, which delivers up our visual experience of the world, works in an allocentric frame of reference, whereas the dorsal stream, which mediates the visual control of action, uses egocentric frames of reference. Given that the TVSH is also committed to the claim that dorsal-stream processing does not contribute to the contents of visual experience, it has been argued that the TVSH cannot account for the egocentric features of our visual experience. This argument, however, rests on a misunderstanding about how the operations mediating action and the operations mediating perception are specified in the TVSH. In this article, we emphasize the importance of the 'outputs' of the two systems to the specification of their respective operations. We argue that once this point is appreciated, it becomes evident that the TVSH is entirely compatible with a perspectival account of visual experience.
Affiliation(s)
- Robert T Foley
- The Rotman Institute of Philosophy, The University of Western Ontario, Canada; The Department of Philosophy, The University of Western Ontario, Canada; The Brain and Mind Institute, The University of Western Ontario, Canada.
- Robert L Whitwell
- The Department of Psychology, The University of British Columbia, Canada
- Melvyn A Goodale
- The Brain and Mind Institute, The University of Western Ontario, Canada; The Department of Psychology, The University of Western Ontario, Canada; The Department of Physiology and Pharmacology, The University of Western Ontario, Canada
|
21
|
Abstract
The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report color of target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for the egocentric task and higher activation in early visual cortex for the allocentric task. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification is different for target memory versus reach response.
|
22
|
Won BY, Jiang YV. Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention. J Exp Psychol Learn Mem Cogn 2014; 41:787-806. [PMID: 25401460 DOI: 10.1037/xlm0000040] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention.
|
23
|
Jiang YV, Swallow KM. Changing viewer perspectives reveals constraints to implicit visual statistical learning. J Vis 2014; 14:14.12.3. [PMID: 25294640 DOI: 10.1167/14.12.3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in the search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than in others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
|
24
|
Jiang YV, Won BY, Swallow KM, Mussack DM. Spatial reference frame of attention in a large outdoor environment. J Exp Psychol Hum Percept Perform 2014; 40:1346-57. [PMID: 24842066 DOI: 10.1037/a0036779] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A central question about spatial attention is whether it is referenced relative to the external environment or to the viewer. This question has received great interest in recent psychological and neuroscience research, with many, but not all, studies finding evidence for a viewer-centered representation. However, these previous findings were confined to computer-based tasks that involved stationary viewers. Because natural search behaviors differ from computer-based tasks in viewer mobility and spatial scale, it is important to understand how spatial attention is coded in the natural environment. To this end, we created an outdoor visual search task in which participants searched a large (690 square feet) concrete outdoor space to report which side of a coin on the ground faced up. They began the search in the middle of the space and were free to move around. Attentional cuing by statistical learning was examined by placing the coin in 1 quadrant of the search space on 50% of the trials. As in computer-based tasks, participants learned and used these regularities to guide search. However, cuing could be referenced to either the environment or the viewer. The spatial reference frame of attention thus shows greater flexibility in the natural environment than previously found in the lab.
|
25
|
Lukic L, Santos-Victor J, Billard A. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach. BIOLOGICAL CYBERNETICS 2014; 108:223-248. [PMID: 24570352 DOI: 10.1007/s00422-014-0591-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/26/2012] [Accepted: 02/03/2014] [Indexed: 06/03/2023]
Abstract
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, with the obstacle acting as an intermediary target. Furthermore, we demonstrate that the notion of the workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles along the way during reaching. We find that gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the control system found in humans. We validate our model for visuomotor control of a humanoid robot.
Affiliation(s)
- Luka Lukic
- Learning Algorithms and Systems Laboratory, Ecole Polytechnique Fédérale de Lausanne, EPFL-STI-I2S-LASA, Station 9, 1015 Lausanne, Switzerland.
|
26
|
Gaveau V, Pisella L, Priot AE, Fukui T, Rossetti Y, Pélisson D, Prablanc C. Automatic online control of motor adjustments in reaching and grasping. Neuropsychologia 2013; 55:25-40. [PMID: 24334110 DOI: 10.1016/j.neuropsychologia.2013.12.005] [Citation(s) in RCA: 75] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2013] [Revised: 11/16/2013] [Accepted: 12/04/2013] [Indexed: 11/16/2022]
Abstract
Following the seminal investigations of Marc Jeannerod on action and perception, specifically goal-directed movement, this review article addresses the visual and non-visual processes involved in guiding the hand in reaching and grasping tasks. The contributions of different sources of correction of ongoing movements are considered; these include visual feedback of the hand, as well as the often-neglected but important spatial updating and sharpening of goal localization following gaze-saccade orientation. The existence of an automatic online process guiding limb trajectory toward its goal is highlighted by a series of seminal experiments on goal-directed pointing movements. We then review psychophysical, electrophysiological, neuroimaging and clinical studies that have explored the properties of these automatic corrective mechanisms and their neural bases, and established their generality. Finally, the functional significance of automatic corrective mechanisms, referred to as motor flexibility, and their potential use in rehabilitation are discussed.
Affiliation(s)
- Valérie Gaveau
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Laure Pisella
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Anne-Emmanuelle Priot
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Institut de recherche biomédicale des armées (IRBA), BP 73, 91223 Brétigny-sur-Orge cedex, France
- Takao Fukui
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France
- Yves Rossetti
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Denis Pélisson
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Claude Prablanc
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France.
|
27
|
Leonards U, Stone S, Mohr C. Line bisection by eye and by hand reveal opposite biases. Exp Brain Res 2013; 228:513-25. [DOI: 10.1007/s00221-013-3583-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2013] [Accepted: 05/16/2013] [Indexed: 11/29/2022]
Affiliation(s)
- Ute Leonards
- School of Experimental Psychology, University of Bristol, 12a Priory Road, Bristol, BS8 1TU, UK.
|
28
|
Spatial reference frame of incidentally learned attention. Cognition 2013; 126:378-90. [DOI: 10.1016/j.cognition.2012.10.011] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 09/26/2012] [Accepted: 10/08/2012] [Indexed: 11/20/2022]
|
29
|
Abstract
Williams syndrome (WS) is a neurodevelopmental disorder characterized by severe visuospatial deficits, particularly affecting spatial navigation and wayfinding. Creating egocentric (viewer-dependent) and allocentric (viewer-independent) representations of space is essential for the development of these abilities. However, it remains unclear whether egocentric and allocentric representations are impaired in WS. In this study, we investigate egocentric and allocentric frames of reference in this disorder. A WS group (n = 18), as well as a chronological age-matched control group (n = 20), a non-verbal mental age-matched control group (n = 20) and a control group with intellectual disability (n = 17), was tested with a computerized and a 3D spatial judgment task. The results showed that WS participants are impaired when performing both egocentric and allocentric spatial judgments, even when compared with mental age-matched control participants. This indicates that a substantial deficit affecting both spatial representations is present in WS. The egocentric impairment is in line with the dorsal visual pathway deficit previously reported in WS. Interestingly, the difficulties found in performing allocentric spatial judgments provide important cues for better understanding ventral visual functioning in WS.
|
30
|
De Wit MM, Van der Kamp J, Masters RSW. Distinct task-independent visual thresholds for egocentric and allocentric information pick up. Conscious Cogn 2012; 21:1410-8. [PMID: 22868214 DOI: 10.1016/j.concog.2012.07.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2012] [Revised: 07/06/2012] [Accepted: 07/12/2012] [Indexed: 11/29/2022]
Abstract
The dominant view of the ventral and dorsal visual systems is that they subserve perception and action. De Wit, Van der Kamp, and Masters (2011) suggested that a more fundamental distinction might exist between the nature of information exploited by the systems. The present study distinguished between these accounts by asking participants to perform delayed matching (perception), pointing (action) and perceptual judgment responses to masked Müller-Lyer stimuli of varying length. Matching and pointing responses of participants who could not perceptually judge stimulus length at brief durations remained sensitive to veridical stimulus length (egocentric information), but not the illusion (allocentric, context-dependent information), which was effective at long durations. Distinct thresholds for egocentric and allocentric information pick up were thus evident irrespective of whether perception (matching) or action (pointing) responses were required. It was concluded that the dorsal and ventral systems may be delineated fundamentally by fast egocentric- and slower allocentric information pick up, respectively.
Affiliation(s)
- Matthieu M De Wit
- Institute of Human Performance, University of Hong Kong, Pokfulam, Hong Kong.
|
31
|
Han X, Byrne P, Kahana M, Becker S. When do objects become landmarks? A VR study of the effect of task relevance on spatial memory. PLoS One 2012; 7:e35940. [PMID: 22586455 PMCID: PMC3346813 DOI: 10.1371/journal.pone.0035940] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2012] [Accepted: 03/26/2012] [Indexed: 11/18/2022] Open
Abstract
We investigated how objects come to serve as landmarks in spatial memory, and more specifically how they form part of an allocentric cognitive map. Participants performing a virtual driving task incidentally learned the layout of a virtual town and locations of objects in that town. They were subsequently tested on their spatial and recognition memory for the objects. To assess whether the objects were encoded allocentrically we examined pointing consistency across tested viewpoints. In three experiments, we found that spatial memory for objects at navigationally relevant locations was more consistent across tested viewpoints, particularly when participants had more limited experience of the environment. When participants' attention was focused on the appearance of objects, the navigational relevance effect was eliminated, whereas when their attention was focused on objects' locations, this effect was enhanced, supporting the hypothesis that when objects are processed in the service of navigation, rather than merely being viewed as objects, they engage qualitatively distinct attentional systems and are incorporated into an allocentric spatial representation. The results are consistent with evidence from the neuroimaging literature that when objects are relevant to navigation, they not only engage the ventral "object processing stream", but also the dorsal stream and medial temporal lobe memory system classically associated with allocentric spatial memory.
Affiliation(s)
- Xue Han
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Patrick Byrne
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Michael Kahana
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Suzanna Becker
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
|
32
|
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958 DOI: 10.1146/annurev-neuro-061010-113749] [Citation(s) in RCA: 124] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
|
33
|
Kaleff CR, Aschidamini C, Baron J, Di Leone CN, Leone CN, Canavarro S, Vargas CD. Semi-automatic measurement of visual verticality perception in humans reveals a new category of visual field dependency. Braz J Med Biol Res 2011; 44:754-61. [PMID: 21779636 DOI: 10.1590/s0100-879x2011007500090] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2010] [Accepted: 04/04/2011] [Indexed: 11/21/2022] Open
Abstract
Previous assessment of verticality by means of rod and rod-and-frame tests indicated that human subjects can be more (field dependent) or less (field independent) influenced by a frame placed around a tilted rod. In the present study we propose a new approach to these tests. The judgment of visual verticality (rod test) was evaluated in 50 young subjects (28 males, ranging in age from 20 to 27 years) by randomly projecting a luminous rod tilted between -18° and +18° (negative values indicating left tilts) onto a tangent screen. In the rod-and-frame test the rod was displayed within a luminous fixed frame tilted at +18° or -18°. Subjects were instructed to indicate verbally the rod's direction of inclination (forced choice). Visual dependency was estimated by means of a Visual Index calculated from rod and rod-and-frame test values. Based on this index, volunteers were classified as field dependent, intermediate or field independent. A fourth category was created for those field-independent subjects whose number of correct guesses in the rod-and-frame test exceeded that in the rod test, indicating improved performance when a surrounding frame was present. In conclusion, the combined use of the subjective visual vertical and the rod-and-frame test provides a specific and reliable evaluation of verticality perception in healthy subjects and might be of use to probe changes in brain function after central or peripheral lesions.
Affiliation(s)
- C R Kaleff
- Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, RJ, Brasil
34
Bouquet CA, Shipley TF, Capa RL, Marshall PJ. Motor contagion: goal-directed actions are more contagious than non-goal-directed actions. Exp Psychol 2011; 58:71-8. [PMID: 20494864 DOI: 10.1027/1618-3169/a000069] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Recent theories posit a mirror-matching system mapping observed actions onto one's own motor system. Determining whether this system makes a distinction between goal-directed and non-goal-directed actions is crucial for the understanding of its function. The present study tested whether motor interference between observed and executed actions, which is thought to be an index of perceptual-motor matching, depends on the presence of goals in the observed action. Participants executed sinusoidal arm movements while observing a video of another person making similar or different movements. In certain conditions, elements representing goals for the observed movement were superimposed on the video displays. Overall, observing an incongruent movement interfered with movement execution. This interference was markedly increased when the observed incongruent movement was directed toward a visible goal, suggesting a greater perceptual-motor matching during observation of goal-directed versus non-goal-directed actions. This finding supports an action-reconstruction model of mirror system function rather than the traditional direct-matching model.
Affiliation(s)
- Cédric A Bouquet
- Centre de Recherches sur la Cognition et l'Apprentissage - CNRS UMR 6234, University of Poitiers, France.
35
de Wit M, van der Kamp J, Masters RSW. Delayed pointing movements to masked Müller-Lyer figures are affected by target size but not the illusion. Neuropsychologia 2011; 49:1903-9. [PMID: 21420989 DOI: 10.1016/j.neuropsychologia.2011.03.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2010] [Revised: 03/04/2011] [Accepted: 03/14/2011] [Indexed: 10/18/2022]
Abstract
There is ongoing debate with respect to interpretation of the finding that, in contrast to perceptual size judgments, actions are relatively unaffected by the Müller-Lyer illusion. In normal unrestricted viewing situations observers cannot perform an action directed at an object without simultaneously perceiving the object - this makes it difficult to unequivocally establish whether observed effects are a function of vision for perception, vision for action, a combination of both, or of a single all-purpose visual system. However, there is evidence that observers are capable of performing actions towards objects of which they are not consciously aware, implying that two distinct visual thresholds may exist; one accompanying vision for action and one accompanying vision for perception. To investigate this possibility we created a situation in which visual information was presented below the perception threshold, but above the purported action threshold, allowing examination of action responses independent of contributions from vision for perception. Following a perceptual categorization task, participants performed delayed pointing movements towards briefly exposed masked Müller-Lyer targets of different sizes. When the targets were presented below the perception threshold, participants were unable to discriminate between them, yet their delayed pointing movements were affected by target size (but not the illusion). The results imply that vision for action is functional even after a delay and/or that the pickup of egocentric information is associated with a lower visual threshold than the pickup of allocentric information.
Affiliation(s)
- Matthieu de Wit
- Institute of Human Performance, University of Hong Kong, 111-113 Pokfulam Road, Hong Kong SAR, China.
36
Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011; 51:890-7. [PMID: 21219923 DOI: 10.1016/j.visres.2010.12.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 11/26/2010] [Accepted: 12/22/2010] [Indexed: 11/22/2022]
Abstract
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we tested whether reach targets are updated relative to gaze following different time delays. Reaching endpoints varied systematically as a function of gaze relative to target, irrespective of whether the action was executed immediately or after a delay of 5, 8 or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame when no external cues are present.
37
Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. [DOI: 10.1016/j.neuropsychologia.2010.10.031] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2010] [Revised: 10/18/2010] [Accepted: 10/29/2010] [Indexed: 10/18/2022]
38
Michaels CF. Information, Perception, and Action: What Should Ecological Psychologists Learn From Milner and Goodale (1995)? Ecological Psychology 2010. [DOI: 10.1207/s15326969eco1203_4] [Citation(s) in RCA: 63] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
39
Bimanual movement control is moderated by fixation strategies. Exp Brain Res 2010; 202:837-50. [DOI: 10.1007/s00221-010-2189-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2009] [Accepted: 02/04/2010] [Indexed: 10/19/2022]
40
Zhao JY, Yu G. Adolescents' Cognition of Projectile Motion: A Pilot Study. Percept Mot Skills 2009; 108:349-61. [DOI: 10.2466/pms.108.2.349-361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Previous work on the development of intuitive knowledge about projectile motion has shown, in young children, a dissociation between action knowledge expressed on an action task and conceptual knowledge expressed on a judgment task. The present research investigated the generality of this dissociation in adolescents. On the action task, participants were asked to swing Ball A of a bifilar pendulum to some height and then release it to collide with Ball B, projecting Ball B to hit a target. On the judgment task, participants indicated orally the swing angle at which Ball A should be released so that Ball B would strike a target. Unlike previous findings with adults, the adolescents showed conceptual difficulties on the judgment task but well-developed action knowledge on the action task, which suggests that the dissociation between the two knowledge systems is also present among adolescents. The result further supports the hypothesis that the two knowledge systems follow different developmental trajectories at different speeds.
Affiliation(s)
- Jun-Yan Zhao
- Institute of Psychology, Chinese Academy of Sciences, Graduate School of the Chinese Academy of Sciences
- Guoliang Yu
- Institute of Psychology, Renmin University of China
41
van Doorn H, van der Kamp J, de Wit M, Savelsbergh GJ. Another look at the Müller-Lyer illusion: Different gaze patterns in vision for action and perception. Neuropsychologia 2009; 47:804-12. [DOI: 10.1016/j.neuropsychologia.2008.12.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2008] [Revised: 11/27/2008] [Accepted: 12/03/2008] [Indexed: 10/21/2022]
42

43
Goodale MA, Gonzalez CLR, Króliczak G. Action Rules: Why the Visual Control of Reaching and Grasping is Not Always Influenced by Perceptual Illusions. Perception 2008; 37:355-66. [DOI: 10.1068/p5876] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
It is generally accepted that vision first evolved for the distal control of movement and that perception or ‘representational’ vision emerged much later. Vision-for-action operates in real time and uses egocentric frames of reference and the real metrics of the world. Vision-for-perception can operate over longer time scales and is much more scene-based in its computations. These differences in the timing and metrics of the two systems have been examined in experiments that have looked at the way in which each system deals with visual illusions. Although controversial, the consensus is that actions such as grasping and reaching are often unaffected by high-level pictorial illusions, which by definition affect perception. However, recent experiments have shown that, for actions to escape the effects of such illusions, they must be highly practiced actions, preferably with the right hand, and must be directed in real time at visible targets. This latter finding suggests that some of the critical components of the encapsulated (bottom – up) systems that mediate the visual control of skilled reaching and grasping movements are lateralised to the left hemisphere.
Affiliation(s)
- Melvyn A Goodale
- CIHR Group on Action and Perception, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Claudia L R Gonzalez
- CIHR Group on Action and Perception, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Grzegorz Króliczak
- CIHR Group on Action and Perception, Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
44
Pijpers JR, Oudejans RRD, Bakker FC. Changes in the perception of action possibilities while climbing to fatigue on a climbing wall. J Sports Sci 2007; 25:97-110. [PMID: 17127585 DOI: 10.1080/02640410600630894] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
In two experiments we examined changes in the perception of action possibilities as a function of exertion. In Experiment 1, participants repeatedly climbed on a climbing wall in a series of trials that progressively increased in number to 10 trials, resulting in increased exertion. Before and during climbing, the participants judged their maximum reaching height and perceived exertion. On a separate day, participants climbed another 10 trials while performing actual maximum reaches. Higher perceived exertion was associated with decreases in perceived maximum reach while the actual reaches did not decrease. However, the perceptual changes occurred early during task execution when the participants were not yet fatigued. When exertion set in, neither perceived nor actual maximum reaching appeared to be affected. In Experiment 2, we included exhaustion trials. The findings replicated the early changes in perception observed in Experiment 1, which may be explained by hands-on experience with the task. Furthermore, while climbing to exhaustion, perceptual judgements largely changed in keeping with changes in the actual maximum reach. Thus, there appeared to be a functional relationship between participants' actual action capabilities, rather than their state of physical fatigue per se, and perceived action possibilities.
Affiliation(s)
- J R Pijpers
- Institute for Fundamental and Clinical Human Movement Sciences, Faculty of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands.
45
Zaehle T, Jordan K, Wüstenberg T, Baudewig J, Dechent P, Mast FW. The neural basis of the egocentric and allocentric spatial frame of reference. Brain Res 2007; 1137:92-103. [PMID: 17258693 DOI: 10.1016/j.brainres.2006.12.044] [Citation(s) in RCA: 171] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2006] [Revised: 10/18/2006] [Accepted: 12/10/2006] [Indexed: 11/29/2022]
Abstract
The present study examines the functional and anatomical underpinnings of egocentric and allocentric coding of spatial coordinates. For this purpose, we set up a functional magnetic resonance imaging experiment using verbal descriptions of spatial relations either with respect to the listener (egocentric) or without any body-centered relations (allocentric) to induce the two different spatial coding strategies. We aimed to identify and distinguish the neuroanatomical correlates of egocentric and allocentric spatial coding without any possible influences of visual stimulation. Results from sixteen participants show a general involvement of a bilateral fronto-parietal network associated with spatial information processing. Furthermore, the egocentric and allocentric conditions gave rise to activations in primary visual areas in both hemispheres. Moreover, the data show separate neural circuits mediating the different spatial coding strategies. While egocentric spatial coding mainly recruits the precuneus, allocentric coding of space activates a network comprising the right superior and inferior parietal lobe and the ventrolateral occipito-temporal cortex bilaterally. Furthermore, bilateral hippocampal involvement was observed during allocentric, but not during egocentric, spatial processing. Our results demonstrate that the processing of egocentric spatial relations is mediated by medial superior-posterior areas, whereas allocentric spatial coding requires an additional involvement of the right parietal cortex, the ventral visual stream and the hippocampal formation. These data suggest a hierarchically organized processing system in which egocentric spatial coding requires only a subsystem of the processing resources of the allocentric condition.
Affiliation(s)
- Tino Zaehle
- Department of Psychology, Division Neuropsychology, University of Zurich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland.
46
Affiliation(s)
- Gustav Kuhn
- Department of Psychology, University of Durham, South Road, Durham DH1 3LE, UK.
47
Lavrysen A, Helsen WF, Elliott D, Buekers MJ, Feys P, Heremans E. The type of visual information mediates eye and hand movement bias when aiming to a Müller–Lyer illusion. Exp Brain Res 2006; 174:544-54. [PMID: 16645876 DOI: 10.1007/s00221-006-0484-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2005] [Accepted: 03/31/2006] [Indexed: 11/29/2022]
Abstract
Visual illusions typically bias perception, whereas action towards the illusory stimulus is often unaffected. Recent studies, however, have shown that the type of information available predicts the expression of action bias. In the present cyclical aiming experiment, the type of information (retinal and extra-retinal) was manipulated in order to investigate the differential contributions of different cues to both eye and hand movements. The results showed that a Müller-Lyer illusion caused very similar perturbation effects on hand- and eye-movement amplitudes, and this bias was mediated by the type of information available on-line. Interestingly, the impact of the illusion on goal-directed movement was smaller when information about the figure, but not the hand, was provided for on-line control. Saccadic information did not influence the size of the effect of the Müller-Lyer illusion on hand movements. Furthermore, the illusions did not alter the eye-hand coordination pattern: the timing of saccade termination was strongly linked to hand movement kinematics. The present results are not consistent with current dichotomous models of perception and action, or of movement planning and on-line control. Rather, they suggest that the type of information available for movement planning mediates the size of the illusory effects. Overall, movement planning and control appear to be versatile operations that adapt to the type of information available.
Affiliation(s)
- Ann Lavrysen
- Department of Biomedical Kinesiology, Faculty of Kinesiology and Rehabilitation Sciences, Motor Learning Laboratory, Katholieke Universiteit Leuven, Tervuursevest 101, 3001 Leuven, Belgium.
48
Rolheiser TM, Binsted G, Brownell KJ. Visuomotor representation decay: influence on motor systems. Exp Brain Res 2006; 173:698-707. [PMID: 16676170 DOI: 10.1007/s00221-006-0453-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2005] [Accepted: 03/01/2006] [Indexed: 11/25/2022]
Abstract
The contribution of ventral stream information to the variability of movement has been the focus of much attention, and has provided numerous researchers with conflicting results. These results have been obtained through the use of discrete pointing movements, and as such, do not offer any explanation regarding how ventral stream information contributes to movement variability over time. The present study examined the contribution of ventral stream information to movement variability in three tasks: Hand-only movement, eye-only movement, and an eye-hand coordinated task. Participants performed a continuous reciprocal tapping task to two point-of-light targets for 10 s. The targets were visible for the first 5 s, at which point vision of the targets was removed. Movement variability was similar in all conditions for the initial 5-s interval. The no-vision condition (final 5 s) can be summarized as follows: ventral stream information contributed to an initial significant increase in variability across motor systems, though the different motor systems were able to preserve ventral information integrity differently. The results of these studies can be attributed to the behavioral and cortical networks that underlie the saccadic and manual motor systems.
Affiliation(s)
- Tyler M Rolheiser
- College of Kinesiology, University of Saskatchewan, Saskatoon, Canada.
49
Kudoh N. Dissociation between visual perception of allocentric distance and visually directed walking of its extent. Perception 2006; 34:1399-416. [PMID: 16355744 DOI: 10.1068/p5444] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Walking without vision to previously viewed targets was compared with visual perception of allocentric distance in two experiments. Experimental evidence had shown that physically equal distances in a sagittal plane on the ground are perceptually underestimated compared with those in a frontoparallel plane, even under full-cue conditions. In spite of this perceptual anisotropy of space, Loomis et al. (1992, Journal of Experimental Psychology: Human Perception and Performance, 18, 906-921) found that subjects could match both types of distances in a blind-walking task. In experiment 1 of the present study, subjects were required to reproduce the extent of the allocentric distance between two targets either by walking towards the targets or by walking in a direction incompatible with the locations of the targets. The latter condition required subjects to derive an accurate allocentric distance from information based on the perceived locations of the two targets. The walked distance in the two conditions was almost identical whether the two targets were presented in depth (depth-presentation condition) or in the frontoparallel plane (width-presentation condition). The results of a perceptual-matching task showed that the depth distances had to be much greater than the width distances in order to be judged equal in length (depth compression). In experiment 2, subjects were required to reproduce the extent of the allocentric distance from the viewing point by blindly walking in a direction other than toward the targets. The walked distance in the depth-presentation condition was shorter than that in the width-presentation condition. This anisotropy in motor responses, however, was mainly caused by apparent overestimation of length oriented in width, not by depth compression. In addition, the walked distances were much better scaled than those in experiment 1. These results suggest that the perceptual and motor systems share a common representation of the location of targets, whereas a dissociation in allocentric distance exists between the two systems in full-cue conditions.
Affiliation(s)
- Nobuo Kudoh
- Department of Psychology, Faculty of Humanities, Niigata University, Ikarashi, Niigata 950-2181, Japan.
50
Mendoza J, Hansen S, Glazebrook CM, Keetch KM, Elliott D. Visual illusions affect both movement planning and on-line control: A multiple cue position on bias and goal-directed action. Hum Mov Sci 2005; 24:760-73. [PMID: 16223538 DOI: 10.1016/j.humov.2005.09.002] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Over the last decade, there has been an interest in the impact of visual illusions on the control of action. Much of this work has been motivated by Milner and Goodale's two visual system model of visual processing. This model is based on a hypothesized dissociation between cognitive judgments and the visual control of action. It holds that action is immune to the visual context that provides the basis for the illusion-induced bias associated with cognitive judgments. Recently, Glover has challenged this position and has suggested that movement planning, but not movement execution is susceptible to visual illusions. Research from our lab is inconsistent with both models of visual-motor processing. With respect to the planning and control model, kinematic evidence shows that the impact of an illusion on manual aiming increases as the limb approaches the target. For the Ebbinghaus illusion, this involved a decrease in the time after peak velocity to accommodate the 'perceived' size of the target. For the Müller-Lyer illusion, the influence of the figure's tails increased from peak velocity to the end of the movement. Although our findings contradict a strong version of the two visual systems hypothesis, we did find dissociations between perception and action in another experiment. In this Müller-Lyer study, perceptual decisions were influenced by misjudgment of extent, while action was influenced by misjudgment of target position. Overall, our findings are consistent with the idea that it is often necessary to use visual context to make adjustments to ongoing movements.
Affiliation(s)
- Jocelyn Mendoza
- Department of Kinesiology, McMaster University, 1280 Main St. West, Hamilton ON, Canada L8S 4K1