76. Snow JC, Pettypiece CE, McAdam TD, McLean AD, Stroman PW, Goodale MA, Culham JC. Bringing the real world into the fMRI scanner: repetition effects for pictures versus real objects. Sci Rep 2011; 1:130. PMID: 22355647; PMCID: PMC3216611; DOI: 10.1038/srep00130.
Abstract
Our understanding of the neural underpinnings of perception is largely built upon studies employing 2-dimensional (2D) planar images. Here we used slow event-related functional imaging in humans to examine whether neural populations show a characteristic repetition-related change in haemodynamic response for real-world 3-dimensional (3D) objects, an effect commonly observed with 2D images. As expected, trials involving 2D pictures of objects produced robust repetition effects within classic object-selective cortical regions along the ventral and dorsal visual processing streams. Surprisingly, however, repetition effects were weak, if not absent, on trials involving the 3D objects. These results suggest that the neural mechanisms involved in processing real objects may be distinct from those engaged when we encounter a 2D representation of the same items. These preliminary findings point to the need for further research with ecologically valid stimuli in other imaging designs to broaden our understanding of the neural mechanisms underlying human vision.

77. Buckingham G, Ranger NS, Goodale MA. Handedness, laterality and the size-weight illusion. Cortex 2011; 48:1342-50. PMID: 22019202; DOI: 10.1016/j.cortex.2011.09.007.
Abstract
The goal of this study was to determine how handedness and lifting hand influence the way in which we lift objects and perceive their weights. To this end, we examined the fingertip forces and perceptual judgements of 30 left-handers and 30 right-handers during lifts of specially constructed 'size-weight illusion' (SWI) cubes with their left and right hands. All participants completed a series of lifts first with one hand and then the other, so we could additionally examine asymmetries in the retention and transfer of force information between the limbs. Right-handers experienced a larger illusion with their left hand than they did with their right hand, whereas left-handers showed no such asymmetry in their illusions. The perceptual illusion's independence from the application of fingertip force was highlighted by an unexpected lack of asymmetry in terms of fingertip force scaling. Left- and right-handers showed no dominant hand advantage in this task - they were no more skilled at correcting their fingertip force errors with their preferred hand than they were with their non-preferred hand. In addition, although no asymmetries were observed with regard to the most efficient direction of intermanual transfer, the right-handed individuals transferred force information between the hands more effectively than the left-handers. Overall, these findings indicate that hand dominance does not affect the control of fingertip forces, suggesting that existing models of cerebral laterality must be revisited to consider kinetic (i.e., related to forces), as well as kinematic (i.e., related to movement) variables.

78. Thaler L, Goodale MA. Neural substrates of visual spatial coding and visual feedback control for hand movements in allocentric and target-directed tasks. Front Hum Neurosci 2011; 5:92. PMID: 21941474; PMCID: PMC3171072; DOI: 10.3389/fnhum.2011.00092.
Abstract
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements.

79. Goodale MA. Transforming vision into action. Vision Res 2011; 51:1567-87. PMID: 20691202; DOI: 10.1016/j.visres.2010.07.027.

80. Rossit S, Fraser JA, Teasell R, Malhotra PA, Goodale MA. Impaired delayed but preserved immediate grasping in a neglect patient with parieto-occipital lesions. Neuropsychologia 2011; 49:2498-504. DOI: 10.1016/j.neuropsychologia.2011.04.030.

81. Thaler L, Arnott SR, Goodale MA. Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS One 2011; 6:e20162. PMID: 21633496; PMCID: PMC3102086; DOI: 10.1371/journal.pone.0020162.
Abstract
Background: A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds.
Results: When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets.
Conclusions: These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early and late blind echolocation experts.

82. Gallivan JP, Chapman CS, Wood DK, Milne JL, Ansari D, Culham JC, Goodale MA. One to four, and nothing more: nonconscious parallel individuation of objects during action planning. Psychol Sci 2011; 22:803-11. PMID: 21562312; DOI: 10.1177/0956797611408733.
Abstract
Much of the current understanding about the capacity limits on the number of objects that can be simultaneously processed comes from studies of visual short-term memory, attention, and numerical cognition. Consistent reports suggest that, despite large variability in the perceptual tasks administered (e.g., object tracking, counting), a limit of three to four visual items can be independently processed in parallel. In the research reported here, we asked whether this limit also extends to the domain of action planning. Using a unique rapid visuomotor task and a novel analysis of reach trajectories, we demonstrated an upper limit to the number of targets that can be simultaneously encoded for action, a capacity limit that also turns out to be no more than three to four. Our findings suggest that conscious perceptual processing and nonconscious movement planning are constrained by a common underlying mechanism limited by the number of items that can be simultaneously represented.

83. Thaler L, Goodale MA. Reaction times for allocentric movements are 35 ms slower than reaction times for target-directed movements. Exp Brain Res 2011; 211:313-28. PMID: 21516448; DOI: 10.1007/s00221-011-2691-2.
Abstract
Many movements that people perform every day are directed at visual targets, e.g., when we press an elevator button. However, many other movements are not target-directed, but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing or copying. Here, we show a reaction time difference between these two types of movements in four separate experiments. In Exp. 1, subjects moved their eyes freely and used direct hand movements. In Exp. 2, subjects moved their eyes freely and their movements were tool-mediated (computer mouse). In Exp. 3, subjects fixated a central target and the visual field in which visual information was presented was manipulated. Exp. 4 was identical to Exp. 3 except that visual information about targets disappeared before movement onset. In all four experiments, reaction times in the allocentric task were approximately 35 ms slower than they were in the target-directed task. We suggest that this difference in reaction time between the two tasks reflects the fact that allocentric, but not target-directed, movements recruit the ventral stream, in particular lateral occipital cortex, which increases processing time. We also observed an advantage for movements made in the lower visual field, as measured by movement variability, whether those movements were allocentric or target-directed. This latter result, we argue, reflects the role of the dorsal visual stream in the online control of movements in both kinds of tasks.

84. Strother L, Mathuranath PS, Aldcroft A, Lavell C, Goodale MA, Vilis T. Face inversion reduces the persistence of global form and its neural correlates. PLoS One 2011; 6:e18705. PMID: 21525978; PMCID: PMC3078111; DOI: 10.1371/journal.pone.0018705.
Abstract
Face inversion produces a detrimental effect on face recognition. The extent to which the inversion of faces and other kinds of objects influences the perceptual binding of visual information into global forms is not known. We used a behavioral method and functional MRI (fMRI) to measure the effect of face inversion on visual persistence, a type of perceptual memory that reflects sustained awareness of global form. We found that upright faces persisted longer than inverted versions of the same images; we observed a similar effect of inversion on the persistence of animal stimuli. This effect of inversion on persistence was evident in sustained fMRI activity throughout the ventral visual hierarchy, including the lateral occipital area (LO), two face-selective visual areas (the fusiform face area, FFA, and the occipital face area, OFA), and several early visual areas. V1 showed the same initial fMRI activation to upright and inverted forms, but this activation lasted longer for upright stimuli. The inversion effect on persistence-related fMRI activity in V1 and other retinotopic visual areas demonstrates that higher-tier visual areas influence early visual processing via feedback. This feedback effect on figure-ground processing is sensitive to the orientation of the figure.

85. Westwood DA, Goodale MA. Converging evidence for diverging pathways: Neuropsychology and psychophysics tell the same story. Vision Res 2011; 51:804-11. DOI: 10.1016/j.visres.2010.10.014.

86. Whitwell RL, Striemer CL, Nicolle DA, Goodale MA. Grasping the non-conscious: Preserved grip scaling to unseen objects for immediate but not delayed grasping following a unilateral lesion to primary visual cortex. Vision Res 2011; 51:908-24. DOI: 10.1016/j.visres.2011.02.005.

87. Cate AD, Goodale MA, Köhler S. The role of apparent size in building- and object-specific regions of ventral visual cortex. Brain Res 2011; 1388:109-22. PMID: 21329676; DOI: 10.1016/j.brainres.2011.02.022.
Abstract
Images of buildings and manipulable objects have been found to activate distinct regions in the ventral visual pathway. Yet, many non-categorical properties distinguish buildings from common everyday objects, and perhaps the most salient of these is size. In this fMRI study, we investigated whether or not changes in perceived scale can account for some of the differences in category-specific responses, independent of the influence of semantic or retinotopic image properties. We used independent scans to localize object-specific ROIs in lateral occipital cortex (LO) and scene-specific ROIs in the parahippocampal place area (PPA) and posterior collateral sulcus. We then contrasted the effects of stimulus category and perceived size/distance in these regions in a factorial design. Participants performed an oddball detection task while viewing images of objects, buildings, and planar rectangles both with and without a background that indicated stimulus size/distance via simple pictorial cues. The analyses of fMRI responses showed effects of perceived size/distance in addition to effects of category in LO and the PPA. Interestingly, when simple rectangles were presented in a control condition against the background that indicated size/distance, LO in the right hemisphere responded significantly more to the small/close rectangles than to the large/far ones, in spite of the fact that the rectangles themselves were identical. These findings suggest that ventral stream regions that show category specificity are modulated by the perceived size and distance of visual stimuli.

88. Sedda A, Monaco S, Bottini G, Goodale MA. Integration of visual and auditory information for hand actions: preliminary evidence for the contribution of natural sounds to grasping. Exp Brain Res 2011; 209:365-74. PMID: 21290243; DOI: 10.1007/s00221-011-2559-5.
Abstract
When we reach out to grasp objects, vision plays a major role in the control of our movements. Nevertheless, other sensory modalities contribute to the fine-tuning of our actions. Even olfaction has been shown to play a role in the scaling of movements directed at objects. Much less is known about how auditory information might be used to program grasping movements. The aim of our study was to investigate how the sound of a target object affects the planning of grasping movements in normal right-handed subjects. We performed an experiment in which auditory information could be used to infer the size of targets when the availability of visual information was varied from trial to trial. Classical kinematic parameters (such as grip aperture) were measured to evaluate the influence of auditory information. In addition, optimal-inference modeling was applied to the data. The scaling of grip aperture indicated that the introduction of sound allowed subjects to infer the size of the object when vision was not available. Moreover, auditory information affected grip aperture even when vision was available. Our findings suggest that the differences in the natural impact sounds of objects of different sizes being placed on a surface can be used to plan grasping movements.

89. Thaler L, Goodale MA. The Role of Online Visual Feedback for the Control of Target-Directed and Allocentric Hand Movements. J Neurophysiol 2011; 105:846-59. PMID: 21160005; DOI: 10.1152/jn.00743.2010.
Abstract
Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined such as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested if visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed as compared with the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained based on differences in uncertainty about the movement goal. We conclude that the role played by visual feedback for movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control, which are based entirely on data derived from target-directed paradigms, have to be modified to accommodate performance in allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior.

90. Arnott SR, Heywood CA, Kentridge RW, Goodale MA. Voice recognition and the posterior cingulate: An fMRI study of prosopagnosia. J Neuropsychol 2011; 2:269-86. DOI: 10.1348/174866407x246131.

91. Chapman CS, Gallivan JP, Wood DK, Milne JL, Culham JC, Goodale MA. Short-term motor plasticity revealed in a visuomotor decision-making task. Behav Brain Res 2010; 214:130-4. DOI: 10.1016/j.bbr.2010.05.012.

92.
Abstract
The dorsal visual stream codes information crucial to the planning and online control of target-directed reaching movements in dynamic and cluttered environments. Two specific dorsally mediated abilities are the avoidance of obstacles and the online correction for changes in target location. The current study was designed to test whether or not both of these abilities can be performed concurrently. Participants made reaches to touch a target that, on two-thirds of the trials, remained stationary and on the other third "jumped" at movement onset. Importantly, on target-jump trials, a single object (in one of four positions) sometimes became an obstacle that interfered with the reach. When a target jump caused an object to suddenly become an obstacle, we observed clear spatial avoidance behavior, an effect that was not present when the target jumped but the object did not become an obstacle. This automatic spatial avoidance was accompanied by significant velocity reductions only when the risk for collision with the obstacle was high, suggesting an "intelligent" encoding of potential obstacle locations. We believe that this provides strong evidence that the entire workspace is encoded during reach planning and that the representation of all objects in the workspace is available to the automatic online correction system.

93. Cavina-Pratesi C, Monaco S, Fattori P, Galletti C, McAdam TD, Quinlan DJ, Goodale MA, Culham JC. Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. J Neurosci 2010; 30:10306-23. PMID: 20685975; PMCID: PMC6634677; DOI: 10.1523/jneurosci.2023-10.2010.
Abstract
Picking up a cup requires transporting the arm to the cup (transport component) and preshaping the hand appropriately to grasp the handle (grip component). Here, we used functional magnetic resonance imaging to examine the human neural substrates of the transport component and its relationship with the grip component. Participants were shown three-dimensional objects placed either at a near location, adjacent to the hand, or at a far location, within reach but not adjacent to the hand. Participants performed three tasks at each location as follows: (1) touching the object with the knuckles of the right hand; (2) grasping the object with the right hand; or (3) passively viewing the object. The transport component was manipulated by positioning the object in the far versus the near location. The grip component was manipulated by asking participants to grasp the object versus touching it. For the first time, we have identified the neural substrates of the transport component, which include the superior parieto-occipital cortex and the rostral superior parietal lobule. Consistent with past studies, we found specialization for the grip component in bilateral anterior intraparietal sulcus and left ventral premotor cortex; now, however, we also find activity for the grasp even when no transport is involved. In addition to finding areas specialized for the transport and grip components in parietal cortex, we found an integration of the two components in dorsal premotor cortex and supplementary motor areas, two regions that may be important for the coordination of reach and grasp.

94. Chapman CS, Gallivan JP, Wood DK, Milne JL, Culham JC, Goodale MA. Reaching for the unknown: Multiple target encoding and real-time decision-making in a rapid reach task. Cognition 2010; 116:168-76. DOI: 10.1016/j.cognition.2010.04.008.

95. Buckingham G, Goodale MA. The influence of competing perceptual and motor priors in the context of the size–weight illusion. Exp Brain Res 2010; 205:283-8. DOI: 10.1007/s00221-010-2353-9.

96. Servos P, Jakobson LS, Goodale MA. Near, Far, or In Between?—Target Edges and the Transport Component of Prehension. J Mot Behav 2010; 30:90-3. DOI: 10.1080/00222899809601325.

97. Striemer CL, Yukovsky J, Goodale MA. Can intention override the "automatic pilot"? Exp Brain Res 2010; 202:623-32. PMID: 20135102; DOI: 10.1007/s00221-010-2169-7.
Abstract
Previous research has suggested that the visuomotor system possesses an "automatic pilot" which allows people to make rapid online movement corrections in response to sudden changes in target position. Importantly, the automatic pilot has been shown to operate in the absence of visual awareness, and even under circumstances in which people are explicitly asked not to correct their ongoing movement. In the current study, we investigated the extent to which the automatic pilot could be "disengaged" by explicitly instructing participants to ignore the target jump (i.e., "NO-GO"), by manipulating the order in which the two tasks were completed (i.e., either "GO" or NO-GO first), and by manipulating the proportion of trials in which the target jumped. The results indicated that participants made fewer corrections in response to the target jump when they were asked not to correct their movement (i.e., NO-GO), and when they completed the NO-GO task prior to the task in which they were asked to correct their movement when the target jumped (i.e., the GO task). However, increasing the proportion of jumping targets had only a minimal influence on performance. Critically, participants still made a significant number of unintended corrections (i.e., errors) in the NO-GO tasks, even under explicit instructions not to correct their movement if the target jumped. Overall, these data suggest that, while the automatic pilot can be influenced to some degree by top-down strategies and previous experience, the prepotent response to correct an ongoing movement cannot be completely disengaged.

98. Chouinard PA, Whitwell RL, Goodale MA. The lateral-occipital and the inferior-frontal cortex play different roles during the naming of visually presented objects. Hum Brain Mapp 2010; 30:3851-64. PMID: 19441022; DOI: 10.1002/hbm.20812.
Abstract
We reasoned that if an area is devoted to processing only the visual features of objects, then transcranial magnetic stimulation (TMS) applied to this area in either hemisphere would affect the naming of objects presented in contralateral but not ipsilateral space. In contrast, if an area is involved in language, then one might expect to see effects of TMS when applied over the left but not the right hemisphere, regardless of whether objects are in contralateral or ipsilateral space. Our experiments reveal two important findings. First, TMS delivered to the lateral-occipital complex (LOC), a visual-form area, affected the naming of objects presented in contralateral but not ipsilateral space, independent of which hemisphere was stimulated. In two additional experiments, when participants named the color of objects or made judgments about the size of stimuli as shown physically on a computer screen, TMS over the contralateral LOC did not affect color naming but did affect the participants' ability to make size judgments. Second, TMS delivered to the left but not the right posterior inferior-frontal gyrus (pIFG) affected the naming of objects irrespective of whether objects were presented in contralateral or ipsilateral space. In a separate experiment, when participants were asked to either read or categorize words, TMS over the left but not the right pIFG affected word categorization but not word reading. On the basis of these findings, we propose that when people name visually presented objects, the LOC processes the visual form of objects while the left pIFG processes the semantics of objects.

99. Chapman CS, Goodale MA. Seeing all the obstacles in your way: the effect of visual feedback and visual feedback schedule on obstacle avoidance while reaching. Exp Brain Res 2009; 202:363-75. DOI: 10.1007/s00221-009-2140-7.

100. Monaco S, Króliczak G, Quinlan DJ, Fattori P, Galletti C, Goodale MA, Culham JC. Contribution of visual and proprioceptive information to the precision of reaching movements. Exp Brain Res 2009; 202:15-32. DOI: 10.1007/s00221-009-2106-9.