1. Monaco S, Menghi N, Crawford JD. Action-specific feature processing in the human cortex: An fMRI study. Neuropsychologia 2024; 194:108773. [PMID: 38142960] [DOI: 10.1016/j.neuropsychologia.2023.108773]
Abstract
Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes, and is thought to influence, visual object processing. Yet, while the importance of reentrant feedback is well established in perception, top-down modulation for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction processing (manual alignment vs. grasp) became apparent only after the visual presentation of oriented stimuli, occurred as early as the primary visual cortex, and extended to the dorsal visual stream and motor and premotor areas. Further, dorsal stream area aIPS, known to be involved in object manipulation, and the primary visual cortex showed task-related functional connectivity with frontal, parietal and temporal areas, consistent with the idea that reentrant feedback from dorsal and ventral visual stream areas modifies visual inputs to prepare for action. Importantly, both the task-dependent modulations and connections were linked specifically to the object presentation phase of the task, suggesting a role in processing the action goal. Our results show that intended manual actions have an early, pervasive, and differential influence on the cortical processing of vision.
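The task-related functional connectivity reported above is, at its core, a correlation between regional time courses during a task phase. A minimal generic sketch follows (invented toy values, plain Pearson correlation; not the authors' pipeline):

```python
# Illustrative sketch only: summarize connectivity between a seed region's
# BOLD time course and a target region's time course during one task phase.
# All numbers below are made-up toy values.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy BOLD samples for a seed (e.g., aIPS) and a target (e.g., V1),
# restricted to the object-presentation phase of each trial.
seed_ts   = [0.2, 0.8, 1.1, 0.9, 0.3, -0.1, 0.4, 1.0]
target_ts = [0.1, 0.7, 1.0, 0.8, 0.2,  0.0, 0.3, 0.9]

r = pearson(seed_ts, target_ts)
```

A high `r` during one task phase but not another is the kind of evidence summarized as "task-related functional connectivity" here.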
Affiliation(s)
- Simona Monaco: CIMeC - Center for Mind/Brain Sciences, University of Trento, Rovereto (TN), Italy.
- Nicholas Menghi: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- J Douglas Crawford: Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada; Vision: Science to Applications (VISTA) Program, Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, Ontario M3J 1P3, Canada
2. Noviello S, Kamari Songhorabadi S, Deng Z, Zheng C, Chen J, Pisani A, Franchin E, Pierotti E, Tonolli E, Monaco S, Renoult L, Sperandio I. Temporal features of size constancy for perception and action in a real-world setting: A combined EEG-kinematics study. Neuropsychologia 2024; 193:108746. [PMID: 38081353] [DOI: 10.1016/j.neuropsychologia.2023.108746]
Abstract
A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. In fact, our perceptual and visuo-motor systems exhibit size and grip constancies in order to compensate for the natural shrinkage of the retinal image with increased distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action by using a combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion capture system. We focused on the first positive-going visual evoked component peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing from a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
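The ERP measures described above (latency and amplitude of the first positive component peaking near 90 ms) reduce to a peak search within a post-stimulus window. A toy sketch with an invented single averaged trace at an assumed 100 Hz sampling rate (not the study's 64-channel data):

```python
# Hedged sketch: find the latency and amplitude of the first positive
# visual evoked component in a window around the expected peak.
SAMPLE_RATE_HZ = 100          # assumed toy rate: one sample every 10 ms
WINDOW_MS = (60, 130)         # search window around the expected peak

# Toy averaged ERP trace (microvolts), one value per sample from onset.
erp = [0.0, 0.1, 0.3, 0.5, 0.9, 1.5, 2.2, 3.0, 3.7, 4.1,
       3.8, 3.2, 2.4, 1.6, 0.9, 0.4, 0.1, 0.0, -0.1, 0.0]

def peak_in_window(trace, window_ms, fs):
    """Return (latency_ms, amplitude) of the maximum inside the window."""
    ms_per_sample = 1000.0 / fs
    lo = int(window_ms[0] / ms_per_sample)
    hi = int(window_ms[1] / ms_per_sample)
    segment = trace[lo:hi + 1]
    amp = max(segment)
    latency_ms = (lo + segment.index(amp)) * ms_per_sample
    return latency_ms, amp

latency, amplitude = peak_in_window(erp, WINDOW_MS, SAMPLE_RATE_HZ)
```

Comparing `latency` and `amplitude` across conditions (big vs. small disks of matched retinal size) is the logic behind the earlier-latency/greater-amplitude result reported above.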
Affiliation(s)
- Simona Noviello: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Zhiqing Deng: School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
- Chao Zheng: School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
- Juan Chen: School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China
- Angelo Pisani: Department of Psychology "Renzo Canestrari", University of Bologna, Italy
- Elena Franchin: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Enrica Pierotti: Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Elena Tonolli: Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Simona Monaco: Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Louis Renoult: School of Psychology, University of East Anglia, Norwich, UK
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy.
3. Klein LK, Maiello G, Stubbs K, Proklova D, Chen J, Paulun VC, Culham JC, Fleming RW. Distinct Neural Components of Visually Guided Grasping during Planning and Execution. J Neurosci 2023; 43:8504-8514. [PMID: 37848285] [PMCID: PMC10711727] [DOI: 10.1523/jneurosci.0335-23.2023]
Abstract
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.

Significance Statement: Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and, surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
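Representational Similarity Analysis, used in this study and several others in this list, compares a region's pattern-dissimilarity structure against a model. A minimal generic sketch with toy patterns follows (Pearson dissimilarity and a Pearson model fit for simplicity; published work often uses Spearman rank correlation for the model comparison):

```python
# Toy RSA sketch, not the authors' code: build a representational
# dissimilarity matrix (RDM) from condition-wise voxel patterns, then
# correlate its upper triangle with a model RDM.
import math
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm_upper(patterns):
    """Pairwise dissimilarity (1 - r) for every condition pair."""
    return [1.0 - pearson(patterns[i], patterns[j])
            for i, j in combinations(range(len(patterns)), 2)]

# Invented voxel patterns for three conditions in one region.
neural = [[1.0, 0.2, 0.4, 0.9],   # condition A
          [0.9, 0.3, 0.5, 1.0],   # condition B (similar to A)
          [0.1, 1.2, 0.9, 0.2]]   # condition C (different)

neural_rdm = rdm_upper(neural)

# Model RDM predicting A and B alike, C distinct from both.
model_rdm = [0.0, 1.0, 1.0]

fit = pearson(neural_rdm, model_rdm)
```

A high `fit` indicates the region's pattern geometry matches the model, which is how factors such as grasp axis or object mass are tested against regional activity.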
Affiliation(s)
- Lina K Klein: Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Guido Maiello: School of Psychology, University of Southampton, Southampton SO17 1PS, United Kingdom
- Kevin Stubbs: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Daria Proklova: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Juan Chen: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou 510631, China; Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Guangzhou 510631, China
- Vivian C Paulun: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Jody C Culham: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Roland W Fleming: Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, 35390 Giessen, Germany
4. Threethipthikoon T, Li Z, Shigemasu H. Orientation representation in human visual cortices: contributions of non-visual information and action-related process. Front Psychol 2023; 14:1231109. [PMID: 38106392] [PMCID: PMC10722153] [DOI: 10.3389/fpsyg.2023.1231109]
Abstract
Orientation processing in the human brain plays a crucial role in guiding grasping actions toward an object. Remarkably, even in the absence of visual input, the human visual cortex can still process orientation information. Instead of visual input, non-visual information, including tactile and proprioceptive sensory input from the hand and arm, as well as feedback from action-related processes, may contribute to orientation processing. However, the precise mechanisms by which the visual cortices process orientation information in the context of non-visual sensory input and action-related processes remain to be elucidated. Thus, our study examined orientation representation within the visual cortices by analyzing blood-oxygenation-level-dependent (BOLD) signals under four action conditions: direct grasp (DG), air grasp (AG), non-grasp (NG), and uninformed grasp (UG). Images of the cylindrical object were shown at +45° or −45° orientations, corresponding to those of the real object to be grasped with a whole-hand gesture. Participants judged the object's orientation under all conditions. Grasping was performed without online visual feedback of the hand and object. The purpose of this design was to investigate the visual areas under conditions involving tactile feedback, proprioception, and action-related processes. To address this, multivariate pattern analysis was used to classify the cortical activity patterns of the four action conditions and examine differences in orientation representation. Overall, decoding accuracy significantly above chance was found for DG; during AG, however, only the early visual areas showed significant accuracy, suggesting that the object's tactile feedback influences orientation processing in higher visual areas. NG showed no significant decoding in any area, indicating that without a grasping action, visual input alone does not drive these cortical pattern representations. Interestingly, only the dorsal and ventral divisions of the third visual area (V3d and V3v) showed significant decoding accuracy during UG, despite the absence of visual instructions, suggesting that the orientation representation derives from action-related processes in V3d and from visual recognition of the object in V3v. Thus, the processing of orientation information during non-visually guided grasping relies on non-visual sources and is divided according to the purpose of the action or of recognition.
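The decoding analyses in this and neighboring entries reduce to training a classifier on voxel patterns and asking whether held-out accuracy exceeds chance. A generic toy sketch (a nearest-centroid rule with leave-one-out cross-validation standing in for the SVM-style classifiers typically used; invented patterns):

```python
# Toy MVPA sketch, not any study's pipeline: classify stimulus orientation
# (+45 vs -45 degrees) from voxel activity patterns, then compare accuracy
# to the two-class chance level of 0.5.
import math

def dist(a, b):
    """Euclidean distance between two voxel patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def leave_one_out_accuracy(patterns, labels):
    correct = 0
    for i, test in enumerate(patterns):
        # Distance from the held-out pattern to each class centroid,
        # computed without the held-out trial.
        dists = {}
        for lab in set(labels):
            train = [p for j, p in enumerate(patterns)
                     if j != i and labels[j] == lab]
            centroid = [sum(col) / len(train) for col in zip(*train)]
            dists[lab] = dist(test, centroid)
        guess = min(dists, key=dists.get)
        correct += (guess == labels[i])
    return correct / len(patterns)

# Invented voxel patterns (rows = trials) for the two orientations.
patterns = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],   # +45 deg trials
            [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]   # -45 deg trials
labels = ["+45", "+45", "+45", "-45", "-45", "-45"]

acc = leave_one_out_accuracy(patterns, labels)
chance = 0.5
```

"Significant decoding" in these abstracts means `acc` reliably exceeds `chance` across participants, assessed with appropriate statistics.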
Affiliation(s)
- Zhen Li: Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, Shenzhen, China; Department of Engineering, Shenzhen MSU-BIT University, Shenzhen, China
5. Bola Ł, Vetter P, Wenger M, Amedi A. Decoding Reach Direction in Early "Visual" Cortex of Congenitally Blind Individuals. J Neurosci 2023; 43:7868-7878. [PMID: 37783506] [PMCID: PMC10648511] [DOI: 10.1523/jneurosci.0376-23.2023]
Abstract
Motor actions, such as reaching or grasping, can be decoded from fMRI activity of early visual cortex (EVC) in sighted humans. This effect can depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that the actions of reaching in different directions can be reliably decoded from fMRI activity of EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for EVC to represent action-related information. We also demonstrate that, within EVC of blind humans, the accuracy of reach direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism of aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted, and suggest that the development of action representations in the human brain might be largely independent of visual experience.

Significance Statement: Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete, and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is EVC involved in these activities because of visual imagery? Here, we show robust reach direction representation in EVC of humans born blind. This demonstrates that EVC can represent actions independently of vision and visual imagery. Beyond EVC, we found that reach direction representation in blind humans is strongest in dorsal brain areas, critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.
Affiliation(s)
- Łukasz Bola: Institute of Psychology, Polish Academy of Sciences, Warsaw 00-378, Poland
- Petra Vetter: Visual & Cognitive Neuroscience Lab, Department of Psychology, University of Fribourg, Fribourg 1700, Switzerland
- Mohr Wenger: Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Jerusalem 91120, Israel
- Amir Amedi: Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Jerusalem 91120, Israel; Baruch Ivcher Institute for Brain, Cognition & Technology, Baruch Ivcher School of Psychology, Reichman University, Interdisciplinary Center Herzliya, Herzliya 461010, Israel
6. Chen J, Paciocco JU, Deng Z, Culham JC. Human Neuroimaging Reveals Differences in Activation and Connectivity between Real and Pantomimed Tool Use. J Neurosci 2023; 43:7853-7867. [PMID: 37722847] [PMCID: PMC10648550] [DOI: 10.1523/jneurosci.0068-23.2023]
Abstract
Because the sophistication of tool use is vastly enhanced in humans compared with other species, a rich understanding of its neural substrates requires neuroscientific experiments in humans. Although functional magnetic resonance imaging (fMRI) has enabled many studies of tool-related neural processing, surprisingly few studies have examined real tool use. Rather, because of the many constraints of fMRI, past research has typically used proxies such as pantomiming despite neuropsychological dissociations between pantomimed and real tool use. We compared univariate activation levels, multivariate activation patterns, and functional connectivity when participants used real tools (a plastic knife or fork) to act on a target object (scoring or poking a piece of putty) or pantomimed the same actions with similar movements and timing. During the Execute phase, we found higher activation for real versus pantomimed tool use in sensorimotor regions and the anterior supramarginal gyrus, and higher activation for pantomimed than real tool use in classic tool-selective areas. Although no regions showed significant differences in activation magnitude during the Plan phase, activation patterns differed between real versus pantomimed tool use and motor cortex showed differential functional connectivity. These results reflect important differences between real tool use, a closed-loop process constrained by real consequences, and pantomimed tool use, a symbolic gesture that requires conceptual knowledge of tools but with limited consequences. These results highlight the feasibility and added value of employing natural tool use tasks in functional imaging, inform neuropsychological dissociations, and advance our theoretical understanding of the neural substrates of natural tool use.

Significance Statement: The study of tool use offers unique insights into how the human brain synthesizes perceptual, cognitive, and sensorimotor functions to accomplish a goal. We suggest that the reliance on proxies, such as pantomiming, for real tool use has (1) overestimated the contribution of cognitive networks, because of the indirect, symbolic nature of pantomiming; and (2) underestimated the contribution of sensorimotor networks necessary for predicting and monitoring the consequences of real interactions between hand, tool, and the target object. These results enhance our theoretical understanding of the full range of human tool functions and inform our understanding of neuropsychological dissociations between real and pantomimed tool use.
Affiliation(s)
- Juan Chen: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China; Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Ministry of Education, Guangzhou, Guangdong 510631, China
- Joseph U Paciocco: Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada
- Zhiqing Deng: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China
- Jody C Culham: Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada; Department of Psychology, University of Western Ontario, London, Ontario N6A 5B7, Canada
7. Rens G, Figley TD, Gallivan JP, Liu Y, Culham JC. Grasping with a Twist: Dissociating Action Goals from Motor Actions in Human Frontoparietal Circuits. J Neurosci 2023; 43:5831-5847. [PMID: 37474309] [PMCID: PMC10423047] [DOI: 10.1523/jneurosci.0009-23.2023]
Abstract
In daily life, prehension is typically not the end goal of hand-object interactions but a precursor for manipulation. Nevertheless, functional MRI (fMRI) studies investigating manual manipulation have primarily relied on prehension as the end goal of an action. Here, we used slow event-related fMRI to investigate differences in neural activation patterns between prehension in isolation and prehension for object manipulation. Sixteen (seven males and nine females) participants were instructed either to simply grasp the handle of a rotatable dial (isolated prehension) or to grasp and turn it (prehension for object manipulation). We used representational similarity analysis (RSA) to investigate whether the experimental conditions could be discriminated from each other based on differences in task-related brain activation patterns. We also used temporal multivoxel pattern analysis (tMVPA) to examine the evolution of regional activation patterns over time. Importantly, we were able to differentiate isolated prehension and prehension for manipulation from activation patterns in the early visual cortex, the caudal intraparietal sulcus (cIPS), and the superior parietal lobule (SPL). Our findings indicate that object manipulation extends beyond the putative cortical grasping network (anterior intraparietal sulcus, premotor and motor cortices) to include the superior parietal lobule and early visual cortex.

Significance Statement: A simple act such as turning an oven dial requires not only that the CNS encode the initial state (starting dial orientation) of the object but also the appropriate posture to grasp it to achieve the desired end state (final dial orientation) and the motor commands to achieve that state. Using advanced temporal neuroimaging analysis techniques, we reveal how such actions unfold over time and how they differ between object manipulation (turning a dial) versus grasping alone. We find that a combination of brain areas implicated in visual processing and sensorimotor integration can distinguish between the complex and simple tasks during planning, with neural patterns that approximate those during the actual execution of the action.
Affiliation(s)
- Guy Rens: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada; Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium; Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven 3000, Belgium
- Teresa D Figley: Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Jason P Gallivan: Departments of Psychology & Biomedical and Molecular Sciences, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Yuqi Liu: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057; Institute of Neuroscience, Chinese Academy of Sciences Center for Excellence in Brain Sciences and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jody C Culham: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada; Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
8. Trentin C, Slagter HA, Olivers CNL. Visual working memory representations bias attention more when they are the target of an action plan. Cognition 2023; 230:105274. [PMID: 36113256] [DOI: 10.1016/j.cognition.2022.105274]
Abstract
Attention has frequently been regarded as an emergent property of linking sensory representations to action plans. It has recently been proposed that similar mechanisms may operate within visual working memory (VWM), such that linking an object in VWM to an action plan strengthens its sensory memory representation, which then expresses as an attentional bias. Here we directly tested this hypothesis by comparing attentional biases induced by VWM representations which were the target of a future action, to those induced by VWM representations that were equally task-relevant, but not the direct target of action. We predicted that the first condition would result in a more prioritized memory state and hence stronger attentional biases. Specifically, participants memorized a geometric shape for a subsequent memory test. At test, in case of a match, participants either had to perform a grip movement on the matching object (action condition), or perform the same movement, but on an unrelated object (control condition). To assess any attentional biases, during the delay period between memorandum and test, participants performed a visual selection task in which either the target was surrounded by the memorized shape (congruent trials) or a distractor (incongruent trials). Eye movements were measured as a proxy for attentional priority. We found a significant interaction for saccade latencies between action condition and shape congruency, reflecting more pronounced VWM-based attentional biases in the action condition. Our results are consistent with the idea that action plans prioritize sensory representations in VWM.
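The reported interaction between action condition and shape congruency can be read as a difference of congruency effects on saccade latency. A toy illustration with invented latencies (not the study's data):

```python
# Invented cell means (ms) for the 2 x 2 design: the congruency effect
# (incongruent minus congruent) is predicted to be larger when the
# memorized shape is the target of an action plan.
latency_ms = {
    ("action",  "congruent"):   210.0,
    ("action",  "incongruent"): 245.0,
    ("control", "congruent"):   215.0,
    ("control", "incongruent"): 228.0,
}

def congruency_effect(cond):
    """Attentional-bias index: latency cost of an incongruent display."""
    return latency_ms[(cond, "incongruent")] - latency_ms[(cond, "congruent")]

# A positive interaction term is the pattern the abstract reports.
interaction = congruency_effect("action") - congruency_effect("control")
```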
Affiliation(s)
- Caterina Trentin: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands.
- Heleen A Slagter: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands
- Christian N L Olivers: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands
9. Sartin S, Ranzini M, Scarpazza C, Monaco S. Cortical areas involved in grasping and reaching actions with and without visual information: An ALE meta-analysis of neuroimaging studies. Curr Res Neurobiol 2022; 4:100070. [PMID: 36632448] [PMCID: PMC9826890] [DOI: 10.1016/j.crneur.2022.100070]
Abstract
The functional specialization of the ventral stream in Perception and the dorsal stream in Action is the cornerstone of the leading model proposed by Goodale and Milner in 1992. This model is based on neuropsychological evidence and has been a matter of debate for almost three decades, during which the dual-visual stream hypothesis has received much attention, including support and criticism. The advent of functional magnetic resonance imaging (fMRI) has allowed investigating the brain areas involved in Perception and Action, and provided useful data on the functional specialization of the two streams. Research on this topic has been quite prolific, yet no meta-analysis so far has explored the spatial convergence in the involvement of the two streams in Action. The present meta-analysis (N = 53 fMRI and PET studies) was designed to reveal the specific neural activations associated with Action (i.e., grasping and reaching movements), and the extent to which visual information affects the involvement of the two streams during motor control. Our results provide a comprehensive view of the consistent and spatially convergent neural correlates of Action based on neuroimaging studies conducted over the past two decades. In particular, occipital-temporal areas showed higher activation likelihood in the Vision compared to the No vision condition, but no difference between reach and grasp actions. Frontal-parietal areas were consistently involved in both reach and grasp actions regardless of visual availability. We discuss our results in light of the well-established dual-visual stream model and frame these findings in the context of recent discoveries obtained with advanced fMRI methods, such as multivoxel pattern analysis.
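ALE, the method behind this meta-analysis, scores each location by the union of Gaussian "modeled activation" probabilities contributed by the reported foci. A toy 1-D sketch (invented foci and an arbitrary peak scaling; real ALE operates on 3-D brain masks with sample-size-dependent kernels and permutation-based thresholds):

```python
# Toy 1-D activation likelihood estimation (ALE) sketch, not the authors'
# analysis: each focus contributes a Gaussian modeled-activation (MA) map,
# and the ALE score is the probabilistic union 1 - prod(1 - MA_i).
import math

def gaussian_ma(x, focus, sigma=2.0):
    """Modeled activation of one focus at position x (peak scaled below 1)."""
    return 0.9 * math.exp(-((x - focus) ** 2) / (2 * sigma ** 2))

def ale(x, foci, sigma=2.0):
    prod = 1.0
    for f in foci:
        prod *= 1.0 - gaussian_ma(x, f, sigma)
    return 1.0 - prod

# Foci from three hypothetical studies clustering near position 10.
foci = [9.5, 10.0, 10.5]

score_at_cluster = ale(10.0, foci)   # where studies converge
score_far_away = ale(30.0, foci)     # where they do not
```

Spatial convergence across studies, the quantity this meta-analysis tests, shows up as high ALE scores where foci cluster and near-zero scores elsewhere.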
Affiliation(s)
- Samantha Sartin: CIMeC - Center for Mind/Brain Sciences, University of Trento, Italy
- Cristina Scarpazza: Department of General Psychology, University of Padua, Italy; IRCCS San Camillo Hospital, Venice, Italy
- Simona Monaco: CIMeC - Center for Mind/Brain Sciences, University of Trento, Italy. Corresponding author: CIMeC - Center for Mind/Brain Sciences, University of Trento, Via delle Regole 101, 38123 Trento, Italy.
10. Velji-Ibrahim J, Crawford JD, Cattaneo L, Monaco S. Action planning modulates the representation of object features in human fronto-parietal and occipital cortex. Eur J Neurosci 2022; 56:4803-4818. [PMID: 35841138] [PMCID: PMC9545676] [DOI: 10.1111/ejn.15776]
Abstract
The visual cortex has been extensively studied to investigate its role in object recognition but to a lesser degree to determine how action planning influences the representation of objects' features. We used functional MRI and pattern classification methods to determine if during action planning, object features (orientation and location) could be decoded in an action‐dependent way. Sixteen human participants used their right dominant hand to perform movements (Align or Open reach) towards one of two 3D‐real oriented objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming towards target location, Align but not Open reach movements required participants to precisely adjust hand orientation. Therefore, we hypothesized that if the representation of object features is modulated by the upcoming action, pre‐movement activity pattern would allow more accurate dissociation between object features in Align than Open reach tasks. We found such dissociation in the anterior and posterior parietal cortex, as well as in the dorsal premotor cortex, suggesting that visuomotor processing is modulated by the upcoming task. The early visual cortex showed significant decoding accuracy for the dissociation between object features in the Align but not Open reach task. However, there was no significant difference between the decoding accuracy in the two tasks. These results demonstrate that movement‐specific preparatory signals modulate object representation in the frontal and parietal cortex, and to a lesser extent in the early visual cortex, likely through feedback functional connections.
Affiliation(s)
- Jena Velji-Ibrahim
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy; Center for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Toronto, Ontario, Canada
- J Douglas Crawford
- Center for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Toronto, Ontario, Canada; Departments of Biology and Psychology, York University, Toronto, Ontario, Canada
- Luigi Cattaneo
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Simona Monaco
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
11
Alipour A, Beggs JM, Brown JW, James TW. A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022; 16:149-165. [PMID: 35126775] [PMCID: PMC8807798] [DOI: 10.1007/s11571-021-09703-z]
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without. Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the need for longer memory. The same set of tasks, modeled using modified leaky-integrator echo state recurrent networks (LiESNs), however, did not replicate the results except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. Supplementary information: The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
Affiliation(s)
- Abolfazl Alipour
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- John M Beggs
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Department of Physics, Indiana University, Bloomington, IN, USA
- Joshua W Brown
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
12
The contributions of the ventral and the dorsal visual streams to the automatic processing of action relations of familiar and unfamiliar object pairs. Neuroimage 2021. [DOI: 10.1016/j.neuroimage.2021.118629]
13
Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasp Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. [PMID: 34551938] [PMCID: PMC8570828] [DOI: 10.1523/jneurosci.0992-21.2021]
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate and to lower levels, given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct participants about which effector to use to grasp, either the right or the left hand. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, we found that intermediate representations, which only partially distinguished between effectors, arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control. SIGNIFICANCE STATEMENT A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute a unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially noncanonical progression from higher to lower and then to intermediate-level grasp control.
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
14
Knights E, Mansfield C, Tonin D, Saada J, Smith FW, Rossit S. Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions. J Neurosci 2021; 41:5263-5273. [PMID: 33972399] [PMCID: PMC8211542] [DOI: 10.1523/jneurosci.0083-21.2021]
Abstract
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand- than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tools vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the primary tool of the brain for interacting with the world.
Affiliation(s)
- Ethan Knights
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
- Courtney Mansfield
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Diana Tonin
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Janak Saada
- Department of Radiology, Norfolk and Norwich University Hospitals NHS Foundation Trust, Norwich NR4 7UY, United Kingdom
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Stéphanie Rossit
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
15
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583] [PMCID: PMC10149139] [DOI: 10.1016/j.tics.2021.02.008]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada
16
Boettcher SEP, Gresch D, Nobre AC, van Ede F. Output planning at the input stage in visual working memory. Sci Adv 2021; 7:eabe8212. [PMID: 33762341] [PMCID: PMC7990334] [DOI: 10.1126/sciadv.abe8212]
Abstract
Working memory serves as the buffer between past sensations and future behavior, making it vital to understand not only how we encode and retain sensory information in memory but also how we plan for its upcoming use. We ask when prospective action goals emerge alongside the encoding and retention of visual information in working memory. We show that prospective action plans do not emerge gradually during memory delays but are brought into memory early, in tandem with sensory encoding. This action encoding (i) precedes a second stage of action preparation that adapts to the time of expected memory utilization, (ii) occurs even ahead of an intervening motor task, and (iii) predicts visual memory-guided behavior several seconds later. By bringing prospective action plans into working memory at an early stage, the brain creates a dual (visual-motor) memory code that can make memories more effective and robust for serving ensuing behavior.
Affiliation(s)
- Sage E P Boettcher
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Daniela Gresch
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Department of Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Anna C Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Netherlands
17
Decoding motor imagery and action planning in the early visual cortex: Overlapping but distinct neural mechanisms. Neuroimage 2020; 218:116981. [DOI: 10.1016/j.neuroimage.2020.116981]
18
Abstract
Working memory bridges perception to action over extended delays, enabling flexible goal-directed behaviour. To date, studies of visual working memory – concerned with detailed visual representations such as shape and colour – have considered visual memory predominantly in the context of visual task demands, such as visual identification and search. Another key purpose of visual working memory is to directly inform and guide upcoming actions. Taking this as a starting point, I review emerging evidence for the pervasive bi-directional links between visual working memory and (planned) action, and discuss these links from the perspective of their common goal of enabling flexible and precise behaviour.
Affiliation(s)
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
19
Parietal Cortex Integrates Saccade and Object Orientation Signals to Update Grasp Plans. J Neurosci 2020; 40:4525-4535. [PMID: 32354854] [DOI: 10.1523/jneurosci.0300-20.2020]
Abstract
Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance imaging paradigm in which saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same/different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex in the integration of visuospatial perturbations, and provide specific cortical modules for the integration of oculomotor and visual signals for grasp updating. SIGNIFICANCE STATEMENT How does the brain simultaneously compensate for both external and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location? Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional apraxia.
20
Rao N, Parikh PJ. Fluctuations in Human Corticospinal Activity Prior to Grasp. Front Syst Neurosci 2019; 13:77. [PMID: 31920572] [PMCID: PMC6933951] [DOI: 10.3389/fnsys.2019.00077]
Abstract
Neuronal firing rate variability prior to movement onset contributes to trial-to-trial variability in primate behavior. However, in humans, whether similar mechanisms contribute to trial-to-trial behavioral variability remains unknown. We investigated the time-course of trial-to-trial variability in corticospinal excitability (CSE) using transcranial magnetic stimulation (TMS) during a self-paced reach-to-grasp task. We hypothesized that CSE variability would be modulated prior to the initiation of reach and that such a modulation would explain trial-to-trial behavioral variability. Able-bodied individuals were visually cued to plan their grip force before exertion of either 30% or 5% of their maximum pinch force capacity on an object. TMS was delivered at six time points (0.5, 0.75, 1, 1.1, 1.2, and 1.3 s) following a visual cue that instructed the force level. We first modeled the relation between CSE magnitude and its variability at rest (n = 12) to study the component of CSE variability pertaining to the task but not related to changes in CSE magnitude (n = 12). We found an increase in CSE variability from 1.2 to 1.3 s following the visual cue at 30% but not at 5% of force. This effect was temporally dissociated from the decrease in CSE magnitude that was observed from 0.5 to 0.75 s following the cue. Importantly, the increase in CSE variability explained at least ∼40% of inter-individual differences in trial-to-trial variability in time to peak force rate. These results were found to be repeatable across studies and robust to different analysis methods. Our findings suggest that the neural mechanisms underlying modulation of CSE variability and CSE magnitude are distinct. Notably, the extent of modulation of variability in the corticospinal system prior to grasp within individuals may explain their trial-to-trial behavioral variability.
Affiliation(s)
- Pranav J. Parikh
- Center for Neuromotor and Biomechanics Research, Department of Health and Human Performance, University of Houston, Houston, TX, United States
21
Multivariate Analysis of Electrophysiological Signals Reveals the Temporal Properties of Visuomotor Computations for Precision Grips. J Neurosci 2019; 39:9585-9597. [PMID: 31628180] [DOI: 10.1523/jneurosci.0914-19.2019]
Abstract
The frontoparietal networks underlying grasping movements have been extensively studied, especially using fMRI. Accordingly, whereas much is known about their cortical locus, much less is known about the temporal dynamics of visuomotor transformations. Here, we show that multivariate EEG analysis allows for detailed insights into the time course of visual and visuomotor computations of precision grasps. Male and female human participants first previewed one of several objects and, upon its reappearance, reached to grasp it with the thumb and index finger along one of its two symmetry axes. Object shape classifiers reached transient accuracies of 70% at ∼105 ms, especially based on scalp sites over visual cortex, dropping to lower levels thereafter. Grasp orientation classifiers relied on a system of occipital-to-frontal electrodes. Their accuracy rose concurrently with shape classification but ramped up more gradually, and the slope of the classification curve predicted individual reaction times. Further, cross-temporal generalization revealed that dynamic shape representation involved early and late neural generators that reactivated one another. In contrast, grasp computations involved a chain of generators attaining a sustained state about 100 ms before movement onset. Our results reveal the progression of visual and visuomotor representations over the course of planning and executing grasp movements. SIGNIFICANCE STATEMENT Grasping an object requires the brain to perform visual-to-motor transformations of the object's properties. Although much of the neuroanatomic basis of visuomotor transformations has been uncovered, little is known about its time course. Here, we orthogonally manipulated object visual characteristics and grasp orientation, and used multivariate EEG analysis to reveal that visual and visuomotor computations follow similar time courses but display different properties and dynamics.
22
Blouin J, Saradjian AH, Pialasse JP, Manson GA, Mouchnino L, Simoneau M. Two Neural Circuits to Point Towards Home Position After Passive Body Displacements. Front Neural Circuits 2019; 13:70. [PMID: 31736717] [PMCID: PMC6831616] [DOI: 10.3389/fncir.2019.00070]
Abstract
A challenge in motor control research is to understand the mechanisms underlying the transformation of sensory information into arm motor commands. Here, we investigated these transformation mechanisms for movements whose targets were defined by information issued from body rotations in the dark (i.e., idiothetic information). Immediately after being rotated, participants reproduced the amplitude of their perceived rotation using their arm (Experiment 1). The cortical activation during movement planning was analyzed using electroencephalography and source analyses. Task-related activities were found in regions of interest (ROIs) located in the prefrontal cortex (PFC), dorsal premotor cortex, dorsal region of the anterior cingulate cortex (ACC) and the sensorimotor cortex. Importantly, critical regions for the cognitive encoding of space did not show significant task-related activities. These results suggest that arm movements were planned using a sensorimotor type of spatial representation. However, when an 8 s delay was introduced between body rotation and the arm movement (Experiment 2), we found that areas involved in the cognitive encoding of space [e.g., ventral premotor cortex (vPM), rostral ACC, inferior and superior posterior parietal cortex (PPC)] showed task-related activities. Overall, our results suggest that the use of a cognitive type of representation for planning arm movement after body motion is necessary when relevant spatial information must be stored before triggering the movement.
Affiliation(s)
- Jean Blouin
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Anahid H Saradjian
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Gerome A Manson
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France; Centre for Motor Control, University of Toronto, Toronto, ON, Canada
- Laurence Mouchnino
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Martin Simoneau
- Faculté de Médecine, Département de Kinésiologie, Université Laval, Québec, QC, Canada; Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
23
Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation. Brain Struct Funct 2019; 224:3291-3308. [PMID: 31673774] [DOI: 10.1007/s00429-019-01970-1]
Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
24
Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. [DOI: 10.1016/j.neuroimage.2019.04.074]
25
Styrkowiec PP, Nowik AM, Króliczak G. The neural underpinnings of haptically guided functional grasping of tools: An fMRI study. Neuroimage 2019; 194:149-162. [DOI: 10.1016/j.neuroimage.2019.03.043]
26
Lavoie EB, Valevicius AM, Boser QA, Kovic O, Vette AH, Pilarski PM, Hebert JS, Chapman CS. Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks. J Vis 2018; 18:18. [DOI: 10.1167/18.6.18]
Affiliation(s)
- Ewen B. Lavoie: Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Aïda M. Valevicius: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Quinn A. Boser: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Ognjen Kovic: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Albert H. Vette: Department of Biomedical Engineering, University of Alberta; Department of Mechanical Engineering, University of Alberta; Glenrose Rehabilitation Hospital, Alberta Health Services; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Patrick M. Pilarski: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline S. Hebert: Department of Biomedical Engineering, University of Alberta; Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta; Glenrose Rehabilitation Hospital, Alberta Health Services; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Craig S. Chapman: Faculty of Kinesiology, Sport, and Recreation, University of Alberta; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
27.
Two visual pathways – Where have they taken us and where will they lead in future? Cortex 2018; 98:283-292. [DOI: 10.1016/j.cortex.2017.12.002]
28.
Freud E, Macdonald SN, Chen J, Quinlan DJ, Goodale MA, Culham JC. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations. Cortex 2018; 98:34-48. [DOI: 10.1016/j.cortex.2017.02.020]
29.
Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018; 98:228-248. [DOI: 10.1016/j.cortex.2017.05.014]
30.
Ariani G, Oosterhof NN, Lingnau A. Time-resolved decoding of planned delayed and immediate prehension movements. Cortex 2017; 99:330-345. [PMID: 29334647] [DOI: 10.1016/j.cortex.2017.12.007]
Abstract
Different contexts require us either to react immediately, or to delay (or suppress) a planned movement. Previous studies that aimed at decoding movement plans typically dissociated movement preparation and execution by means of delayed-movement paradigms. Here we asked whether these results can be generalized to the planning and execution of immediate movements. To directly compare delayed, non-delayed, and suppressed reaching and grasping movements, we used a slow event-related functional magnetic resonance imaging (fMRI) design. To examine how neural representations evolved throughout movement planning, execution, and suppression, we performed time-resolved multivariate pattern analysis (MVPA). During the planning phase, we were able to decode upcoming reaching and grasping movements in contralateral parietal and premotor areas. During the execution phase, we were able to decode movements in a widespread bilateral network of motor, premotor, and somatosensory areas. Moreover, we obtained significant decoding across delayed and non-delayed movement plans in contralateral primary motor cortex. Our results demonstrate the feasibility of time-resolved MVPA and provide new insights into the dynamics of the prehension network, suggesting early neural representations of movement plans in the primary motor cortex that are shared between delayed and non-delayed contexts.
Affiliation(s)
- Giacomo Ariani: Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Angelika Lingnau: Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology & Cognitive Science, University of Trento, Italy; Department of Psychology, Royal Holloway University of London, United Kingdom
31.
Recruitment of Foveal Retinotopic Cortex During Haptic Exploration of Shapes and Actions in the Dark. J Neurosci 2017; 37:11572-11591. [PMID: 29066555] [DOI: 10.1523/jneurosci.2428-16.2017]
Abstract
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. 
These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay.
SIGNIFICANCE STATEMENT: Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits the high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it.
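The psychophysiological interaction (PPI) analysis named in this abstract tests whether seed-to-region coupling changes with task: the key regressor is the element-wise product of the mean-centered task contrast and the seed region's timecourse. A minimal sketch with made-up numbers (real PPI analyses work on deconvolved fMRI signals inside a GLM):

```python
# Hedged sketch of building a PPI interaction regressor; all data below
# are illustrative, not values from the study.

def ppi_regressor(task, seed):
    """Element-wise product of the mean-centered task and seed series."""
    mt = sum(task) / len(task)
    ms = sum(seed) / len(seed)
    return [(t - mt) * (s - ms) for t, s in zip(task, seed)]

task = [0, 0, 1, 1, 0, 0, 1, 1]                   # haptic (1) vs visual (0)
seed = [0.1, 0.2, 0.9, 1.1, 0.0, 0.1, 1.0, 0.8]   # occipital-pole seed signal
interaction = ppi_regressor(task, seed)
# Entering `interaction` alongside `task` and `seed` in a GLM tests for
# task-dependent (here: haptic > visual) coupling with the seed region.
```

A significant weight on the interaction term in a target region is what supports claims like "the OP is more strongly connected with aIPS during haptic than visual exploration."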
32.
Milner AD. How do the two visual streams interact with each other? Exp Brain Res 2017; 235:1297-1308. [PMID: 28255843] [PMCID: PMC5380689] [DOI: 10.1007/s00221-017-4917-4]
Abstract
The current consensus divides primate cortical visual processing into two broad networks or "streams" composed of highly interconnected areas (Milner and Goodale 2006, 2008; Goodale 2014). The ventral stream, passing from primary visual cortex (V1) through to inferior parts of the temporal lobe, is considered to mediate the transformation of the contents of the visual signal into the mental furniture that guides memory, recognition and conscious perception. In contrast, the dorsal stream, passing from V1 through to various areas in the posterior parietal lobe, is generally considered to mediate the visual guidance of action, primarily in real time. The brain, however, does not work through mutually insulated subsystems, and indeed there are well-documented interconnections between the two streams. Evidence for contributions from ventral stream systems to the dorsal stream comes from human neuropsychological and neuroimaging research, and indicates a crucial role in mediating complex and flexible visuomotor skills. Complementary evidence points to a role for posterior dorsal-stream visual analysis in certain aspects of 3-D perceptual function in the ventral stream. A series of studies of a patient with visual form agnosia has been instrumental in shaping our knowledge of what each stream can achieve in isolation; but it has also helped us to tease apart the relative dependence of parietal visuomotor systems on direct bottom-up visual inputs versus inputs redirected via perceptual systems within the ventral stream.
Affiliation(s)
- A D Milner: Department of Psychology, Science Laboratories, Durham University, South Road, Durham, DH1 3LE, UK
33.
Desmarais G, Lane B, LeBlanc KA, Hiltz J, Richards ED. What's in a name? The influence of verbal labels on action production in novel object/action associations. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1308451]
Affiliation(s)
- Breanna Lane: Department of Psychology, Memorial University of Newfoundland, St. John's, Canada
- Kevin A. LeBlanc: Department of Psychology and Neuroscience, Dalhousie University, Halifax, Canada
- Justin Hiltz: Department of Psychology, Mount Allison University, Sackville, Canada
- Eric D. Richards: Department of Psychology, Mount Allison University, Sackville, Canada
34.
Borra E, Gerbella M, Rozzi S, Luppino G. The macaque lateral grasping network: A neural substrate for generating purposeful hand actions. Neurosci Biobehav Rev 2017; 75:65-90. [DOI: 10.1016/j.neubiorev.2017.01.017]
35.
Sayegh PF, Gorbet DJ, Hawkins KM, Hoffman KL, Sergio LE. The Contribution of Different Cortical Regions to the Control of Spatially Decoupled Eye-Hand Coordination. J Cogn Neurosci 2017; 29:1194-1211. [PMID: 28253075] [DOI: 10.1162/jocn_a_01111]
Abstract
Our brain's ability to flexibly control the communication between the eyes and the hand allows for our successful interaction with the objects located within our environment. This flexibility has been observed in the pattern of neural responses within key regions of the frontoparietal reach network. More specifically, our group has shown how single-unit and oscillatory activity within the dorsal premotor cortex (PMd) and the superior parietal lobule (SPL) change contingent on the level of visuomotor compatibility between the eyes and hand. Reaches that involve a coupling between the eyes and hand toward a common spatial target display a pattern of neural responses that differs from reaches that require eye-hand decoupling. Although previous studies examined the altered spiking and oscillatory activity that occurs during different types of eye-hand compatibilities, they did not address how each of these measures of neurological activity interacts with one another. Thus, in an effort to fully characterize the relationship between oscillatory and single-unit activity during different types of eye-hand coordination, we measured the spike-field coherence (SFC) within regions of macaque SPL and PMd. We observed stronger SFC within PMdr and superficial regions of SPL (areas 5/PEc) during decoupled reaches, whereas PMdc and regions within SPL surrounding the medial intraparietal sulcus had stronger SFC during coupled reaches. These results were supported by a meta-analysis of human fMRI data. Our results support the proposal of altered cortical control during complex eye-hand coordination and highlight the necessity to account for the different eye-hand compatibilities in motor control research.
Affiliation(s)
- Diana J Gorbet: York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
- Kari L Hoffman: York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
- Lauren E Sergio: York University, Toronto, Ontario, Canada; Canadian Action and Perception Network, Toronto, Ontario, Canada
36.
Janssen P, Verhoef BE, Premereur E. Functional interactions between the macaque dorsal and ventral visual pathways during three-dimensional object vision. Cortex 2017; 98:218-227. [PMID: 28258716] [DOI: 10.1016/j.cortex.2017.01.021]
Abstract
The division of labor between the dorsal and the ventral visual stream in the primate brain has inspired numerous studies on the visual system in humans and in nonhuman primates. However, how and under which circumstances the two visual streams interact is still poorly understood. Here we review evidence from anatomy, modelling, electrophysiology, electrical microstimulation (EM), reversible inactivation and functional imaging in the macaque monkey aimed at clarifying at which levels in the hierarchy of visual areas the two streams interact, and what type of information might be exchanged between the two streams during three-dimensional (3D) object viewing. Neurons in both streams encode 3D structure from binocular disparity, synchronized activity between parietal and inferotemporal areas is present during 3D structure categorization, and clusters of 3D structure-selective neurons in parietal cortex are anatomically connected to ventral stream areas. In addition, caudal intraparietal cortex exerts a causal influence on 3D-structure related activations in more anterior parietal cortex and in inferotemporal cortex. Thus, both anatomical and functional evidence indicates that the dorsal and the ventral visual stream interact during 3D object viewing.
Affiliation(s)
- Peter Janssen: Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven, Belgium
- Bram-Ernst Verhoef: Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven, Belgium; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Elsie Premereur: Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven, Belgium
37.
Semantic and pragmatic integration in vision for action. Conscious Cogn 2017; 48:40-54. [DOI: 10.1016/j.concog.2016.10.009]
38.
Kastner S, Chen Q, Jeong SK, Mruczek REB. A brief comparative review of primate posterior parietal cortex: A novel hypothesis on the human toolmaker. Neuropsychologia 2017; 105:123-134. [PMID: 28159617] [DOI: 10.1016/j.neuropsychologia.2017.01.034]
Abstract
The primate visual system contains two major cortical pathways: a ventral-temporal pathway that has been associated with object processing and recognition, and a dorsal-parietal pathway that has been associated with spatial processing and action guidance. Our understanding of the role of the dorsal pathway, in particular, has greatly evolved within the framework of the two-pathway hypothesis since its original conception. Here, we present a comparative review of the primate dorsal pathway in humans and monkeys based on electrophysiological, neuroimaging, neuropsychological, and neuroanatomical studies. We consider similarities and differences across species in terms of the topographic representation of visual space; specificity for eye, reaching, or grasping movements; multi-modal response properties; and the representation of objects and tools. We also review the relative anatomical location of functionally- and topographically-defined regions of the posterior parietal cortex. An emerging theme from this comparative analysis is that non-spatial information is represented to a greater degree, and with increased complexity, in the human dorsal visual system. We propose that non-spatial information in the primate parietal cortex contributes to the perception-to-action system aimed at manipulating objects in peripersonal space. In humans, this network has expanded in multiple ways, including the development of a dorsal object vision system mirroring the complexity of the ventral stream, the integration of object information with parietal working memory systems, and the emergence of tool-specific object representations in the anterior intraparietal sulcus and regions of the inferior parietal lobe. We propose that these evolutionary changes have enabled the emergence of human-specific behaviors, such as the sophisticated use of tools.
Affiliation(s)
- S Kastner: Department of Psychology, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Q Chen: Department of Psychology, USA; School of Psychology, South China Normal University, Guangzhou 510631, China
- S K Jeong: Department of Psychology, USA; Korea Brain Research Institute, Daegu, South Korea
- R E B Mruczek: Department of Psychology, Worcester State University, Worcester, MA 01520, USA
39.
Studying the neural bases of prism adaptation using fMRI: A technical and design challenge. Behav Res Methods 2016; 49:2031-2043. [DOI: 10.3758/s13428-016-0840-z]
40.
Preferential coding of eye/hand motor actions in the human ventral occipito-temporal cortex. Neuropsychologia 2016; 93:116-127. [DOI: 10.1016/j.neuropsychologia.2016.10.009]
41.
Ajina S, Bridge H. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System. Neuroscientist 2016; 23:529-541. [PMID: 27777337] [DOI: 10.1177/1073858416673817]
Abstract
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.
Affiliation(s)
- Sara Ajina: Oxford Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Holly Bridge: Oxford Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
42.
Cappadocia DC, Monaco S, Chen Y, Blohm G, Crawford JD. Temporal Evolution of Target Representation, Movement Direction Planning, and Reach Execution in Occipital–Parietal–Frontal Cortex: An fMRI Study. Cereb Cortex 2016; 27:5242-5260. [DOI: 10.1093/cercor/bhw304]
43.
Hesse C, Miller L, Buckingham G. Visual information about object size and object position are retained differently in the visual brain: Evidence from grasping studies. Neuropsychologia 2016; 91:531-543. [PMID: 27663865] [DOI: 10.1016/j.neuropsychologia.2016.09.016]
Abstract
Many experiments have examined how the visual information used for action control is represented in our brain, and whether or not visually-guided and memory-guided hand movements rely on dissociable visual representations that are processed in different brain areas (dorsal vs. ventral). However, little is known about how these representations decay over longer time periods and whether or not different visual properties are retained in a similar fashion. In three experiments we investigated how information about object size and object position affect grasping as visual memory demands increase. We found that position information decayed rapidly with increasing delays between viewing the object and initiating subsequent actions - impacting both the accuracy of the transport component (lower end-point accuracy) and the grasp component (larger grip apertures) of the movement. In contrast, grip apertures and fingertip forces remained well-adjusted to target size in conditions in which positional information was either irrelevant or provided, regardless of delay, indicating that object size is encoded in a more stable manner than object position. The findings provide evidence that different grasp-relevant properties are encoded differently by the visual system. Furthermore, we argue that caution is required when making inferences about object size representations based on alterations in the grip component as these variations are confounded with the accuracy with which object position is represented. Instead fingertip forces seem to provide a reliable and confound-free measure to assess internal size estimations in conditions of increased visual uncertainty.
Affiliation(s)
- Louisa Miller: Department of Psychiatry, University of Cambridge, UK
- Gavin Buckingham: Department of Sport and Health Sciences, University of Exeter, UK
44.
Copley-Mills J, Connolly JD, Cavina-Pratesi C. Gender differences in non-standard mapping tasks: A kinematic study using pantomimed reach-to-grasp actions. Cortex 2016; 82:244-254. [PMID: 27410715] [DOI: 10.1016/j.cortex.2016.06.009]
Abstract
Comparison between real and pantomimed actions is used in neuroscience to dissociate stimulus-driven (real) from internally driven (pantomimed) visuomotor transformations, with the goal of testing models of vision (Milner & Goodale, 1995) and diagnosing neuropsychological deficits (apraxia syndrome). Real actions are overt movements directed toward a visible target, whereas pantomimed actions are overt movements directed toward an object that is no longer available. Although similar, real and pantomimed actions differ in their kinematic parameters and in their neural substrates: pantomimed reach-to-grasp actions show reduced reaching velocities, higher wrist movements, and reduced grip apertures. In addition, seminal neuropsychological studies and recent neuroimaging findings confirm that real and pantomimed actions are underpinned by separate brain networks. Although previous literature suggests differences in the praxis system between males and females, no research to date has investigated whether gender differences exist in the context of real versus pantomimed reach-to-grasp actions. We asked ten male and ten female participants to perform real and pantomimed reach-to-grasp actions toward objects of different sizes, either with or without visual feedback. During pantomimed actions, participants were required to pick up an imaginary object slightly offset relative to the location of the real one (which was in turn the target of the real reach-to-grasp actions). Results demonstrate a significant difference between the kinematic parameters recorded in male and female participants performing pantomimed, but not real, reach-to-grasp tasks, depending on the availability of visual feedback. With no feedback, both males and females showed smaller grip apertures, slower movement velocities, and lower reach heights. Crucially, these differences were abolished when visual feedback was available in male, but not in female, participants. Our results suggest that male and female participants should be evaluated separately in the clinical environment and in future research in the field.
45.
Ferretti G. Through the forest of motor representations. Conscious Cogn 2016; 43:177-196. [PMID: 27310110] [DOI: 10.1016/j.concog.2016.05.013]
Abstract
Following neuroscience, and using different labels, several philosophers have addressed the idea of a single representational mechanism lying in between (visual) perceptual processes and motor processes, involved in different functions and useful for shaping suitable action performances: a motor representation (MR). MRs are the naturalized mental antecedents of action. This paper presents a new, non-monolithic view of MRs, according to which, contrary to the received view, what lies between (visual) perceptual processes and motor processes is not a single representational mechanism with different functions, but an ensemble of different sub-representational phenomena, each with a different function. This new view avoids several issues emerging from the literature and addresses something the literature is silent about, which nevertheless turns out to be crucial for a theory of MRs.
Affiliation(s)
- Gabriele Ferretti: Department of Pure and Applied Science, University of Urbino Carlo Bo, Via Timoteo Viti, 10, 61029 Urbino, PU, Italy; Centre for Philosophical Psychology, University of Antwerp, Lange Sint Annastraat 7, 2000 Antwerpen, Belgium
46.
Beta band modulations underlie action representations for movement planning. Neuroimage 2016; 136:197-207. [PMID: 27173760] [DOI: 10.1016/j.neuroimage.2016.05.027]
Abstract
To be able to interact with our environment, we need to transform incoming sensory information into goal-directed motor outputs. Whereas our ability to plan an appropriate movement based on sensory information appears effortless and simple, the underlying brain dynamics are still largely unknown. Here we used magnetoencephalography (MEG) to investigate this issue by recording brain activity during the planning of non-visually guided reaching and grasping actions, performed with either the left or right hand. Adopting a combination of univariate and multivariate analyses, we revealed specific patterns of beta power modulations underlying varying levels of neural representations during movement planning. (1) Effector-specific modulations were evident as a decrease in power in the beta band. Within both hemispheres, this decrease was stronger while planning a movement with the contralateral hand. (2) The comparison of planned grasping and reaching led to a relative increase in power in the beta band. These power changes were localized within temporal, premotor and posterior parietal cortices. Action-related modulations overlapped with effector-related beta power changes within widespread frontal and parietal regions, suggesting the possible integration of these two types of neural representations. (3) Multivariate analyses of action-specific power changes revealed that part of this broadband beta modulation also contributed to the encoding of an effector-independent neural representation of a planned action within fronto-parietal and temporal regions. Our results suggest that beta band power modulations play a central role in movement planning, within both the dorsal and ventral stream, by coding and integrating different levels of neural representations, ranging from the simple representation of the to-be-moved effector up to an abstract, effector-independent representation of the upcoming action.
47
Van Dromme IC, Premereur E, Verhoef BE, Vanduffel W, Janssen P. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision. PLoS Biol 2016; 14:e1002445. [PMID: 27082854 PMCID: PMC4833303 DOI: 10.1371/journal.pbio.1002445]
Abstract
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.
Affiliation(s)
- Ilse C. Van Dromme
- KU Leuven, Laboratorium voor Neuro- en Psychofysiologie, Leuven, Belgium
- Elsie Premereur
- KU Leuven, Laboratorium voor Neuro- en Psychofysiologie, Leuven, Belgium
- Bram-Ernst Verhoef
- KU Leuven, Laboratorium voor Neuro- en Psychofysiologie, Leuven, Belgium
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Wim Vanduffel
- KU Leuven, Laboratorium voor Neuro- en Psychofysiologie, Leuven, Belgium
- Harvard Medical School, Boston, Massachusetts, United States of America
- MGH Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States of America
- Peter Janssen
- KU Leuven, Laboratorium voor Neuro- en Psychofysiologie, Leuven, Belgium
48
Cornelsen S, Rennig J, Himmelbach M. Memory-guided reaching in a patient with visual hemiagnosia. Cortex 2016; 79:32-41. [PMID: 27085893 DOI: 10.1016/j.cortex.2016.03.010]
Abstract
The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. We therefore compared movement accuracy in visually guided and memory-guided reaching in a new patient, HWS, who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccurate memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing.
Affiliation(s)
- Sonja Cornelsen
- Center for Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, Eberhard Karls University, Tuebingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tuebingen, Germany.
- Johannes Rennig
- Center for Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, Eberhard Karls University, Tuebingen, Germany; Knowledge Media Research Center, Neurocognition Lab, IWM-KMRC, Tübingen, Germany
- Marc Himmelbach
- Center for Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, Eberhard Karls University, Tuebingen, Germany; Centre for Integrative Neuroscience, Eberhard Karls University, Tuebingen, Germany
49
Hall AJ, Butler BE, Lomber SG. The cat's meow: A high-field fMRI assessment of cortical activity in response to vocalizations and complex auditory stimuli. Neuroimage 2016; 127:44-57. [DOI: 10.1016/j.neuroimage.2015.11.056]
50
Marangon M, Kubiak A, Króliczak G. Haptically Guided Grasping. fMRI Shows Right-Hemisphere Parietal Stimulus Encoding, and Bilateral Dorso-Ventral Parietal Gradients of Object- and Action-Related Processing during Grasp Execution. Front Hum Neurosci 2016; 9:691. [PMID: 26779002 PMCID: PMC4700263 DOI: 10.3389/fnhum.2015.00691]
Abstract
The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality.
Affiliation(s)
- Mattia Marangon
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland
- Agnieszka Kubiak
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland
- Gregory Króliczak
- Action and Cognition Laboratory, Department of Social Sciences, Institute of Psychology, Adam Mickiewicz University in Poznań, Poznań, Poland