1. Cesanek E, Shivkumar S, Ingram JN, Wolpert DM. Ouvrai opens access to remote virtual reality studies of human behavioural neuroscience. Nat Hum Behav 2024. [PMID: 38671286] [DOI: 10.1038/s41562-024-01834-7]
Abstract
Modern virtual reality (VR) devices record six-degree-of-freedom kinematic data with high spatial and temporal resolution and display high-resolution stereoscopic three-dimensional graphics. These capabilities make VR a powerful tool for many types of behavioural research, including studies of sensorimotor, perceptual and cognitive functions. Here we introduce Ouvrai, an open-source solution that facilitates the design and execution of remote VR studies, capitalizing on the surge in VR headset ownership. This tool allows researchers to develop sophisticated experiments using cutting-edge web technologies such as WebXR to enable browser-based VR, without compromising on experimental design. Ouvrai's features include easy installation, intuitive JavaScript templates, a component library managing front- and backend processes and a streamlined workflow. It integrates with Firebase, Prolific and Amazon Mechanical Turk and provides data processing utilities for analysis. Unlike other tools, Ouvrai remains free, with researchers managing their web hosting and cloud database via personal Firebase accounts. Ouvrai is not limited to VR studies; researchers can also develop and run desktop or touchscreen studies using the same streamlined workflow. Through three distinct motor learning experiments, we confirm Ouvrai's efficiency and viability for conducting remote VR studies.
Affiliation(s)
- Evan Cesanek: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Sabyasachi Shivkumar: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- James N Ingram: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Daniel M Wolpert: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
2. Rasman BG, Blouin JS, Nasrabadi AM, van Woerkom R, Frens MA, Forbes PA. Learning to stand with sensorimotor delays generalizes across directions and from hand to leg effectors. Commun Biol 2024; 7:384. [PMID: 38553561] [PMCID: PMC10980713] [DOI: 10.1038/s42003-024-06029-4]
Abstract
Humans receive sensory information from the past, requiring the brain to overcome delays to perform daily motor skills such as standing upright. Because delays vary throughout the body and change over a lifetime, it would be advantageous to generalize learned control policies of balancing with delays across contexts. However, not all forms of learning generalize. Here, we use a robotic simulator to impose delays into human balance. When delays are imposed in one direction of standing, participants are initially unstable but relearn to balance by reducing the variability of their motor actions and transfer balance improvements to untrained directions. Upon returning to normal standing, aftereffects from learning are observed as small oscillations in control, yet they do not destabilize balance. Remarkably, when participants train to balance with delays using their hand, learning transfers to standing with the legs. Our findings establish that humans use experience to broadly update their neural control to balance with delays.
Affiliation(s)
- Brandon G Rasman: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands; School of Physical Education, Sport and Exercise Sciences, University of Otago, Dunedin, New Zealand; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Jean-Sébastien Blouin: School of Kinesiology, University of British Columbia, Vancouver, BC, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada
- Amin M Nasrabadi: School of Kinesiology, University of British Columbia, Vancouver, BC, Canada
- Remco van Woerkom: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Maarten A Frens: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Patrick A Forbes: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
3. Zhang Z, Cesanek E, Ingram JN, Flanagan JR, Wolpert DM. Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects. J Neurophysiol 2023; 129:285-297. [PMID: 36350057] [PMCID: PMC9886355] [DOI: 10.1152/jn.00414.2022]
Abstract
Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing slower and more demanding processing of visual properties to predict weight. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial a randomly chosen object translated onto the participant's hand and they had to anticipate the object's weight by generating an equivalent upward force. Across conditions we could control whether the visual appearance and/or location of the objects were informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights participants used this information to achieve faster prediction than observed when prediction is based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering working memory demands involved in predicting weight from object visual properties.
NEW & NOTEWORTHY We use a novel object support task using a three-dimensional robotic interface and virtual reality system to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.
Affiliation(s)
- Zhaoran Zhang: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York; Department of Neuroscience, Columbia University, New York, New York
- Evan Cesanek: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York; Department of Neuroscience, Columbia University, New York, New York
- James N Ingram: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York; Department of Neuroscience, Columbia University, New York, New York
- J Randall Flanagan: Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Daniel M Wolpert: Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York; Department of Neuroscience, Columbia University, New York, New York
4. Are tools truly incorporated as an extension of the body representation? Assessing the evidence for tool embodiment. Psychon Bull Rev 2022; 29:343-368. [PMID: 35322322] [DOI: 10.3758/s13423-021-02032-6]
Abstract
The predominant view on human tool-use suggests that an action-oriented body representation, the body schema, is altered to fit the tool being wielded, a phenomenon termed tool embodiment. While observations of perceptual change after tool-use purport to support this hypothesis, several issues undermine their validity in this context, discussed at length in this critical review. The primary measures used as indicators of tool embodiment each face unique challenges to their construct validity. Further, the perceptual changes taken as indicating extension of the body representation only appear to account for a fraction of the tool's size in any given experiment, and do not demonstrate the covariance with tool length that the embodiment hypothesis would predict. The expression of tool embodiment also appears limited to a narrow range of tool-use tasks, as deviations from a simple reaching paradigm can mollify or eliminate embodiment effects altogether. The shortcomings identified here generate important avenues for future research. Until the source of the kinematic and perceptual effects that have substantiated tool embodiment is disambiguated, the hypothesis that the body representation changes to fit tools during tool-use should not be favored over other possibilities such as the formation of separable internal tool models, which seem to offer a more complete account of human tool-use behaviors. Indeed, studies of motor learning have observed analogous perceptual changes as aftereffects to adaptation despite the absence of handheld tool-use, offering a compelling alternative explanation, though more work is needed to confirm this possibility.
5. Mariscal DM, Vasudevan EVL, Malone LA, Torres-Oviedo G, Bastian AJ. Context-Specificity of Locomotor Learning Is Developed during Childhood. eNeuro 2022; 9:ENEURO.0369-21.2022. [PMID: 35346963] [PMCID: PMC9036623] [DOI: 10.1523/eneuro.0369-21.2022]
Abstract
Humans can perform complex movements with speed and agility in the face of constantly changing task demands. To accomplish this, motor plans are adapted to account for errors in our movements because of changes in our body (e.g., growth or injury) or in the environment (e.g., walking on sand vs ice). It has been suggested that adaptation that occurs in response to changes in the state of our body will generalize across different movement contexts and environments, whereas adaptation that occurs with alterations in the external environment will be context-specific. Here, we asked whether the ability to form generalizable versus context-specific motor memories develops during childhood. We performed a cross-sectional study of context-specific locomotor adaptation in 35 children (3-18 years old) and 7 adults (19-31 years old). Subjects first adapted their gait and learned a new walking pattern on a split-belt treadmill, which has two belts that move each leg at a different speed. Then, subjects walked overground to assess the generalization of the adapted walking pattern across different environments. Our results show that the generalization of treadmill after-effects to overground walking decreases as subjects' age increases, indicating that age and experience are critical factors regulating the specificity of motor learning. Our results suggest that although basic locomotor patterns are established by two years of age, brain networks required for context-specific locomotor learning are still being developed throughout youth.
Affiliation(s)
- Dulce M Mariscal: Bioengineering Department, University of Pittsburgh, Pittsburgh, PA 15260; Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Erin V L Vasudevan: Kennedy Krieger Institute, Baltimore, MD 21205; School of Health Technology and Management, Stony Brook University, Stony Brook, NY 11794
- Laura A Malone: Neurology Department, Johns Hopkins University, Baltimore, MD 21205; Physical Medicine and Rehabilitation Department, Johns Hopkins University, Baltimore, MD 21205; Kennedy Krieger Institute, Baltimore, MD 21205
- Gelsy Torres-Oviedo: Bioengineering Department, University of Pittsburgh, Pittsburgh, PA 15260; Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Amy J Bastian: Neuroscience Department, Johns Hopkins University, Baltimore, MD 21205; Kennedy Krieger Institute, Baltimore, MD 21205
6. Mangalam M, Fragaszy DM, Wagman JB, Day BM, Kelty-Stephen DG, Bongers RM, Stout DW, Osiurak F. On the psychological origins of tool use. Neurosci Biobehav Rev 2022; 134:104521. [PMID: 34998834] [DOI: 10.1016/j.neubiorev.2022.104521]
Abstract
The ubiquity of tool use in human life has generated multiple lines of scientific and philosophical investigation to understand the development and expression of humans' engagement with tools and its relation to other dimensions of human experience. However, existing literature on tool use faces several epistemological challenges in which the same set of questions generates many different answers. At least four critical questions can be identified, which are intimately intertwined: (1) What constitutes tool use? (2) What psychological processes underlie tool use in humans and nonhuman animals? (3) Which of these psychological processes are exclusive to tool use? (4) Which psychological processes involved in tool use are exclusive to Homo sapiens? To help advance a multidisciplinary scientific understanding of tool use, six author groups representing different academic disciplines (e.g., anthropology, psychology, neuroscience) and different theoretical perspectives respond to each of these questions, and then point to directions for future work on tool use. We find that while there are marked differences among the responses of the respective author groups to each question, there is a surprising degree of agreement about many essential concepts and questions. We believe that this interdisciplinary and intertheoretical discussion will foster a more comprehensive understanding of tool use than any one of these perspectives (or any one of these author groups) would (or could) on their own.
Affiliation(s)
- Madhur Mangalam: Department of Physical Therapy, Movement and Rehabilitation Science, Northeastern University, Boston, Massachusetts 02115, USA
- Jeffrey B Wagman: Department of Psychology, Illinois State University, Normal, IL 61761, USA
- Brian M Day: Department of Psychology, Butler University, Indianapolis, IN 46208, USA
- Raoul M Bongers: Department of Human Movement Sciences, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, Netherlands
- Dietrich W Stout: Department of Anthropology, Emory University, Atlanta, GA 30322, USA
- François Osiurak: Laboratoire d'Etude des Mécanismes Cognitifs, Université de Lyon, Lyon 69361, France; Institut Universitaire de France, Paris 75231, France
7. Forano M, Schween R, Taylor JA, Hegele M, Franklin DW. Direct and indirect cues can enable dual adaptation, but through different learning processes. J Neurophysiol 2021; 126:1490-1506. [PMID: 34550024] [DOI: 10.1152/jn.00166.2021]
Abstract
Switching between motor tasks requires accurate adjustments for changes in dynamics (grasping a cup) or sensorimotor transformations (moving a computer mouse). Dual-adaptation studies have investigated how learning of context-dependent dynamics or transformations is enabled by sensory cues. However, certain cues, such as color, have shown mixed results. We propose that these mixed results may arise from two major classes of cues: "direct" cues, which are part of the dynamic state, and "indirect" cues, which are not. We hypothesized that explicit strategies would primarily account for the adaptation of an indirect color cue but would be limited to simple tasks, whereas a direct visual separation cue would allow implicit adaptation regardless of task complexity. To test this idea, we investigated the relative contribution of implicit and explicit learning in relation to contextual cue type (colored or visually shifted workspace) and task complexity (1 or 8 targets) in a dual-adaptation task. We found that the visual workspace location cue enabled adaptation across conditions primarily through implicit adaptation. In contrast, we found that the color cue was largely ineffective for dual adaptation, except in a small subset of participants who appeared to use explicit strategies. Our study suggests that the previously inconclusive role of color cues in dual adaptation may be explained by differential contribution of explicit strategies across conditions.
NEW & NOTEWORTHY We present evidence that learning of context-dependent dynamics proceeds via different processes depending on the type of sensory cue used to signal the context. Visual workspace location enabled learning different dynamics implicitly, presumably because it directly enters the dynamic state estimate. In contrast, a color cue was only successful where learners were apparently able to leverage explicit strategies to account for changed dynamics. This suggests a unifying explanation for the previously inconclusive role of color cues.
Affiliation(s)
- Marion Forano: Department of Sport and Health Sciences, Technical University of Munich, Munich, Germany
- Raphael Schween: Department of Psychology and Sport Science, Justus Liebig University, Giessen, Germany; Department of Psychology, Philipps-University, Marburg, Germany
- Jordan A Taylor: Department of Psychology, Princeton University, Princeton, New Jersey
- Mathias Hegele: Department of Psychology and Sport Science, Justus Liebig University, Giessen, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Giessen, Marburg and Giessen, Germany
- David W Franklin: Department of Sport and Health Sciences, Technical University of Munich, Munich, Germany; Munich Institute of Robotics and Machine Intelligence, Technical University of Munich, Munich, Germany; Munich Data Science Institute, Technical University of Munich, Munich, Germany
8. Ziaeetabar F, Pomp J, Pfeiffer S, El-Sourani N, Schubotz RI, Tamosiunaite M, Wörgötter F. Using enriched semantic event chains to model human action prediction based on (minimal) spatial information. PLoS One 2020; 15:e0243829. [PMID: 33370343] [PMCID: PMC7769489] [DOI: 10.1371/journal.pone.0243829]
Abstract
Predicting other people’s upcoming action is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues and features regarding the acting person’s identity. We here focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects had been abstracted by emulating them with cubes such that participants could not infer an action using object information. Instead, participants had to rely only on the limited information that comes from the changes in the spatial relations between the cubes. In spite of these constraints, participants were able to predict actions in, on average, less than 64% of the action’s duration. Furthermore, we employed a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates the information of different types of spatial relations: (a) objects’ touching/untouching, (b) static spatial relations between objects and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using information theoretical analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction is able to produce faster decisions based on individual cues. We argue that human strategy, though slower, may be particularly beneficial for prediction of natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals afford inferring observed actions’ goals even before full goal accomplishment, and may open new avenues for building robots for conflict-free human-robot cooperation.
Affiliation(s)
- Fatemeh Ziaeetabar: Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- Jennifer Pomp: Department of Psychology, University of Münster, Münster, Germany
- Stefan Pfeiffer: Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- Minija Tamosiunaite: Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany; Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Florentin Wörgötter: Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
9. Mariscal DM, Iturralde PA, Torres-Oviedo G. Altering attention to split-belt walking increases the generalization of motor memories across walking contexts. J Neurophysiol 2020; 123:1838-1848. [PMID: 32233897] [DOI: 10.1152/jn.00509.2019]
Abstract
Little is known about the impact of attention during motor adaptation tasks on how movements adapted in one context generalize to another. We investigated this by manipulating subjects' attention to their movements while exposing them to split-belt walking (i.e., legs moving at different speeds), which is known to induce locomotor adaptation. We hypothesized that reducing subjects' attention to their movements by distracting them as they adapted their walking pattern would facilitate the generalization of recalibrated movements beyond the training environment. We reasoned that awareness of the novel split-belt condition could be used to consciously contextualize movements to that particular situation. To test this hypothesis, young adults adapted their gait on a split-belt treadmill while they observed visual information that either distracted them or made them aware of the belt's speed difference. We assessed adaptation and aftereffects of spatial and temporal gait features known to adapt and generalize differently in different environments. We found that all groups adapted similarly by reaching the same steady-state values for all gait parameters at the end of the adaptation period. In contrast, both groups with altered attention to the split-belts environment (distraction and awareness groups) generalized their movements from the treadmill to overground more than controls, who walked without altered attention. This was specifically observed in the generalization of step time (temporal gait feature), which might be less susceptible to online corrections during walking overground. These results suggest that altering attention to one's movements during sensorimotor adaptation facilitates the generalization of movement recalibration.
NEW & NOTEWORTHY Little is known about how attention affects the generalization of motor recalibration induced by sensorimotor adaptation paradigms. We showed that altering attention to movements on a split-belt treadmill led to greater adaptation effects in subjects walking overground. Thus our results suggest that altering patients' attention to their actions during sensorimotor adaptation protocols could lead to greater generalization of corrected movements when moving without the training device.
Affiliation(s)
- Dulce M Mariscal: Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania
- Pablo A Iturralde: Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania
- Gelsy Torres-Oviedo: Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania
10. Maeda RS, Zdybal JM, Gribble PL, Pruszynski JA. Generalizing movement patterns following shoulder fixation. J Neurophysiol 2020; 123:1193-1205. [PMID: 32101490] [DOI: 10.1152/jn.00696.2019]
Abstract
Generalizing newly learned movement patterns beyond the training context is challenging for most motor learning situations. Here we tested whether learning of a new physical property of the arm during self-initiated reaching generalizes to new arm configurations. Human participants performed a single-joint elbow reaching task and/or countered mechanical perturbations that created pure elbow motion with the shoulder joint free to rotate or locked by the manipulandum. With the shoulder free, we found activation of shoulder extensor muscles for pure elbow extension trials, appropriate for countering torques that arise at the shoulder due to forearm rotation. After locking the shoulder joint, we found a partial reduction in shoulder muscle activity, appropriate because locking the shoulder joint cancels the torques that arise at the shoulder due to forearm rotation. In our first three experiments, we tested whether and to what extent this partial reduction in shoulder muscle activity generalizes when reaching in different situations: 1) different initial shoulder orientation, 2) different initial elbow orientation, and 3) different reach distance/speed. We found generalization for the different shoulder orientation and reach distance/speed as measured by a reliable reduction in shoulder activity in these situations but no generalization for the different elbow orientation. In our fourth experiment, we found that generalization is also transferred to feedback control by applying mechanical perturbations and observing reflex responses in a distinct shoulder orientation. These results indicate that partial learning of new intersegmental dynamics is not sufficient for modifying a general internal model of arm dynamics.
NEW & NOTEWORTHY Here we show that partially learning to reduce shoulder muscle activity following shoulder fixation generalizes to other movement conditions, but it does not generalize globally. These findings suggest that the partial learning of new intersegmental dynamics is not sufficient for modifying a general internal model of the arm's dynamics.
Affiliation(s)
- Rodrigo S Maeda: Brain and Mind Institute, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada
- Julia M Zdybal: Brain and Mind Institute, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Physiology and Pharmacology, Western University, London, Ontario, Canada
- Paul L Gribble: Brain and Mind Institute, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada; Department of Physiology and Pharmacology, Western University, London, Ontario, Canada
- J Andrew Pruszynski: Brain and Mind Institute, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada; Department of Physiology and Pharmacology, Western University, London, Ontario, Canada
11. Hasson CJ, Jalili PF. Visual dynamics cues in learning complex physical interactions. Sci Rep 2019; 9:13496. [PMID: 31534158] [PMCID: PMC6751185] [DOI: 10.1038/s41598-019-49637-5]
Abstract
This study investigated the role of visual dynamics cues (VDCs) in learning to interact with a complex physical system. Manual gait training was used as an exemplary case, as it requires therapists to control the non-trivial locomotor dynamics of patients. A virtual analog was developed that allowed naïve subjects to manipulate the leg of a virtual stroke survivor (a virtual patient; VP) walking on a treadmill using a small robotic manipulandum. The task was to make the VP's leg pass through early, mid, and late swing gait targets. One group of subjects (n = 17) started practice seeing the VP's affected thigh and shank (i.e., VDCs); a second control group (n = 16) only saw the point-of-contact (VP ankle). It was hypothesized that, if seeing the VP's leg provides beneficial dynamics information, the VDC group would have better task performance and generalization than controls. Results were not supportive. Both groups had similar task performance, and for the late swing gait target, a decrement in manipulative accuracy was observed when VDCs were removed in a generalization task. This suggests that when learning to manipulate complex dynamics, VDCs can create a dependency that negatively affects generalization if the visual context is changed.
Affiliation(s)
- Christopher J Hasson
- Neuromotor Systems Laboratory, Northeastern University, Boston, MA, USA; Departments of Physical Therapy, Movement and Rehabilitation Sciences, Bioengineering, and Biology, Northeastern University, Boston, MA, USA.
- Paneed F Jalili
- Neuromotor Systems Laboratory, Northeastern University, Boston, MA, USA; Departments of Physical Therapy, Movement and Rehabilitation Sciences, Bioengineering, and Biology, Northeastern University, Boston, MA, USA.
12

13
Sadeghi M, Sheahan HR, Ingram JN, Wolpert DM. The visual geometry of a tool modulates generalization during adaptation. Sci Rep 2019; 9:2731. [PMID: 30804540; PMCID: PMC6389992; DOI: 10.1038/s41598-019-39507-5]
Abstract
Knowledge about a tool’s dynamics can be acquired from the visual configuration of the tool and through physical interaction. Here, we examine how visual information affects the generalization of dynamic learning during tool use. Subjects rotated a virtual hammer-like object while we varied the object dynamics separately for two rotational directions. This allowed us to quantify the coupling of adaptation between the directions, that is, how adaptation transferred from one direction to the other. Two groups experienced the same dynamics of the object. For one group, the object’s visual configuration was displayed, while for the other, the visual display was uninformative as to the dynamics. We fit a range of context-dependent state-space models to the data, comparing different forms of coupling. We found that when the object’s visual configuration was explicitly provided, there was substantial coupling, such that 31% of learning in one direction transferred to the other. In contrast, when the visual configuration was ambiguous, despite experiencing the same dynamics, the coupling was reduced to 12%. Our results suggest that generalization of dynamic learning of a tool relies not only on its dynamic behaviour but also on the visual configuration with which the dynamics are associated.
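The coupled-adaptation idea in this abstract can be sketched with a toy two-context state-space learner. This is an illustration only, not the authors' fitted model; the learning rate, retention factor, and trial count are assumed values, and the coupling parameter is set to the 31% transfer reported for the visible-configuration group.

```python
# Toy sketch (assumed parameters, not the published fitted model): a fraction
# `coupling` of every adaptive update in one rotation direction transfers to
# the other direction.
import numpy as np

def simulate(perturbation, coupling, retention=0.99, lr=0.2, trials=200):
    """Train on `perturbation` in direction 0 only; return both states."""
    x = np.zeros(2)                      # internal estimate per direction
    for _ in range(trials):
        error = perturbation - x[0]      # error experienced in direction 0
        x *= retention                   # trial-to-trial forgetting
        x[0] += lr * error               # direct learning
        x[1] += coupling * lr * error    # coupled (transferred) learning
    return x

x_seen = simulate(perturbation=1.0, coupling=0.31)
# The fraction of learning expressed in the untrained direction equals the
# coupling parameter, mirroring how transfer indexes coupling in the study.
print(round(x_seen[1] / x_seen[0], 2))
```

Because the untrained state receives exactly `coupling` times each update and decays identically, the expressed transfer ratio recovers the coupling parameter regardless of the other settings.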
Affiliation(s)
- Mohsen Sadeghi
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK.
- Hannah R Sheahan
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK.
- James N Ingram
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK; Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
- Daniel M Wolpert
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK; Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
14
Hasson CJ, Goodman SE. Learning to shape virtual patient locomotor patterns: internal representations adapt to exploit interactive dynamics. J Neurophysiol 2019; 121:321-335. [PMID: 30403561; PMCID: PMC6383669; DOI: 10.1152/jn.00408.2018]
Abstract
This work aimed to understand the sensorimotor processes used by humans when learning how to manipulate a virtual model of locomotor dynamics. Prior research shows that when interacting with novel dynamics humans develop internal models that map neural commands to limb motion and vice versa. Whether this can be extrapolated to locomotor rehabilitation, a continuous and rhythmic activity that involves dynamically complex interactions, is unknown. In this case, humans could default to model-free strategies. These competing hypotheses were tested with a novel interactive locomotor simulator that reproduced the dynamics of hemiparetic gait. A group of 16 healthy subjects practiced using a small robotic manipulandum to alter the gait of a virtual patient (VP) that had an asymmetric locomotor pattern modeled after stroke survivors. The point of interaction was the ankle of the VP's affected leg, and the goal was to make the VP's gait symmetric. Internal model formation was probed with unexpected force channels and null force fields. Generalization was assessed by changing the target locomotor pattern and comparing outcomes with a second group of 10 naive subjects who did not practice the initial symmetric target pattern. Results supported the internal model hypothesis with aftereffects and generalization of manipulation skill. Internal models demonstrated refinements that capitalized on the natural pendular dynamics of human locomotion. This work shows that despite the complex interactive dynamics involved in shaping locomotor patterns, humans nevertheless develop and use internal models that are refined with experience.

NEW & NOTEWORTHY This study aimed to understand how humans manipulate the physics of locomotion, a common task for physical therapists during locomotor rehabilitation. To achieve this aim, a novel locomotor simulator was developed that allowed participants to feel like they were manipulating the leg of a miniature virtual stroke survivor walking on a treadmill. As participants practiced improving the simulated patient's gait, they developed generalizable internal models that capitalized on the natural pendular dynamics of locomotion.
Affiliation(s)
- Christopher J Hasson
- Neuromotor Systems Laboratory, Department of Physical Therapy, Movement, and Rehabilitation Sciences, Northeastern University, Boston, Massachusetts
- Department of Bioengineering, Northeastern University, Boston, Massachusetts
- Department of Biology, Northeastern University, Boston, Massachusetts
- Sarah E Goodman
- Department of Bioengineering, Northeastern University, Boston, Massachusetts
15
Farshchian A, Sciutti A, Pressman A, Nisky I, Mussa-Ivaldi FA. Energy exchanges at contact events guide sensorimotor integration. eLife 2018; 7:e32587. [PMID: 29809144; PMCID: PMC5990365; DOI: 10.7554/eLife.32587]
Abstract
The brain must consider the arm’s inertia to predict the arm's movements elicited by commands impressed upon the muscles. Here, we present evidence suggesting that the integration of sensory information leading to the representation of the arm's inertia does not take place continuously in time but only at discrete transient events, in which kinetic energy is exchanged between the arm and the environment. We used a visuomotor delay to induce cross-modal variations in state feedback and uncovered that the difference between visual and proprioceptive velocity estimations at isolated collision events was compensated by a change in the representation of arm inertia. The compensation maintained an invariant estimate across modalities of the expected energy exchange with the environment. This invariance captures different types of dysmetria observed across individuals following prolonged exposure to a fixed intermodal temporal perturbation and provides a new interpretation for cerebellar ataxia.
Affiliation(s)
- Ali Farshchian
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA; Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Alessandra Sciutti
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Assaf Pressman
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Ilana Nisky
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheba, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
- Ferdinando A Mussa-Ivaldi
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA; Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA
16
Heald JB, Ingram JN, Flanagan JR, Wolpert DM. Multiple motor memories are learned to control different points on a tool. Nat Hum Behav 2018; 2:300-311. [PMID: 29736420; PMCID: PMC5935225; DOI: 10.1038/s41562-018-0324-5]
Affiliation(s)
- James B Heald
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK.
- James N Ingram
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK.
- J Randall Flanagan
- Centre for Neuroscience Studies and Department of Psychology, Queen's University, Kingston, ON, Canada.
- Daniel M Wolpert
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK.
17
Sadnicka A, Kornysheva K, Rothwell JC, Edwards MJ. A unifying motor control framework for task-specific dystonia. Nat Rev Neurol 2018; 14:116-124. [PMID: 29104291; PMCID: PMC5975945; DOI: 10.1038/nrneurol.2017.146]
Abstract
Task-specific dystonia is a movement disorder characterized by a painless loss of dexterity specific to a particular motor skill. This disorder is prevalent among writers, musicians, dancers and athletes. No current treatment is predictably effective, and the disorder generally ends the careers of affected individuals. Traditional disease models of dystonia have a number of limitations with regard to task-specific dystonia. We therefore discuss emerging evidence that the disorder has its origins within normal compensatory mechanisms of a healthy motor system in which the representation and reproduction of motor skill are disrupted. We describe how risk factors for task-specific dystonia can be stratified and translated into mechanisms of dysfunctional motor control. The proposed model aims to define new directions for experimental research and stimulate therapeutic advances for this highly disabling disorder.
Affiliation(s)
- Anna Sadnicka
- Sobell Department for Motor Neuroscience, Institute of Neurology, University College London, 33 Queen Square, London WC1N 3BG, UK, and the Motor Control and Movement Disorders Group, St George's University of London, Cranmer Terrace, Tooting, London SW17 0RE, UK
- Katja Kornysheva
- School of Psychology, Bangor University, Adeilad Brigantia, Penrallt Road, Gwynedd LL57 2AS, Wales, UK, and the Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AZ, UK
- John C Rothwell
- Sobell Department for Motor Neuroscience, Institute of Neurology, University College London, 33 Queen Square, London WC1N 3BG, UK
- Mark J Edwards
- Motor Control and Movement Disorders Group, St George's University of London, Cranmer Terrace, Tooting, London SW17 0RE, UK
18
Ingram JN, Sadeghi M, Flanagan JR, Wolpert DM. An error-tuned model for sensorimotor learning. PLoS Comput Biol 2017; 13:e1005883. [PMID: 29253869; PMCID: PMC5749863; DOI: 10.1371/journal.pcbi.1005883]
Abstract
Current models of sensorimotor control posit that motor commands are generated by combining multiple modules which may consist of internal models, motor primitives or motor synergies. The mechanisms which select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. In the model, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged. This top-down module selection is implemented by a Gaussian function tuned for the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another. In contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of contextual information normally provided by vision. We confirm both these predictions experimentally and show that the model can also account for data from previous experiments. Our results suggest that two interacting processes account for module selection during sensorimotor control and learning.

Research in motor learning has focused on how we acquire new motor memories for novel situations. However, in many real world motor tasks, the challenge is to select appropriate memories for a given context. In such tasks, we are guided by two key types of information. First, contextual information from vision (for example) is available before we perform the task. Second, movement errors are available as we begin to perform the task. Here we present a model that provides a mechanism by which these two processes operate in parallel to enable us to tune and adapt our motor commands. We show that a model consisting of multiple simple modules, each of which can correct errors in a single direction only, can account for learning in multidimensional tasks. The model makes predictions about which tasks should interfere and how experience of errors alone without any contextual information can drive learning. We confirm these predictions in a series of experiments. The model provides a new framework for understanding the interaction between task context and error feedback during sensorimotor control and learning.
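The cosine-tuned adaptation rule described in this abstract can be sketched numerically. This is an illustrative sketch under assumed parameters (eight evenly spaced primitives, a made-up learning rate), not the published implementation; it only shows that cosine tuning of primitive weights to the error direction drives the net output to cancel a fixed error.

```python
# Minimal sketch (assumed parameters, not the authors' code): a bank of motor
# primitives with evenly spaced preferred directions on the circle. Each
# primitive's weight adapts in proportion to the cosine of the angle between
# its preferred direction and the current error (i.e. the projection of the
# error vector onto that direction).
import numpy as np

n_primitives = 8
pref = np.linspace(0.0, 2.0 * np.pi, n_primitives, endpoint=False)
units = np.column_stack([np.cos(pref), np.sin(pref)])  # unit preferred dirs

def net_output(weights):
    # Net compensation is the weighted sum of the primitives' directions.
    return weights @ units

def adapt(weights, error, lr=0.1):
    # Cosine tuning: update each weight by the error's projection onto the
    # primitive's preferred direction.
    return weights + lr * (units @ error)

weights = np.zeros(n_primitives)
target = np.array([1.0, 0.0])  # compensation required by the perturbation
for _ in range(60):
    weights = adapt(weights, target - net_output(weights))
```

After training, `net_output(weights)` matches the target. Note the model's orthogonality prediction falls out of this rule: an error along +y projects to zero onto the x-aligned component of the solution, so learning two perturbations with orthogonal errors does not interfere.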
Affiliation(s)
- James N Ingram
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- Mohsen Sadeghi
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Daniel M Wolpert
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
19
Kong G, Zhou Z, Wang Q, Kording K, Wei K. Credit assignment between body and object probed by an object transportation task. Sci Rep 2017; 7:13415. [PMID: 29042671; PMCID: PMC5645448; DOI: 10.1038/s41598-017-13889-w]
Abstract
It has been proposed that learning from movement errors involves a credit assignment problem: did I misestimate properties of the object or those of my body? For example, an overestimate of arm strength and an underestimate of the weight of a coffee cup can both lead to coffee spills. Though previous studies have found signs of simultaneous learning of the object and of the body during object manipulation, there is little behavioral evidence about their quantitative relation. Here we employed a novel weight-transportation task, in which participants first lifted a cup filled with liquid, allowing us to assess their learning from errors. Specifically, we examined their transfer of learning when switching to the contralateral hand, to a second identical cup, or when switching both hands and cups. By comparing these transfer behaviors, we found that 25% of the learning was attributed to the object (simply because of the use of the same cup) and 58% of the learning was attributed to the body (simply because of the use of the same hand). The nervous system thus seems to partition the learning of object manipulation between the object and the body.
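The decomposition logic can be made concrete with a few lines of arithmetic. The 25% and 58% figures are from the abstract; treating the two transfer conditions as directly isolating the object and body components is an illustration of the reasoning, not the authors' statistical analysis.

```python
# Worked arithmetic for the credit-assignment partition (illustrative only):
# transfer when only the cup is shared isolates credit assigned to the
# object; transfer when only the hand is shared isolates credit assigned
# to the body; the remainder is unaccounted for by either.
object_credit = 0.25   # learning retained with same cup, other hand
body_credit = 0.58     # learning retained with same hand, new cup
residual = 1.0 - object_credit - body_credit
print(f"object: {object_credit:.0%}, body: {body_credit:.0%}, "
      f"unaccounted: {residual:.0%}")
```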
Affiliation(s)
- Gaiqing Kong
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Beijing, China; Key Laboratory of Machine Perception, Ministry of Education, Beijing, China; Department of Neuroscience and Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Zhihao Zhou
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Beijing, China; Key Laboratory of Machine Perception, Ministry of Education, Beijing, China
- Qining Wang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Beijing, China; Key Laboratory of Machine Perception, Ministry of Education, Beijing, China
- Konrad Kording
- Department of Neuroscience and Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Kunlin Wei
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Beijing, China; Key Laboratory of Machine Perception, Ministry of Education, Beijing, China; Peking-Tsinghua Center for Life Sciences, Beijing, 100080, China
20
Valero-Cuevas FJ, Santello M. On neuromechanical approaches for the study of biological and robotic grasp and manipulation. J Neuroeng Rehabil 2017; 14:101. [PMID: 29017508; PMCID: PMC5635506; DOI: 10.1186/s12984-017-0305-3]
Abstract
Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank and open-minded assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas at the interface of neuromechanics, neuroscience, rehabilitation and robotics.
Affiliation(s)
- Francisco J Valero-Cuevas
- Biomedical Engineering Department, University of Southern California, Los Angeles, CA, USA.
- Division of Biokinesiology & Physical Therapy, University of Southern California, Los Angeles, CA, USA.
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
21
Envisioning the qualitative effects of robot manipulation actions using simulation-based projections. Artif Intell 2017. [DOI: 10.1016/j.artint.2014.12.004]
22
Takahashi C, Watt SJ. Optimal visual-haptic integration with articulated tools. Exp Brain Res 2017; 235:1361-1373. [PMID: 28214998; PMCID: PMC5380699; DOI: 10.1007/s00221-017-4896-5]
Abstract
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
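The maximum-likelihood integrator used as the optimal benchmark here follows the standard cue-combination formula: each cue is weighted by its reliability (inverse variance), and the combined variance is never worse than the better single cue. The numbers below are assumed values for illustration, not data from the study.

```python
# Textbook maximum-likelihood cue combination (the general formula, not the
# paper's specific fitting procedure).
def mle_combine(mu_v, var_v, mu_h, var_h):
    """Optimally fuse a visual and a haptic estimate of the same property."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)  # reliability weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_h               # combined estimate
    var = (var_v * var_h) / (var_v + var_h)            # combined variance
    return mu, var

# Hypothetical size estimates (cm): vision more reliable than haptics.
mu, var = mle_combine(mu_v=5.0, var_v=1.0, mu_h=5.4, var_h=2.0)
print(round(mu, 2), round(var, 2))
```

The combined variance `var` is below both single-cue variances, which is exactly the precision benefit the study uses to index whether integration occurred.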
Affiliation(s)
- Chie Takahashi
- School of Computer Science, University of Birmingham, Birmingham, UK
- Simon J Watt
- Wolfson Centre for Cognitive Neuroscience, School of Psychology, Bangor University, Penrallt Rd., Bangor, Gwynedd, LL57 2AS, UK.
23
Unusual hand postures but not familiar tools show motor equivalence with precision grasping. Cognition 2016; 151:28-36. [DOI: 10.1016/j.cognition.2016.02.013]
24
Individualistic weight perception from motion on a slope. Sci Rep 2016; 6:25432. [PMID: 27174036; PMCID: PMC4865871; DOI: 10.1038/srep25432]
Abstract
Perception of an object’s weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we report a contradictory finding: not all humans share the same motion-weight pairing. A virtual environment where participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions were found when changes in environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants’ weight perception. This phenomenon may offer evidence of the central nervous system’s ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models.
25
Lee-Miller T, Marneweck M, Santello M, Gordon AM. Visual Cues of Object Properties Differentially Affect Anticipatory Planning of Digit Forces and Placement. PLoS One 2016; 11:e0154033. [PMID: 27100830; PMCID: PMC4839688; DOI: 10.1371/journal.pone.0154033]
Abstract
Studies on anticipatory planning of object manipulation showed initial task failure (i.e., object roll) when visual object shape cues are incongruent with other visual cues, such as weight distribution/density (e.g., symmetrically shaped object with an asymmetrical density). This suggests that shape cues override density cues. However, these studies typically only measured forces, with digit placement constrained. Recent evidence suggests that when digit placement is unconstrained, subjects modulate digit forces and placement. Thus, unconstrained digit placement might be modulated on initial trials (since it is an explicit process), but not forces (since it is an implicit process). We tested whether shape and density cues would differentially influence anticipatory planning of digit placement and forces during initial trials of a two-digit object manipulation task. Furthermore, we tested whether shape cues would override density cues when cues are incongruent. Subjects grasped and lifted an object with the aim of preventing roll. In Experiment 1, the object was symmetrically shaped, but with asymmetrical density (incongruent cues). In Experiment 2, the object was asymmetrical in shape and density (congruent cues). In Experiment 3, the object was asymmetrically shaped, but with symmetrical density (incongruent cues). Results showed differential modulation of digit placement and forces (modulation of load force but not placement), but only when shape and density cues were congruent. When shape and density cues were incongruent, we found collinear digit placement and symmetrical force sharing. This suggests that congruent and incongruent shape and density cues differentially influence anticipatory planning of digit forces and placement. Furthermore, shape cues do not always override density cues. A continuum of visual cues, such as those alluding to shape and density, need to be integrated.
Affiliation(s)
- Trevor Lee-Miller
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
- Michelle Marneweck
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona, United States of America
- Andrew M. Gordon
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
26
Parallel specification of competing sensorimotor control policies for alternative action options. Nat Neurosci 2016; 19:320-326. [PMID: 26752159; DOI: 10.1038/nn.4214]
Abstract
Recent theory proposes that the brain, when confronted with several action possibilities, prepares multiple competing movements before deciding among them. Supporting psychophysical evidence for this idea comes from the observation that when reaching towards multiple potential targets, the initial movement is directed towards the average location of the targets, consistent with multiple prepared reaches being executed simultaneously. However, reach planning involves far more than specifying movement direction; it requires the specification of a sensorimotor control policy that sets feedback gains shaping how the motor system responds to errors induced by noise or external perturbations. Here we found that, when a subject is reaching towards multiple potential targets, the feedback gain corresponds to an average of the gains specified when reaching to each target presented alone. Our findings provide evidence that the brain, when presented with multiple action options, computes multiple competing sensorimotor control policies in parallel before implementing one of them.
27
Marneweck M, Knelange E, Lee-Miller T, Santello M, Gordon AM. Generalization of Dexterous Manipulation Is Sensitive to the Frame of Reference in Which It Is Learned. PLoS One 2015; 10:e0138258. [PMID: 26376089; PMCID: PMC4573321; DOI: 10.1371/journal.pone.0138258]
Abstract
Studies have shown that internal representations of manipulations of objects with asymmetric mass distributions that are generated within a specific orientation are not generalizable to novel orientations, i.e., subjects fail to prevent object roll on their first grasp-lift attempt of the object following 180° object rotation. This suggests that representations of these manipulations are specific to the reference frame in which they are formed. However, it is unknown whether that reference frame is specific to the hand, the body, or both, because rotating the object 180° modifies the relation between object and body as well as object and hand. An alternative, untested explanation for the above failure to generalize learned manipulations is that any rotation will disrupt grasp performance, regardless of whether the reference frame in which the manipulation was learned is maintained or modified. We examined the effect of rotations that (1) maintain and (2) modify relations between object and body, and object and hand, on the generalizability of learned two-digit manipulation of an object with an asymmetric mass distribution. Following rotations that maintained the relation between object and body and object and hand (e.g., rotating the object and subject 180°), subjects continued to use appropriate digit placement and load force distributions, thus generating sufficient compensatory moments to minimize object roll. In contrast, following rotations that modified the relation between (1) object and hand (e.g., rotating the hand around to the opposite object side), (2) object and body (e.g., rotating subject and hand 180°), or (3) both (e.g., rotating the subject 180°), subjects used the same, yet inappropriate digit placement and load force distribution, as those used prior to the rotation. Consequently, the compensatory moments were insufficient to prevent large object rolls. These findings suggest that representations of learned manipulation of objects with asymmetric mass distributions are specific to the body- and hand-reference frames in which they were learned.
Affiliation(s)
- Michelle Marneweck
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
- Elisabeth Knelange
- MOVE Research Institute, Faculty of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, North Holland, The Netherlands
- Trevor Lee-Miller
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona, United States of America
- Andrew M. Gordon
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York, United States of America
28
Yeo SH, Wolpert DM, Franklin DW. Coordinate Representations for Interference Reduction in Motor Learning. PLoS One 2015; 10:e0129388. [PMID: 26067480] [PMCID: PMC4466349] [DOI: 10.1371/journal.pone.0129388]
Abstract
When opposing force fields are presented alternately or randomly across trials for identical reaching movements, subjects learn neither force field, a behavior termed ‘interference’. Studies have shown that a small difference in the endpoint posture of the limb reduces this interference. However, any difference in the limb’s endpoint location typically changes the hand position, joint angles and the hand orientation, making it ambiguous which of these changes underlies the ability to learn dynamics that normally interfere. Here we examine the extent to which each of these three possible coordinate systems (Cartesian hand position, shoulder and elbow joint angles, or hand orientation) underlies the reduction in interference. Subjects performed goal-directed reaching movements in five different limb configurations designed so that different pairs of these configurations involved a change in only one coordinate system. By specifically assigning clockwise and counter-clockwise force fields to the configurations we could create three different conditions in which the direction of the force field could only be uniquely distinguished in one of the three coordinate systems. We examined the ability to learn the two fields based on each of the coordinate systems. The largest reduction of interference was observed when the field direction was linked to the hand orientation, with smaller reductions in the other two conditions. This result demonstrates that the strongest reduction in interference occurred with changes in the hand orientation, suggesting that hand orientation may have a privileged role in reducing motor interference for changes in the endpoint posture of the limb.
Affiliation(s)
- Sang-Hoon Yeo
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- School of Sport, Exercise and Rehabilitation Sciences, University of Birmingham, Edgbaston, Birmingham, United Kingdom
- Daniel M. Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- David W. Franklin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
29
Wolpert DM. Computations in Sensorimotor Learning. Cold Spring Harb Symp Quant Biol 2015; 79:93-8. [PMID: 25851507] [DOI: 10.1101/sqb.2014.79.024919]
Abstract
Our cognitive abilities can only be expressed on the world through our actions. Here we review the computations underlying the way that the sensorimotor system converts both low-level sensory signals and high-level decisions into action, focusing on the behavioral evidence for the theoretical frameworks. We review recent work that determines how motor memories underlying sensorimotor learning are activated and protected from interference, the role of Bayesian decision theory in sensorimotor control including sources of suboptimality, the role of risk sensitivity in guiding action, and how rapid motor responses may underlie the robustness of the motor system to the vagaries of the world.
Affiliation(s)
- Daniel M Wolpert
- Computational and Biological Learning, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
30
Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline. J Neurosci 2015; 34:13411-21. [PMID: 25274819] [DOI: 10.1523/jneurosci.1489-14.2014]
Abstract
Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process, linked to explicit memory, that adapts and decays quickly, and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning, but that aging effects on the slow process are independent of explicit memory declines.
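The fast and slow processes described here are commonly formalized as a two-state state-space model, in which each state has its own learning rate and retention factor. The following is a minimal sketch of that standard model; the parameter values are illustrative placeholders, not the values fitted in this study.

```python
def dual_rate(n_trials=200, perturbation=1.0,
              A_f=0.92, B_f=0.20, A_s=0.996, B_s=0.02):
    """Simulate a two-state (fast + slow) trial-by-trial adaptation model.

    Both states are driven by the same movement error, but the fast
    state has a high learning rate (B_f) and low retention (A_f),
    while the slow state learns slowly (B_s) and retains well (A_s).
    Parameter values are illustrative, not fitted.
    """
    x_f = x_s = 0.0
    net_adaptation = []
    for _ in range(n_trials):
        error = perturbation - (x_f + x_s)   # error experienced on this trial
        x_f = A_f * x_f + B_f * error        # fast: adapts and decays quickly
        x_s = A_s * x_s + B_s * error        # slow: adapts and decays gradually
        net_adaptation.append(x_f + x_s)
    return net_adaptation

adaptation = dual_rate()
```

In this framing, the reduced retention reported for the older adults corresponds to lowering the relevant retention factor A, which both speeds forgetting and lowers the adaptation asymptote.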
31
Abstract
Bayesian statistics defines how new information, given by a likelihood, should be combined with previously acquired information, given by a prior distribution. Many experiments have shown that humans make use of such priors in cognitive, perceptual, and motor tasks, but where do priors come from? As people never experience the same situation twice, they can only construct priors by generalizing from similar past experiences. Here we examine the generalization of priors over stochastic visuomotor perturbations in reaching experiments. In particular, we look into how the first two moments of the prior, the mean and the variance (uncertainty), generalize. We find that uncertainty appears to generalize differently from the mean of the prior, and an interesting asymmetry arises when the mean and the uncertainty are manipulated simultaneously.
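The core computation, combining a prior with a likelihood, has a simple closed form in the Gaussian case: the posterior mean is a precision-weighted average of the two means, and the posterior variance is smaller than either source alone. A minimal sketch (the specific numbers are hypothetical, not taken from the experiments):

```python
def combine_gaussian(prior_mean, prior_var, like_mean, like_var):
    """Combine a Gaussian prior with a Gaussian likelihood.

    The posterior mean is a precision-weighted average, and the
    posterior variance (1 / total precision) is smaller than both
    the prior and likelihood variances.
    """
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / like_var)
    post_mean = w_prior * prior_mean + (1 - w_prior) * like_mean
    post_var = 1 / (1 / prior_var + 1 / like_var)
    return post_mean, post_var

# A narrow (reliable) prior pulls the estimate strongly toward itself:
mean, var = combine_gaussian(prior_mean=0.0, prior_var=1.0,
                             like_mean=10.0, like_var=4.0)
# mean = 2.0, var = 0.8
```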
32
Berniker M, Mirzaei H, Kording KP. The effects of training breadth on motor generalization. J Neurophysiol 2014; 112:2791-8. [PMID: 25210163] [DOI: 10.1152/jn.00615.2013]
Abstract
To generate new movements, we have to generalize what we have learned from previously practiced movements. An important question, therefore, is how the breadth of training affects generalization: does practicing a broad or narrow range of movements lead to better generalization? We address this question with a force field learning experiment. One group adapted while making many reaches in a small region (narrow group), and another group adapted while making reaches in a large region (broad group). Subsequently, both groups were tested for their ability to generalize without visual feedback. Not surprisingly, the narrow group exhibited smaller adaptation errors, yet they did not generalize any better than the broad group. Path errors during generalization were indistinguishable across the two groups, whereas the broad group exhibited reduced terminal errors. These findings indicate that overall, practicing a variety of movements is advantageous for performance during generalization; movement paths are not hindered, and terminal errors are superior. Moreover, the evidence suggests a dissociation between the ability to generalize information about a novel dynamic disturbance, which generalizes narrowly, and the ability to locate the limb accurately in space, which generalizes broadly.
Affiliation(s)
- Max Berniker
- Rehabilitation Institute of Chicago and Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois
- Hamid Mirzaei
- Department of Computer Science, University of California, Irvine, California
- Konrad P Kording
- Rehabilitation Institute of Chicago and Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois
33
Monjo F, Forestier N. Unexperienced mechanical effects of muscular fatigue can be predicted by the Central Nervous System as revealed by anticipatory postural adjustments. Exp Brain Res 2014; 232:2931-43. [DOI: 10.1007/s00221-014-3975-0]
34
Takahashi C, Watt SJ. Visual-haptic integration with pliers and tongs: signal "weights" take account of changes in haptic sensitivity caused by different tools. Front Psychol 2014; 5:109. [PMID: 24592245] [PMCID: PMC3924038] [DOI: 10.3389/fpsyg.2014.00109]
Abstract
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices.
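Statistically optimal integration of this kind weights each cue by its relative reliability (inverse variance). The sketch below shows how inflating haptic variance, for example via a tool geometry that reduces haptic sensitivity, shifts weight toward vision; the variance values are hypothetical, not measurements from the study.

```python
def optimal_weights(var_vision, var_haptic):
    """Inverse-variance (reliability) weights for optimal cue combination."""
    r_v, r_h = 1.0 / var_vision, 1.0 / var_haptic
    total = r_v + r_h
    return r_v / total, r_h / total

def combined_estimate(size_vision, size_haptic, var_vision, var_haptic):
    """Precision-weighted visual-haptic size estimate."""
    w_v, w_h = optimal_weights(var_vision, var_haptic)
    return w_v * size_vision + w_h * size_haptic

# If a tool gain inflates haptic variance, the visual weight rises:
w_v_equal, _ = optimal_weights(var_vision=1.0, var_haptic=1.0)         # 0.5
w_v_noisy_haptics, _ = optimal_weights(var_vision=1.0, var_haptic=4.0)  # 0.8
```

The key point matching the abstract is that correct weighting requires the brain to track how each tool's geometry changes haptic reliability, not just the remapped size signal itself.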
Affiliation(s)
- Chie Takahashi
- Wolfson Centre for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, UK; Behavioural Brain Science Centre, School of Psychology, University of Birmingham, Birmingham, UK
- Simon J Watt
- Wolfson Centre for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, UK
35
Berniker M, Franklin DW, Flanagan JR, Wolpert DM, Kording K. Motor learning of novel dynamics is not represented in a single global coordinate system: evaluation of mixed coordinate representations and local learning. J Neurophysiol 2013; 111:1165-82. [PMID: 24353296] [PMCID: PMC3949315] [DOI: 10.1152/jn.00493.2013]
Abstract
Successful motor performance requires the ability to adapt motor commands to task dynamics. A central question in movement neuroscience is how these dynamics are represented. Although it is widely assumed that dynamics (e.g., force fields) are represented in intrinsic, joint-based coordinates (Shadmehr R, Mussa-Ivaldi FA. J Neurosci 14: 3208-3224, 1994), recent evidence has questioned this proposal. Here we reexamine the representation of dynamics in two experiments. By testing generalization following changes in shoulder, elbow, or wrist configurations, the first experiment tested for extrinsic, intrinsic, or object-centered representations. No single coordinate frame accounted for the pattern of generalization. Rather, generalization patterns were better accounted for by a mixture of representations or by models that assumed local learning and graded, decaying generalization. A second experiment, in which we replicated the design of an influential study that had suggested encoding in intrinsic coordinates (Shadmehr and Mussa-Ivaldi 1994), yielded similar results. That is, we could not find evidence that dynamics are represented in a single coordinate system. Taken together, our experiments suggest that internal models do not employ a single coordinate system when generalizing and may well be represented as a mixture of coordinate systems, as a single system with local learning, or both.
Affiliation(s)
- Max Berniker
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois
36
Ingram JN, Flanagan JR, Wolpert DM. Context-dependent decay of motor memories during skill acquisition. Curr Biol 2013; 23:1107-12. [PMID: 23727092] [PMCID: PMC3688072] [DOI: 10.1016/j.cub.2013.04.079]
Abstract
Current models of motor learning posit that skill acquisition involves both the formation and decay of multiple motor memories that can be engaged in different contexts [1–9]. Memory formation is assumed to be context dependent, so that errors most strongly update motor memories associated with the current context. In contrast, memory decay is assumed to be context independent, so that movement in any context leads to uniform decay across all contexts. We demonstrate that for both object manipulation and force-field adaptation, contrary to previous models, memory decay is highly context dependent. We show that the decay of memory associated with a given context is greatest for movements made in that context, with more distant contexts showing markedly reduced decay. Thus, both memory formation and decay are strongest for the current context. We propose that this apparently paradoxical organization provides a mechanism for optimizing performance. While memory decay tends to reduce force output [10, 11], memory formation can correct for any errors that arise, allowing the motor system to regulate force output so as to both minimize errors and avoid unnecessary energy expenditure. The motor commands for any given context thus result from a balance between memory formation and decay, while memories for other contexts are preserved.
Highlights:
- Motor learning involves the context-dependent formation of motor memories
- In contrast, memory decay has been assumed to be context independent
- We show memory decay is context dependent, being greatest for the current context
- The balance of formation and decay may be optimal for trading off skill and effort
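The contrast between context-independent and context-dependent decay can be sketched with per-context memories and a similarity kernel, so that decay scales with similarity to the current context. The Gaussian kernel and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import math

def decay_step(memories, current_context, decay_width=30.0, max_decay=0.05):
    """Apply one movement's worth of context-dependent decay.

    Decay is strongest for the context of the current movement and
    falls off with contextual distance. The Gaussian fall-off and the
    parameter values are illustrative choices, not fitted values.
    """
    out = {}
    for ctx, x in memories.items():
        similarity = math.exp(-((ctx - current_context) ** 2)
                              / (2 * decay_width ** 2))
        out[ctx] = x * (1 - max_decay * similarity)
    return out

memories = {0: 1.0, 45: 1.0, 90: 1.0}   # e.g. memories indexed by context angle
after = decay_step(memories, current_context=0)
# The memory at the current context (0) decays most; distant contexts barely decay.
```

Setting decay_width very large recovers the older context-independent assumption, in which every memory decays uniformly on every movement.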
Affiliation(s)
- James N Ingram
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK.
37
Adaptation of lift forces in object manipulation through action observation. Exp Brain Res 2013; 228:221-34. [DOI: 10.1007/s00221-013-3554-9]
38
Separate contributions of kinematic and kinetic errors to trajectory and grip force adaptation when transporting novel hand-held loads. J Neurosci 2013; 33:2229-36. [PMID: 23365258] [DOI: 10.1523/jneurosci.3772-12.2013]
Abstract
Numerous studies of motor learning have examined the adaptation of hand trajectories and grip forces when moving grasped objects with novel dynamics. Such objects initially result in both kinematic and kinetic errors; i.e., mismatches between predicted and actual trajectories and between predicted and actual load forces. Here we investigated the contribution of these errors to both trajectory and grip force adaptation. Participants grasped an object with novel dynamics using a precision grip and moved it between two targets. Kinematic errors could be effectively removed using a force channel to constrain hand motion to a straight line. When moving in the channel, participants learned to modulate grip force in synchrony with load force and this learning generalized when movement speed in the channel was doubled. When the channel was removed, these participants continued to effectively modulate grip force but exhibited substantial kinematic errors, equivalent to those seen in participants who did not previously experience the object in the channel. We also found that the rate of grip force adaptation did not depend on whether the object was initially moved with or without a channel. These results indicate that kinematic errors are necessary for trajectory but not grip force adaptation, and that kinetic errors are sufficient for grip force but not trajectory adaptation. Thus, participants can learn a component of the object's dynamics, used to control grip force, based solely on kinetic errors. However, this knowledge is apparently not accessible or usable for controlling the movement trajectory when the channel is removed.
39
Howard IS, Wolpert DM, Franklin DW. The effect of contextual cues on the encoding of motor memories. J Neurophysiol 2013; 109:2632-44. [PMID: 23446696] [PMCID: PMC3653044] [DOI: 10.1152/jn.00773.2012]
Abstract
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Affiliation(s)
- Ian S Howard
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom.
40
Yan X, Wang Q, Lu Z, Stevenson IH, Körding K, Wei K. Generalization of unconstrained reaching with hand-weight changes. J Neurophysiol 2012; 109:137-46. [PMID: 23054601] [DOI: 10.1152/jn.00498.2012]
Abstract
Studies of motor generalization usually perturb hand reaches by distorting visual feedback with virtual reality or by applying forces with a robotic manipulandum. Whereas such perturbations are useful for studying how the central nervous system adapts and generalizes to novel dynamics, they are rarely encountered in daily life. The most common perturbations that we experience are changes in the weights of objects that we hold. Here, we use a center-out, free-reaching task, in which we can manipulate the weight of a participant's hand to examine adaptation and generalization following naturalistic perturbations. In both trial-by-trial paradigms and block-based paradigms, we find that learning converges rapidly (on a timescale of approximately two trials), and this learning generalizes mostly to movements in nearby directions with a unimodal pattern. However, contrary to studies using more artificial perturbations, we find that the generalization has a strong global component. Furthermore, the generalization is enhanced with repeated exposure of the same perturbation. These results suggest that the familiarity of a perturbation is a major factor in movement generalization and that several theories of the neural control of movement, based on perturbations applied by robots or in virtual reality, may need to be extended by incorporating prior influence that is characterized by the familiarity of the perturbation.
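The generalization pattern described here, mostly local and unimodal around the trained direction but with a strong global component, can be sketched as a locally tuned term plus a direction-independent offset. The additive form and all parameter values are illustrative assumptions, not the fits reported in the paper.

```python
import math

def generalization(angle_deg, local_gain=0.6, global_gain=0.25, width=30.0):
    """Fraction of adaptation transferred to a direction `angle_deg`
    away from the trained direction: a unimodal Gaussian local
    component plus a direction-independent global component.

    Hypothetical parameters for illustration only.
    """
    local = local_gain * math.exp(-(angle_deg ** 2) / (2 * width ** 2))
    return local + global_gain

# Nearby directions inherit most of the adaptation; far directions
# still inherit the global component.
near, far = generalization(10.0), generalization(180.0)
```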
Affiliation(s)
- Xiang Yan
- Department of Psychology, Key Laboratory of Machine Perception, Ministry of Education, Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Peking University, Beijing, China
41
Baugh LA, Hoe E, Flanagan JR. Hand-held tools with complex kinematics are efficiently incorporated into movement planning and online control. J Neurophysiol 2012; 108:1954-64. [DOI: 10.1152/jn.00157.2012]
Abstract
Certain hand-held tools alter the mapping between hand motion and motion of the tool end point that must be controlled in order to perform a task. For example, when using a pool cue, the motion of the cue tip is reversed relative to the hand. Previous studies have shown that the time required to initiate a reaching movement (Fernandez-Ruiz J, Wong W, Armstrong IT, Flanagan JR. Behav Brain Res 219: 8–14, 2011), or correct an ongoing reaching movement (Gritsenko V, Kalaska JF. J Neurophysiol 104: 3084–3104, 2010), is prolonged when the mapping between hand motion and motion of a cursor controlled by the hand is reversed. Here we show that these time costs can be significantly reduced when the reversal is instantiated by a virtual hand-held tool. Participants grasped the near end of a virtual tool, consisting of a rod connecting two circles, and moved the end point to displayed targets. In the reversal condition, the rod translated through, and rotated about, a pivot point such that there was a left-right reversal between hand and end point motion. In the nonreversal control, the tool translated with the hand. As expected, when only the two circles were presented, movement initiation and correction times were much longer in the reversal condition. However, when full vision of the tool was provided, the reaction time cost was almost eliminated. These results indicate that tools with complex kinematics can be efficiently incorporated into sensorimotor control mechanisms used in movement planning and online control.
Affiliation(s)
- Lee A. Baugh
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Erica Hoe
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- J. Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
42
Karl JM, Sacrey LAR, Doan JB, Whishaw IQ. Oral hapsis guides accurate hand preshaping for grasping food targets in the mouth. Exp Brain Res 2012; 221:223-40. [PMID: 22782480] [DOI: 10.1007/s00221-012-3164-y]
Abstract
Preshaping the digits and orienting the hand when reaching to grasp a distal target is proposed to be optimal when guided by vision. A reach-to-grasp movement to an object in one's own mouth is a natural and commonly used movement, but there has been no previous description of how it is performed. The movement requires accuracy but likely depends upon haptic rather than visual guidance, leading to the question of whether the kinematics of this movement are similar to those with vision or whether the movement depends upon an alternate strategy. The present study used frame-by-frame video analysis and linear kinematics to analyze hand movements as participants reached for ethologically relevant food targets placed either at a distal location or in the mouth. When reaching for small and medium-sized food items (blueberries and donut balls) that had maximal lip-to-target contact, hand preshaping was equivalent to that used for visually guided reaching. When reaching for a large food item (orange slice) that extended beyond the edges of the mouth, hand preshaping was suboptimal compared to vision. Nevertheless, hapsis from the reaching hand was used to reshape and reorient the hand after first contact with the large target. The equally precise guidance of hand preshaping under oral hapsis is discussed in relation to the idea that hand preshaping, and its requisite neural circuitry, may have originated under somatosensory control, with secondary access by vision.
Affiliation(s)
- Jenni M Karl
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge AB T1K 3M4, Canada.
43
Abstract
The present study was designed to determine whether manipulation learned with a set of digits can be transferred to grips involving a different number of digits, and possible mechanisms underlying such transfer. The goal of the task was to exert a torque and vertical forces on a visually symmetrical object at object lift onset to balance the external torque caused by asymmetrical mass distribution. Subjects learned this manipulation through consecutive practice using one grip type (two or three digits), after which they performed the same task but with another grip type (e.g., after adding or removing one digit, respectively). Subjects were able to switch grip type without compromising the behavioral outcome (i.e., the direction, timing, and magnitude of the torque exerted on the object was unchanged), despite the use of significantly different digit force-position coordination patterns in the two grip types. Our results support the transfer of learning for anticipatory control of manipulation and indicate that the CNS forms an internal model of the manipulation task independent of the effectors that are used to learn it. We propose that sensory information about the new digit placement (resulting from adding or removing a digit immediately after the switch in grip type) plays an important role in the accurate modulation of new digit force distributions. We discuss our results in relation to studies of manipulation reporting lack of learning transfer and propose a theoretical framework that accounts for failure or success of motor learning generalization.
44
Abstract
The exploits of Martina Navratilova and Roger Federer represent the pinnacle of motor learning. However, when considering the range and complexity of the processes that are involved in motor learning, even the mere mortals among us exhibit abilities that are impressive. We exercise these abilities when taking up new activities - whether it is snowboarding or ballroom dancing - but also engage in substantial motor learning on a daily basis as we adapt to changes in our environment, manipulate new objects and refine existing skills. Here we review recent research in human motor learning with an emphasis on the computational mechanisms that are involved.
45
Ingram JN, Howard IS, Flanagan JR, Wolpert DM. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics. PLoS Comput Biol 2011; 7:e1002196. [PMID: 21980277] [PMCID: PMC3182866] [DOI: 10.1371/journal.pcbi.1002196] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Received: 11/28/2010] [Accepted: 07/30/2011] [Indexed: 11/18/2022] Open
Abstract
Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. 
In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.

Skillful object manipulation is an essential feature of human behavior. How humans process and represent information associated with objects is thus a fundamental question in neuroscience. Here, we examine the representation of the mechanical properties of objects which define the mapping between the forces applied to an object and the motion that results. Knowledge of this mapping, which can change depending on the orientation with which an object is grasped, is essential for skillful manipulation. Subjects performed a virtual object manipulation task by grasping the handle of a novel robotic interface which simulated the dynamics of a familiar object which could be presented at different orientations. Using this task, we show that adaptation to the properties of a particular object is extremely rapid, and that such adaptation is confined to the specific orientation at which the object is experienced. Moreover, the pattern of adaptation observed when the orientation of the object and its mechanical properties were changed from trial-to-trial was reproduced by a model which included multiple representations and a generalization function tuned for object orientation. These results suggest that the skillful manipulation of objects with familiar dynamics is mediated by multiple context-specific representations.
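The multiple-context state-space account described in this abstract can be illustrated with a toy model: each visual orientation has its own adaptive state, a single-rate rule updates states from trial error, and generalization between contexts falls off with orientation difference. This is a minimal sketch for intuition, not the authors' implementation; the retention rate, learning rate, and Gaussian tuning width used here are hypothetical.

```python
import math

def gaussian_tuning(delta_deg, width=30.0):
    """Generalization between contexts as a function of orientation difference (degrees)."""
    return math.exp(-0.5 * (delta_deg / width) ** 2)

def simulate(orientations, trials, retention=0.99, learning_rate=0.3):
    """Single-rate, multiple-context state-space model.

    orientations: orientation (degrees) associated with each context
    trials: sequence of (context_index, perturbation) pairs
    Returns the per-trial error (perturbation minus current state).
    """
    states = [0.0] * len(orientations)
    errors = []
    for ctx, perturbation in trials:
        # Error on this trial is the uncompensated part of the perturbation
        error = perturbation - states[ctx]
        errors.append(error)
        # Single-rate update: every context's state decays toward zero
        # (retention < 1) and learns from the error, weighted by how
        # strongly it generalizes from the experienced orientation.
        for i in range(len(states)):
            g = gaussian_tuning(orientations[i] - orientations[ctx])
            states[i] = retention * states[i] + learning_rate * g * error
    return errors

# Example: adapt at 0 degrees, then probe at 90 degrees.
# Errors shrink rapidly at the trained orientation, but the first probe
# at 90 degrees shows a large error, i.e., limited generalization.
errs = simulate([0.0, 90.0], [(0, 1.0)] * 30 + [(1, 1.0)] * 5)
```

Because retention is below 1, states of contexts that are not currently being experienced decay slowly during exposure to one context, which mirrors the abstract's prediction of gradual de-adaptation across contexts.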
Affiliation(s)
- James N Ingram
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom.
46
Parikh PJ, Cole KJ. Limited persistence of the sensorimotor memory when transferred across prehension tasks. Neurosci Lett 2011; 494:94-8. [PMID: 21371526] [DOI: 10.1016/j.neulet.2011.02.066] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Received: 12/09/2010] [Revised: 01/25/2011] [Accepted: 02/23/2011] [Indexed: 11/17/2022]
Abstract
A recent study has shown that the sensorimotor memory for the fingertip forces used to grasp and lift an object can be shared across two prehension tasks. However, the persistence (or decay) of these memory resources is not known. Reports of within-task sensorimotor memory indicate persistence of lifting forces, with evidence for reduced persistence of grip forces. Here we investigated the temporal dynamics of the transfer of memory related to vertical lifting forces across prehension tasks. Young adult participants in two separate experimental groups first held the object on their palm and 'hefted' it (moved it up and down), then lifted it with the dominant hand using a precision grip (thumb and finger) after a delay of either 10 s or 20 min. The Control group lifted the object with the dominant hand using a precision grip and then did so again 20 min later. The Control group used higher load force rates (LFR) for their first lift compared to subsequent lifts, both before and after the 20-min delay. This suggests that the Control group initially overestimated the weight of the object, corrected their LFRs, and then was able to retain this corrected force scaling after the 20-min delay. The Experimental 10-second delay group accurately scaled their LFRs upon their first lift, indicating that they obtained an accurate memory for LFR scaling during hefting, and transferred it to the lift task. In contrast, the Experimental 20-minute delay group was unable to scale their LFRs upon their first lift, as indicated by high LFRs that were no different from those of the Control group. Thus, the memory related to the production of LFR remained stable over 20 min when obtained from the same task, while that obtained from a different task decayed completely within 20 min.
This decay may reflect a weakening of sensorimotor memories for prehension forces, which depend not only on memory for the object's mechanical properties but also on sensory signals generated during the prehension act and on strong visual prior estimates of the size-weight relationship.
Affiliation(s)
- Pranav J Parikh
- Motor Control Laboratories, S.501 Field House, The University of Iowa, Iowa City, IA 52242, USA.
47
Abstract
Human sensorimotor control has been predominantly studied using fixed tasks performed under laboratory conditions. This approach has greatly advanced our understanding of the mechanisms that integrate sensory information and generate motor commands during voluntary movement. However, experimental tasks necessarily restrict the range of behaviors that are studied. Moreover, the processes studied in the laboratory may not be the same processes that subjects call upon during their everyday lives. Naturalistic approaches thus provide an important adjunct to traditional laboratory-based studies. For example, wearable self-contained tracking systems can allow subjects to be monitored outside the laboratory, where they engage spontaneously in natural everyday behavior. Similarly, advances in virtual reality technology allow laboratory-based tasks to be made more naturalistic. Here, we review naturalistic approaches, including perspectives from psychology and visual neuroscience, as well as studies and technological advances in the field of sensorimotor control.
Affiliation(s)
- James N Ingram
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom.
48
Wolpert DM, Flanagan JR. Q&A: Robotics as a tool to understand the brain. BMC Biol 2010; 8:92. [PMID: 20659354] [PMCID: PMC2909176] [DOI: 10.1186/1741-7007-8-92] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Received: 06/08/2010] [Accepted: 06/28/2010] [Indexed: 11/11/2022] Open