1. Murdison TS, Standage DI, Lefèvre P, Blohm G. Effector-dependent stochastic reference frame transformations alter decision-making. J Vis 2022; 22:1. [PMID: 35816048; PMCID: PMC9284468; DOI: 10.1167/jov.22.8.1]
Abstract
Psychophysical, motor control, and modeling studies have revealed that sensorimotor reference frame transformations (RFTs) add variability to transformed signals. For perceptual decision-making, this phenomenon could decrease the fidelity of a decision signal's representation or, alternatively, improve its processing through stochastic facilitation. We investigated these two hypotheses under various sensorimotor RFT constraints. Participants performed a time-limited, forced-choice motion discrimination task under eight combinations of head roll and/or stimulus rotation while responding either with a saccade or a button press. This paradigm, together with the use of a decision model, allowed us to parameterize and correlate perceptual decision behavior with eye-, head-, and shoulder-centered sensory and motor reference frames. Misalignments between sensory and motor reference frames produced systematic changes in reaction time and response accuracy. For some conditions, these changes were consistent with a degradation of motion evidence commensurate with a decrease in stimulus strength in our model framework. Differences in participant performance were explained by a continuum of eye–head–shoulder representations of accumulated motion evidence, with an eye-centered bias during saccades and a shoulder-centered bias during button presses. In addition, we observed evidence for stochastic facilitation during head-rolled conditions (i.e., head roll resulted in faster, more accurate decisions about oblique motion for a given stimulus–response misalignment). We show that perceptual decision-making and stochastic RFTs are inseparable within the present context: simply rolling one's head alters perceptual decision-making in a way that is predicted by stochastic RFTs.
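The abstract describes the decision model only at this level of detail. As a rough illustration of the core idea, the sketch below simulates a two-boundary drift-diffusion process in which the noise added by a stochastic RFT is treated as equivalent to a weaker stimulus (lower drift), yielding slower, less accurate decisions. All parameter values are illustrative assumptions, not the authors' fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, noise_sd=1.0, bound=1.0, dt=0.002, max_t=3.0, n_trials=2000):
    """Two-boundary drift-diffusion model: returns (accuracy, mean RT in s)."""
    n_steps = int(max_t / dt)
    steps = drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    hit = np.abs(paths) >= bound
    decided = hit.any(axis=1)
    first = np.argmax(hit, axis=1)                 # index of first boundary crossing
    rts = (first[decided] + 1) * dt
    correct = paths[decided, first[decided]] > 0   # upper bound = correct choice
    return correct.mean(), rts.mean()

# Aligned sensory and motor frames: full-strength motion evidence.
acc_aligned, rt_aligned = simulate_ddm(drift=1.5)
# Misaligned frames: the stochastic RFT degrades the evidence, which in this
# framework acts like a weaker stimulus (lower drift).
acc_misaligned, rt_misaligned = simulate_ddm(drift=0.8)
```

With these illustrative settings the misaligned condition comes out slower and less accurate, the qualitative pattern the abstract reports for some conditions.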
Affiliation(s)
- T Scott Murdison: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Dominic I Standage: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada; School of Psychology, University of Birmingham, UK
- Philippe Lefèvre: ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
2. Hadjidimitrakis K, Ghodrati M, Breveglieri R, Rosa MGP, Fattori P. Neural coding of action in three dimensions: Task- and time-invariant reference frames for visuospatial and motor-related activity in parietal area V6A. J Comp Neurol 2020; 528:3108-3122. [PMID: 32080849; DOI: 10.1002/cne.24889]
Abstract
Goal-directed movements involve a series of neural computations that compare the sensory representations of goal location and effector position, and transform these into motor commands. Neurons in posterior parietal cortex (PPC) control several effectors (e.g., eye, hand, foot) and encode goal location in a variety of spatial coordinate systems, including those anchored to gaze direction, and to the positions of the head, shoulder, or hand. However, there is little evidence on whether reference frames depend also on the effector and/or type of motor response. We addressed this issue in macaque PPC area V6A, where previous reports using a fixate-to-reach in depth task, from different starting arm positions, indicated that most units use mixed body/hand-centered coordinates. Here, we applied singular value decomposition and gradient analyses to characterize the reference frames in V6A while the animals, instead of arm reaching, performed a nonspatial motor response (hand lift). We found that most neurons used mixed body/hand coordinates, instead of "pure" body-, or hand-centered coordinates. During the task progress the effect of hand position on activity became stronger compared to target location. Activity consistent with body-centered coding was present only in a subset of neurons active early in the task. Applying the same analyses to a population of V6A neurons recorded during the fixate-to-reach task yielded similar results. These findings suggest that V6A neurons use consistent reference frames between spatial and nonspatial motor responses, a functional property that may allow the integration of spatial awareness and movement control.
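The singular value decomposition and gradient analyses mentioned above can be illustrated on idealized units (the tuning functions and grid sizes below are assumptions for the sketch, not the recorded data). A response matrix over target × hand positions is rank-1 (separable) for a purely body-centered unit, while a hand-centered unit produces diagonal structure whose gradient orientation sits near 45°:

```python
import numpy as np

targets = np.linspace(-30, 30, 9)   # target position along one axis (body frame)
hands = np.linspace(-30, 30, 9)     # initial hand position along the same axis
T, H = np.meshgrid(targets, hands, indexing="ij")

def tuning(x, width=12.0):
    """Gaussian tuning around 0."""
    return np.exp(-0.5 * (x / width) ** 2)

r_body = tuning(T)        # body-centered unit: depends on target only
r_hand = tuning(T - H)    # hand-centered unit: depends on target relative to hand

def separability(R):
    """Fraction of response variance captured by the first singular vector pair."""
    s = np.linalg.svd(R, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

def gradient_orientation(R):
    """Magnitude-weighted mean gradient orientation of the response matrix (deg).
    ~0 deg -> target (body-centered) coding; ~45 deg -> hand-relative coding."""
    dT, dH = np.gradient(R)
    ang = np.arctan2(dH, dT)
    w = np.hypot(dT, dH)
    # double-angle averaging because orientation is axial (theta == theta + 180 deg)
    return np.degrees(0.5 * np.angle(np.sum(w * np.exp(2j * ang))))
```

Here `separability(r_body)` is 1 (an exactly separable matrix), `separability(r_hand)` is clearly lower, and the gradient orientation cleanly separates the two coding schemes; real neurons fall on a continuum between them.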
Affiliation(s)
- Kostas Hadjidimitrakis: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy; Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
- Masoud Ghodrati: Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
- Rossella Breveglieri: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Marcello G P Rosa: Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
- Patrizia Fattori: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
3. Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. [DOI: 10.1016/j.neuroimage.2019.04.074]
4. Pugach G, Pitti A, Tolochko O, Gaussier P. Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events. Front Neurorobot 2019; 13:5. [PMID: 30899217; PMCID: PMC6416207; DOI: 10.3389/fnbot.2019.00005]
Abstract
Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be either eye-, arm-, or target-centered. In the brain, gain-field (GF) neurons in the parietal cortex are involved in computing the spatial transformations needed to align tactile, visual, and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to dynamically represent the location of body parts in visual space (i.e., the body schema) and of nearby targets (i.e., the peripersonal space). Along these lines, we propose a neural model based on GF neurons that integrates tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons by (1) dynamically encoding the body schema of our robotic arm without any visual tags on it and (2) estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
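A toy illustration of the multiplicative gain-field mechanism described above (the unit counts, tuning widths, and the idealized arm-anchored target `stim + joint` are assumptions for the sketch, not the paper's architecture): a Gaussian visual receptive field multiplied by a joint-angle-dependent gain yields a basis from which a purely linear readout can recover the stimulus in a new frame.

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized 1D setup: a visual stimulus position (deg) and one arm joint angle
# (deg). Each receptive field exists in two copies: one unmodulated, and one
# whose output is multiplicatively scaled by a planar function of joint angle,
# i.e., a gain-field unit.
n_rf = 120
centers = rng.uniform(-40, 40, n_rf)                    # visual RF centers
pref_joint = rng.uniform(-40, 40, n_rf)                 # joint angle where gain = 1
slope = rng.uniform(0.01, 0.03, n_rf) * rng.choice([-1, 1], n_rf)

def population_response(stim, joint):
    rf = np.exp(-0.5 * ((stim - centers) / 12.0) ** 2)  # Gaussian visual RF
    gain = 1.0 + slope * (joint - pref_joint)           # planar gain field
    return np.concatenate([rf, rf * gain])              # multiplicative binding

# Because of the multiplicative interaction, a linear readout of this
# population can re-express the stimulus in an arm-anchored frame
# (idealized here as stim + joint).
stims = rng.uniform(-30, 30, 600)
joints = rng.uniform(-30, 30, 600)
R = np.array([population_response(s, j) for s, j in zip(stims, joints)])
w, *_ = np.linalg.lstsq(R, stims + joints, rcond=None)

def decode(stim, joint):
    return float(population_response(stim, joint) @ w)
```

An additive combination of the same visual and postural signals could not support this linear re-referencing; the multiplication is the computational point usually made for gain fields.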
Affiliation(s)
- Ganna Pugach: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Alexandre Pitti: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Olga Tolochko: Faculty of Electric Power Engineering and Automation, National Technical University of Ukraine Kyiv Polytechnic Institute, Kyiv, Ukraine
- Philippe Gaussier: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
5. Bosco A, Piserchia V, Fattori P. Multiple Coordinate Systems and Motor Strategies for Reaching Movements When Eye and Hand Are Dissociated in Depth and Direction. Front Hum Neurosci 2017; 11:323. [PMID: 28690504; PMCID: PMC5481402; DOI: 10.3389/fnhum.2017.00323]
Abstract
Reaching behavior represents one of the basic aspects of human cognitive abilities important for interaction with the environment. Reaching movements towards visual objects are controlled by mechanisms based on coordinate systems that transform the spatial information of target location into an appropriate motor response. Although recent works have extensively studied the encoding of target position for reaching in three-dimensional space at the behavioral level, the combined analysis of reach errors and movement variability has so far been addressed by only a few studies. Here we did so by testing 12 healthy participants in an experiment where reaching targets were presented at different depths and directions in foveal and peripheral viewing conditions. Each participant executed a memory-guided task in which they had to reach to the memorized position of the target. A combination of vector and gradient analysis, novel for behavioral data, was applied to analyze the patterns of reach errors for different combinations of eye/target positions. The results showed reach error patterns based on both eye- and space-centered coordinate systems: more biased towards a space-centered representation in depth, and mixed between space- and eye-centered representations in direction. We also calculated movement variability to describe the different trajectory strategies adopted by participants while reaching to the different eye/target configurations tested. In direction, the distribution of variability differed between configurations that shared the same relative eye/target configuration, whereas it was similar between configurations that shared the same spatial position of the targets. In depth, the variability showed more similar distributions in both pairs of eye/target configurations tested. These results suggest that reaching movements executed in geometries that require dissociations of hand and eye in direction and depth engage multiple coordinate systems and different trajectory strategies, according to the eye/target configuration and the dimension of space considered.
Affiliation(s)
- Annalisa Bosco: Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Valentina Piserchia: Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Patrizia Fattori: Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
6. Piserchia V, Breveglieri R, Hadjidimitrakis K, Bertozzi F, Galletti C, Fattori P. Mixed Body/Hand Reference Frame for Reaching in 3D Space in Macaque Parietal Area PEc. Cereb Cortex 2017; 27:1976-1990. [PMID: 26941385; DOI: 10.1093/cercor/bhw039]
Abstract
The neural correlates of coordinate transformations from vision to action are expressed in the activity of posterior parietal cortex (PPC). It has been demonstrated that, among the medial-most areas of the PPC, reaching targets are represented mainly in hand-centered coordinates in area PE, and in eye-centered, body-centered, and mixed body/hand-centered coordinates in area V6A. Here, we assessed whether neurons of area PEc, located between V6A and PE in the medial PPC, encode targets in body-centered, hand-centered, or mixed frames of reference during planning and execution of reaching. We studied 104 PEc cells in 3 Macaca fascicularis monkeys. The animals performed a reaching task toward foveated targets located at different depths and directions in darkness, with the hand starting from 2 positions located at different depths, one next to the trunk and the other far from it. We show that most PEc neurons encoded targets in a mixed body/hand-centered frame of reference. Although the effect of hand position was often rather strong, it was not as strong as reported previously in area PE. Our results suggest that area PEc represents an intermediate node in the gradual transformation from vision to action that takes place in the reaching network of the dorsomedial PPC.
Affiliation(s)
- Valentina Piserchia: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Rossella Breveglieri: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Kostas Hadjidimitrakis: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy; Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
- Federica Bertozzi: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Claudio Galletti: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Patrizia Fattori: Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
7. Hadjidimitrakis K, Bertozzi F, Breveglieri R, Galletti C, Fattori P. Temporal stability of reference frames in monkey area V6A during a reaching task in 3D space. Brain Struct Funct 2016; 222:1959-1970. [DOI: 10.1007/s00429-016-1319-5]
8. Dowiasch S, Blohm G, Bremmer F. Neural correlate of spatial (mis-)localization during smooth eye movements. Eur J Neurosci 2016; 44:1846-55. [PMID: 27177769; PMCID: PMC5089592; DOI: 10.1111/ejn.13276]
Abstract
The dependence of neuronal discharge on the position of the eyes in the orbit is a functional characteristic of many visual cortical areas of the macaque. It has been suggested that these eye-position signals provide relevant information for a coordinate transformation of visual signals into a non-eye-centered frame of reference. This transformation could be an integral part of achieving visual perceptual stability across eye movements. Previous studies demonstrated close-to-veridical eye-position decoding during stable fixation, as well as characteristic erroneous decoding across saccadic eye movements. Here we aimed to decode eye position during smooth pursuit. We recorded neural activity in macaque area VIP during steady fixation, saccades, and smooth pursuit, and investigated the temporal and spatial accuracy of eye position as decoded from the neuronal discharges. Confirming previous results, the activity of the majority of neurons depended linearly on horizontal and vertical eye position. The application of a previously introduced computational approach (isofrequency decoding) allowed eye-position decoding with considerable accuracy during steady fixation. We applied the same decoder to the activity of the same neurons during smooth pursuit. On average, the decoded signal led the current eye position. A model combining this constant lead of the decoded eye position with a previously described attentional bias ahead of the pursuit target describes the asymmetric mislocalization pattern for briefly flashed stimuli during smooth pursuit eye movements found in human behavioral studies.
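The isofrequency decoding approach itself is specified in the earlier work the abstract cites. As a simpler stand-in that conveys the same idea (linear eye-position dependence read out from a population), the sketch below trains an ordinary least-squares decoder on simulated VIP-like units; cell counts, slopes, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated population whose rates depend linearly on 2D eye position,
# as reported for the majority of VIP neurons.
n_cells = 60
baseline = rng.uniform(10, 30, n_cells)
slope_h = rng.uniform(-0.6, 0.6, n_cells)   # rate change per deg, horizontal
slope_v = rng.uniform(-0.6, 0.6, n_cells)   # rate change per deg, vertical

def rates(eh, ev, noise_sd=1.0):
    """Single-trial population activity at eye position (eh, ev), in deg."""
    return baseline + slope_h * eh + slope_v * ev + rng.normal(0, noise_sd, n_cells)

# Fit a linear decoder from population rates to eye position on fixation data.
fix_pos = rng.uniform(-15, 15, (200, 2))
R = np.array([rates(eh, ev) for eh, ev in fix_pos])
X = np.column_stack([R, np.ones(len(R))])           # rates plus intercept
W, *_ = np.linalg.lstsq(X, fix_pos, rcond=None)

def decode(r):
    """Decode (horizontal, vertical) eye position from one population vector."""
    return np.append(r, 1.0) @ W

est = decode(rates(8.0, -4.0))   # single-trial estimate of a new fixation
```

Applying such a decoder to time-resolved activity during pursuit is the kind of analysis that revealed the decoded signal leading the eye; that temporal aspect is not modeled here.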
Affiliation(s)
- Stefan Dowiasch: Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Frank Bremmer: Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
9. Reference frames for reaching when decoupling eye and target position in depth and direction. Sci Rep 2016; 6:21646. [PMID: 26876496; PMCID: PMC4753502; DOI: 10.1038/srep21646]
Abstract
Spatial representations in cortical areas involved in reaching movements were traditionally studied in a frontoparallel plane, where the two-dimensional target location and the movement direction were the only variables to consider in neural computations. No studies so far have characterized the reference frames for reaching considering both depth and directional signals. Here we recorded from single neurons of the medial posterior parietal area V6A during a reaching task where the fixation point and reaching targets were decoupled in direction and depth. We found a prevalent mixed encoding of target position, with eye-centered and spatiotopic representations differently balanced within the same neuron. Depth was stronger in defining the reference frame of eye-centered cells, whereas direction was stronger in defining that of spatiotopic cells. The predominance of various types of mixed encoding suggests that depth and direction signals are processed on the basis of flexible coordinate systems to ensure an optimal motor response.
10. Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. [PMID: 26834587; PMCID: PMC4718998; DOI: 10.3389/fnint.2015.00072]
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze-angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain-field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain-field shapes have never been well established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain-field shapes. We then used a genetic algorithm to modify the characteristics of model gain-field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found was that model AIT gain fields were more foveally dominated; that is, they operated over smaller spatial scales and with smaller dispersions than those in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain-field characteristics across cortical areas may underlie differences in the representation of space.
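The reconstruction pipeline described above can be sketched with classical (Torgerson) multidimensional scaling. The model population here has purely planar gain fields, one of the shapes the authors tested; the grid, unit count, and gain parameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Eye positions (deg) whose spatial layout we try to recover.
grid = np.array([(x, y) for x in (-10, 0, 10) for y in (-10, 0, 10)], float)

# Population of model units with planar eye-position gain fields:
# response = baseline + (gain-field normal) . (eye position).
n_units = 100
normals = rng.normal(size=(n_units, 2))
responses = grid @ normals.T + rng.uniform(20, 40, n_units)  # 9 positions x 100 units

# Dissimilarity between population response vectors at each pair of positions.
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=2)

# Classical MDS: double-center the squared distances, take top eigenvectors.
n = len(grid)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)                 # ascending eigenvalues
coords = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0.0))
```

With planar gain fields, the population distance is an exact linear image of eye-position distance, so `B` has rank 2 and the embedding recovers the 3 × 3 grid up to a linear map; nonplanar gain-field shapes (sigmoidal, elliptical, hyperbolic) distort this geometry, which is what allows the genetic-algorithm fit to discriminate AIT-like from LIP-like populations.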
Affiliation(s)
- Sidney R Lehky: Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno: Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA
11. Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. [PMID: 25475344; PMCID: PMC4346721; DOI: 10.1152/jn.00273.2014]
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
Affiliation(s)
- T Scott Murdison: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq: ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Philippe Lefèvre: ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
12. Hadjidimitrakis K, Bertozzi F, Breveglieri R, Fattori P, Galletti C. Body-centered, mixed, but not hand-centered coding of visual targets in the medial posterior parietal cortex during reaches in 3D space. Cereb Cortex 2013; 24:3209-20. [PMID: 23853212; DOI: 10.1093/cercor/bht181]
Abstract
The frames of reference used by neurons in posterior parietal cortex (PPC) to encode spatial locations during arm reaching movements are a debated topic in modern neurophysiology. Traditionally, target location, encoded in a retinocentric reference frame (RF) in caudal PPC, was assumed to be serially transformed to body-centered and then hand-centered coordinates rostrally. However, recent studies suggest that these transformations occur within a single area. The caudal PPC area V6A has been shown to represent reach targets in eye-centered, body-centered, and combined RFs, but the presence of hand-centered coding had not yet been investigated. To examine this issue, 141 single neurons were recorded from V6A in 2 Macaca fascicularis monkeys while they performed a foveated reaching task in darkness. The targets were presented at different distances and lateralities from the body and were reached from initial hand positions located at different depths. Most V6A cells used body-centered or mixed body- and hand-centered coordinates. Only a few neurons used purely hand-centered coordinates, clearly distinguishing V6A from nearby PPC regions. Our findings support the view of a gradual RF transformation in PPC and also highlight the importance of mixed frames of reference.
Affiliation(s)
- K Hadjidimitrakis: Department of Human and General Physiology, Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- F Bertozzi: Department of Human and General Physiology, Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- R Breveglieri: Department of Human and General Physiology, Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- P Fattori: Department of Human and General Physiology, Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- C Galletti: Department of Human and General Physiology, Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
13. Leclercq G, Lefèvre P, Blohm G. 3D kinematics using dual quaternions: theory and applications in neuroscience. Front Behav Neurosci 2013; 7:7. [PMID: 23443667; PMCID: PMC3576712; DOI: 10.3389/fnbeh.2013.00007]
Abstract
In behavioral neuroscience, many experiments are developed in one or two spatial dimensions, but when scientists tackle problems in three dimensions (3D), they often face new challenges. Results obtained in lower dimensions do not always extend to 3D. In motor planning of eye, gaze, or arm movements, and in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how do we describe the 3D position and orientation of these objects and link them together? We describe how dual quaternions provide a convenient way to express 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions, or combinations of these. We also derive expressions for the velocities of points and lines, as well as for the transformation velocities. We then apply these tools to a motor planning task for manual tracking and to modeling the forward and inverse kinematics of a seven-degree-of-freedom, three-link arm, to demonstrate the usefulness of dual quaternions as a tool for building models in these kinds of applications.
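A minimal sketch of the core point-transformation operation (helper names are hypothetical; the paper's own derivations cover the full dual-quaternion algebra, including line transformations and velocities, which are not reproduced here): encode a rotation-plus-translation as a dual quaternion and apply it to a 3D point.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_rotation_translation(axis, angle, translation):
    """Unit dual quaternion (real part, dual part) for 'rotate, then translate'."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    qr = np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])
    t = np.concatenate([[0.0], translation])
    qd = 0.5 * qmul(t, qr)     # dual part encodes the translation
    return qr, qd

def dq_apply_to_point(qr, qd, p):
    """Apply the rigid transformation to a 3D point: returns R @ p + t."""
    pq = np.concatenate([[0.0], p])
    rotated = qmul(qmul(qr, pq), qconj(qr))[1:]   # quaternion rotation of p
    t = 2.0 * qmul(qd, qconj(qr))[1:]             # recover translation from dual part
    return rotated + t

# Example: rotate 90 degrees about z, then translate by (1, 0, 0).
qr, qd = dq_from_rotation_translation([0, 0, 1], np.pi / 2, [1.0, 0.0, 0.0])
p_new = dq_apply_to_point(qr, qd, np.array([1.0, 0.0, 0.0]))  # -> (1, 1, 0)
```

Composing two transformations is a single dual-quaternion product (real parts multiply as quaternions; the dual part is qr2*qd1 + qd2*qr1), which is what makes chains of eye-, head-, and arm-fixed frames convenient to express.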
Affiliation(s)
- Guillaume Leclercq: Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium