1. Nisky I, Makin TR. A neurocognitive pathway for engineering artificial touch. Sci Adv 2024;10:eadq6290. PMID: 39693427; PMCID: PMC11654688; DOI: 10.1126/sciadv.adq6290.
Abstract
Artificial haptics has the potential to revolutionize the way we integrate physical and virtual technologies in our daily lives, with implications for teleoperation, motor skill acquisition, rehabilitation, gaming, interpersonal communication, and beyond. Here, we delve into the intricate interplay between the somatosensory system and engineered haptic inputs for perception and action. We critically examine the sensory feedback's fidelity and the cognitive demands of interfacing with these systems. We examine how artificial touch interfaces could be redesigned to better align with human sensory, motor, and cognitive systems, emphasizing the dynamic and context-dependent nature of sensory integration. We consider the various learning processes involved in adapting to artificial haptics, highlighting the need for interfaces that support both explicit and implicit learning mechanisms. We emphasize the need for technologies that are not only physiologically biomimetic but also behaviorally and cognitively congruent with the user, affording a range of alternative solutions to users' needs.
Affiliation(s)
- Ilana Nisky
  - Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer Sheva, Israel
  - The School of Brain Sciences and Cognition, Ben-Gurion University of the Negev, Israel
- Tamar R. Makin
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
2. Dominijanni G, Pinheiro DL, Pollina L, Orset B, Gini M, Anselmino E, Pierella C, Olivier J, Shokur S, Micera S. Human motor augmentation with an extra robotic arm without functional interference. Sci Robot 2023;8:eadh1438. PMID: 38091424; DOI: 10.1126/scirobotics.adh1438.
Abstract
Extra robotic arms (XRAs) are gaining interest in neuroscience and robotics, offering potential tools for daily activities. However, this compelling opportunity poses new challenges for sensorimotor control strategies and human-machine interfaces (HMIs). A key unsolved challenge is allowing users to proficiently control XRAs without hindering their existing functions. To address this, we propose a pipeline to identify suitable HMIs given a defined task to accomplish with the XRA. Following such a scheme, we assessed a multimodal motor HMI based on gaze detection and diaphragmatic respiration in a purposely designed modular neurorobotic platform integrating virtual reality and a bilateral upper limb exoskeleton. Our results show that the proposed HMI does not interfere with speaking or visual exploration and that it can be used to control an extra virtual arm independently from the biological ones or in coordination with them. Participants showed significant improvements in performance with daily training and retention of learning, with no further improvements when artificial haptic feedback was provided. As a final proof of concept, naïve and experienced participants used a simplified version of the HMI to control a wearable XRA. Our analysis indicates how the presented HMI can be effectively used to control XRAs. The observation that experienced users achieved a success rate 22.2% higher than that of naïve users, combined with the result that naïve users showed average success rates of 74% when they first engaged with the system, endorses the viability of both the virtual reality-based testing and training and the proposed pipeline.
Affiliation(s)
- Giulia Dominijanni
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Daniel Leal Pinheiro
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Department of Neurology and Neurosurgery, Division of Neuroscience, Universidade Federal de São Paulo, São Paulo, Brazil
- Leonardo Pollina
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Bastien Orset
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Martina Gini
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
  - Neuroelectronic Interfaces, Faculty of Electrical Engineering and IT, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen 52074, Germany
- Eugenio Anselmino
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Camilla Pierella
  - Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, and Maternal and Children's Sciences (DINOGMI), University of Genoa, Genoa, Italy
- Jérémy Olivier
  - Institute for Industrial Sciences and Technologies, Haute Ecole du Paysage, d'Ingénierie et d'Architecture (HEPIA), HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
- Solaiman Shokur
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Silvestro Micera
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
3. Pinardi M, Di Stefano N, Di Pino G, Spence C. Exploring crossmodal correspondences for future research in human movement augmentation. Front Psychol 2023;14:1190103. PMID: 37397340; PMCID: PMC10308310; DOI: 10.3389/fpsyg.2023.1190103.
Abstract
"Crossmodal correspondences" are the consistent mappings between perceptual dimensions or stimuli from different sensory domains, which have been widely observed in the general population and investigated by experimental psychologists in recent years. At the same time, the emerging field of human movement augmentation (i.e., the enhancement of an individual's motor abilities by means of artificial devices) has been struggling with the question of how to relay supplementary information concerning the state of the artificial device and its interaction with the environment to the user, which may help the latter to control the device more effectively. To date, this challenge has not been explicitly addressed by capitalizing on our emerging knowledge concerning crossmodal correspondences, despite these being tightly related to multisensory integration. In this perspective paper, we introduce some of the latest research findings on the crossmodal correspondences and their potential role in human augmentation. We then consider three ways in which the former might impact the latter, and the feasibility of this process. First, crossmodal correspondences, given the documented effect on attentional processing, might facilitate the integration of device status information (e.g., concerning position) coming from different sensory modalities (e.g., haptic and visual), thus increasing their usefulness for motor control and embodiment. Second, by capitalizing on their widespread and seemingly spontaneous nature, crossmodal correspondences might be exploited to reduce the cognitive burden caused by additional sensory inputs and the time required for the human brain to adapt the representation of the body to the presence of the artificial device. Third, to accomplish the first two points, the benefits of crossmodal correspondences should be maintained even after sensory substitution, a strategy commonly used when implementing supplementary feedback.
Affiliation(s)
- Mattia Pinardi
  - NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Nicola Di Stefano
  - Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Giovanni Di Pino
  - NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Charles Spence
  - Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
4. Amoruso E, Dowdall L, Kollamkulam MT, Ukaegbu O, Kieliba P, Ng T, Dempsey-Jones H, Clode D, Makin TR. Intrinsic somatosensory feedback supports motor control and learning to operate artificial body parts. J Neural Eng 2022;19:016006. PMID: 34983040; PMCID: PMC10431236; DOI: 10.1088/1741-2552/ac47d9.
Abstract
Objective. Considerable resources are being invested to enhance the control and usability of artificial limbs through the delivery of unnatural forms of somatosensory feedback. Here, we investigated whether intrinsic somatosensory information from the body part(s) remotely controlling an artificial limb can be leveraged by the motor system to support control and skill learning. Approach. We used local anaesthetic to attenuate somatosensory inputs to the big toes while participants learned to operate, through pressure sensors, a toe-controlled and hand-worn robotic extra finger. Motor learning outcomes were compared against a control group who received sham anaesthetic and quantified in three different task scenarios: while operating in isolation from, in synchronous coordination with, and in collaboration with, the biological fingers. Main results. Both groups were able to learn to operate the robotic extra finger, presumably due to the abundance of visual feedback and other relevant sensory cues. Importantly, the availability of displaced somatosensory cues from the distal bodily controllers facilitated the acquisition of isolated robotic finger movements, the retention and transfer of synchronous hand-robot coordination skills, and performance under cognitive load. Motor performance was not impaired by toe anaesthesia when tasks involved close collaboration with the biological fingers, indicating that the motor system can close the sensory feedback gap by dynamically integrating task-intrinsic somatosensory signals from multiple, and even distal, body parts. Significance. Together, our findings demonstrate that there are multiple natural avenues to provide intrinsic surrogate somatosensory information to support motor control of an artificial body part, beyond artificial stimulation.
Affiliation(s)
- E Amoruso
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- L Dowdall
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- M T Kollamkulam
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- O Ukaegbu
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
  - East London NHS Foundation Trust, London, United Kingdom
- P Kieliba
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- T Ng
  - Royal Free London NHS Foundation Trust, London, United Kingdom
- H Dempsey-Jones
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- D Clode
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- T R Makin
  - Institute of Cognitive Neuroscience, University College London, London, United Kingdom
5. Dominijanni G, Shokur S, Salvietti G, Buehler S, Palmerini E, Rossi S, De Vignemont F, d'Avella A, Makin TR, Prattichizzo D, Micera S. The neural resource allocation problem when enhancing human bodies with extra robotic limbs. Nat Mach Intell 2021. DOI: 10.1038/s42256-021-00398-9.
6. Luo J, Gong Z, Su Y, Ruan L, Zhao Y, Asada HH, Fu C. Modeling and balance control of supernumerary robotic limb for overhead tasks. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3067850.
7. Song H, Asada HH. Integrated voluntary-reactive control of a human-SuperLimb hybrid system for hemiplegic patient support. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3058926.