1
Fischer-Janzen A, Wendt TM, Van Laerhoven K. A scoping review of gaze and eye tracking-based control methods for assistive robotic arms. Front Robot AI 2024; 11:1326670. PMID: 38440775; PMCID: PMC10909843; DOI: 10.3389/frobt.2024.1326670.
Abstract
Background: Assistive robotic arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people, such as people with locked-in syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview. Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that have gained interest in recent years. Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted. Results: 39 articles and 6 reviews were included in this article. Characteristics related to the system and study design were extracted and presented, divided into three groups based on the use of eye tracking. Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye tracking-based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.
Affiliation(s)
- Anke Fischer-Janzen: Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Thomas M. Wendt: Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Kristof Van Laerhoven: Ubiquitous Computing, Department of Electrical Engineering and Computer Science, University of Siegen, Siegen, Germany
2
Eden J, Bräcklein M, Ibáñez J, Barsakcioglu DY, Di Pino G, Farina D, Burdet E, Mehring C. Principles of human movement augmentation and the challenges in making it a reality. Nat Commun 2022; 13:1345. PMID: 35292665; PMCID: PMC8924218; DOI: 10.1038/s41467-022-28725-7.
Abstract
Augmenting the body with artificial limbs controlled concurrently with one's natural limbs has long appeared in science fiction, but recent technological and neuroscientific advances have begun to make this possible. By allowing individuals to achieve otherwise impossible actions, movement augmentation could revolutionize medical and industrial applications and profoundly change the way humans interact with the environment. Here, we construct a movement augmentation taxonomy based on what is augmented and how it is achieved. With this framework, we analyze augmentation that extends the number of degrees of freedom; discuss critical features of effective augmentation, such as physiological control signals, sensory feedback, and learning, as well as application scenarios; and propose a vision for the field.
Affiliation(s)
- Jonathan Eden: Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- Mario Bräcklein: Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- Jaime Ibáñez: Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK; BSICoS, IIS Aragón, Universidad de Zaragoza, Zaragoza, Spain; Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London, UK
- Giovanni Di Pino: NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Dario Farina: Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- Etienne Burdet: Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- Carsten Mehring: Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, 79104, Germany; Faculty of Biology, University of Freiburg, Freiburg im Breisgau, 79104, Germany
3
Shafti A, Haar S, Mio R, Guilleminot P, Faisal AA. Playing the piano with a robotic third thumb: assessing constraints of human augmentation. Sci Rep 2021; 11:21375. PMID: 34725355; PMCID: PMC8560761; DOI: 10.1038/s41598-021-00376-6.
Abstract
Contemporary robotics gives us the mechatronic capability to augment human bodies with extra limbs. However, how our motor control capabilities limit such augmentation is an open question. We developed a Supernumerary Robotic 3rd Thumb (SR3T) with two degrees of freedom, controlled by the user's body, to endow them with an extra contralateral thumb on the hand. We demonstrate that a pianist can learn to play the piano with 11 fingers within an hour. We then evaluate 6 naïve and 6 experienced piano players on their prior motor coordination and their capability in piano playing with the robotic augmentation. We show that individuals' augmented performance with the SR3T could be explained by our new custom motor coordination assessment, the Human Augmentation Motor Coordination Assessment (HAMCA), performed pre-augmentation. Our work demonstrates how supernumerary robotics can augment humans in skilled tasks and that individual differences in augmentation capability are explainable by individual motor coordination abilities.
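For intuition, here is a minimal Python sketch of how a two-channel body signal might be mapped onto the SR3T's two degrees of freedom. The abstract does not specify the control interface, so the input channels, normalization, and joint ranges below are purely illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: mapping a two-channel body signal onto a 2-DoF
# supernumerary thumb. Channel semantics and angle ranges are assumed.
def body_signal_to_thumb_angles(ch1, ch2,
                                flex_range=(0.0, 90.0),
                                abduct_range=(-30.0, 30.0)):
    """Map two normalized control channels in [0, 1] to flexion and
    abduction angles (degrees) for a hypothetical 2-DoF robotic thumb."""
    clamp = lambda v: min(max(v, 0.0), 1.0)
    ch1, ch2 = clamp(ch1), clamp(ch2)
    flexion = flex_range[0] + ch1 * (flex_range[1] - flex_range[0])
    abduction = abduct_range[0] + ch2 * (abduct_range[1] - abduct_range[0])
    return flexion, abduction

print(body_signal_to_thumb_angles(0.5, 0.75))  # -> (45.0, 15.0)
```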
Affiliation(s)
- Ali Shafti: Brain and Behaviour Laboratory, Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Department of Computing, Imperial College London, London, SW7 2AZ, UK; Behaviour Analytics Laboratory, Data Science Institute, London, SW7 2AZ, UK
- Shlomi Haar: Brain and Behaviour Laboratory, Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Behaviour Analytics Laboratory, Data Science Institute, London, SW7 2AZ, UK; Department of Brain Sciences and UK Dementia Research Institute - Care Research and Technology Centre, Imperial College London, London, W12 0BZ, UK
- Renato Mio: Brain and Behaviour Laboratory, Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Pierre Guilleminot: Brain and Behaviour Laboratory, Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- A Aldo Faisal: Brain and Behaviour Laboratory, Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Department of Computing, Imperial College London, London, SW7 2AZ, UK; Behaviour Analytics Laboratory, Data Science Institute, London, SW7 2AZ, UK; UKRI CDT in AI for Healthcare, Imperial College London, London, SW7 2AZ, UK; MRC London Institute of Medical Sciences, London, W12 0NN, UK
4
|
Analysis of the Learning Process through Eye Tracking Technology and Feature Selection Techniques. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11136157] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
In recent decades, the use of technological resources such as eye tracking methodology has provided cognitive researchers with important tools to better understand the learning process. However, the interpretation of the metrics requires the use of supervised and unsupervised learning techniques. The main goal of this study was to analyse the results obtained with the eye tracking methodology by applying statistical tests and supervised and unsupervised machine learning techniques, and to contrast the effectiveness of each. The parameters of fixations, saccades, blinks, and scan path were measured, along with the results of a puzzle task. The statistical study concluded that no significant differences were found between participants in solving the crossword puzzle task; significant differences were detected only in the parameters of minimum saccade amplitude and minimum saccade velocity. The study with supervised machine learning techniques, on the other hand, suggested possible features for analysis, some of them different from those used in the statistical study. Regarding the clustering techniques, a good fit was found between the algorithms used (k-means++, fuzzy k-means, and DBSCAN). These algorithms grouped the participants into three learning profiles (students over 50 years old; and students and teachers under 50 years of age). Therefore, the use of both types of data analysis is considered complementary.
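As a companion to the pipeline described, the following Python sketch clusters standardized eye-tracking metrics with k-means++ and DBSCAN via scikit-learn. The feature values are synthetic and the parameters are assumptions; fuzzy k-means, which is not in scikit-learn, is omitted here.

```python
# Minimal sketch: clustering eye-tracking metrics into learner profiles.
# Feature names, value ranges, and the cluster count are illustrative
# assumptions, not the study's actual data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Hypothetical per-participant metrics: fixation duration (ms),
# saccade amplitude (deg), saccade velocity (deg/s), blink rate (1/min).
X = rng.normal(loc=[250, 4.5, 180, 12], scale=[60, 1.2, 40, 4], size=(30, 4))

X_std = StandardScaler().fit_transform(X)  # z-score so no metric dominates

# k-means with k-means++ seeding, three profiles as in the study.
km_labels = KMeans(n_clusters=3, init="k-means++", n_init=10,
                   random_state=0).fit_predict(X_std)

# Density-based clustering; eps/min_samples would need tuning on real data.
db_labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(X_std)

print("k-means++ profile sizes:", np.bincount(km_labels))
print("DBSCAN labels (-1 = noise):", set(db_labels))
```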
5
Brunete A, Gambao E, Hernando M, Cedazo R. Smart Assistive Architecture for the Integration of IoT Devices, Robotic Systems, and Multimodal Interfaces in Healthcare Environments. Sensors (Basel) 2021; 21:2212. PMID: 33809884; PMCID: PMC8004200; DOI: 10.3390/s21062212.
Abstract
This paper presents a new architecture that integrates Internet of Things (IoT) devices, service robots, and users in a smart assistive environment. It introduces an intuitive, multimodal interaction system supporting people with disabilities and bedbound patients. This interaction system allows the user to control service robots and devices inside the room in five different ways: touch control, eye control, gesture control, voice control, and augmented reality control. The interaction system comprises an assistive robotic arm holding a tablet PC, which the arm can place in front of the user. A demonstration of the developed technology, a prototype of a smart room equipped with home automation devices, and the robotic assistive arm are presented. Results from the use of the various interfaces and technologies are reported, including user preferences regarding eye-based control (performing clicks using winks or gaze) and the use of mobile phones over augmented reality glasses, among others.
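The following Python sketch illustrates the routing idea behind such a multimodal architecture: every interface emits a normalized command that a single dispatcher forwards to the addressed device. All class, device, and action names are hypothetical assumptions; the paper's actual middleware is not shown here.

```python
# Minimal sketch of multimodal dispatch: each interface emits a normalized
# command that one router forwards to the addressed IoT device or robot.
from dataclasses import dataclass

@dataclass
class Command:
    modality: str   # "touch" | "eye" | "gesture" | "voice" | "ar"
    device: str     # e.g. "light", "blind", "robot_arm" (hypothetical)
    action: str     # e.g. "toggle", "raise", "move_to_user"

class SmartRoomRouter:
    def __init__(self):
        self._handlers = {}  # device name -> callable(action)

    def register(self, device, handler):
        self._handlers[device] = handler

    def dispatch(self, cmd: Command):
        handler = self._handlers.get(cmd.device)
        if handler is None:
            raise KeyError(f"unknown device: {cmd.device}")
        handler(cmd.action)

router = SmartRoomRouter()
router.register("light", lambda action: print(f"light -> {action}"))
# The same command reaches the light whether it originated as a wink-based
# eye click or a voice utterance; only the 'modality' field differs.
router.dispatch(Command(modality="eye", device="light", action="toggle"))
router.dispatch(Command(modality="voice", device="light", action="toggle"))
```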
Affiliation(s)
- Alberto Brunete (corresponding author): Centre for Automation and Robotics (CAR UPM-CSIC), Universidad Politécnica de Madrid, 28006 Madrid, Spain
- Ernesto Gambao: Centre for Automation and Robotics (CAR UPM-CSIC), Universidad Politécnica de Madrid, 28006 Madrid, Spain
- Miguel Hernando: Centre for Automation and Robotics (CAR UPM-CSIC), Universidad Politécnica de Madrid, 28006 Madrid, Spain
- Raquel Cedazo: Department of Electrical, Electronical and Automatic Control Engineering and Applied Physics, Escuela Técnica Superior de Ingeniería y Diseño Industrial, Universidad Politécnica de Madrid, 28012 Madrid, Spain
6
Wöhle L, Gebhard M. Towards Robust Robot Control in Cartesian Space Using an Infrastructureless Head- and Eye-Gaze Interface. Sensors (Basel) 2021; 21:1798. PMID: 33807599; PMCID: PMC7962065; DOI: 10.3390/s21051798.
Abstract
This paper presents a lightweight, infrastructureless head-worn interface for robust, real-time robot control in Cartesian space using head- and eye-gaze. The interface weighs just 162 g in total. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a magnetic, angular rate, and gravity (MARG) sensor filter. The data fusion process is designed to dynamically switch between magnetic, inertial, and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end-effector (EFF) positioning, and employs a head-motion mapping technique to control the end effector's orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimates. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0 ± 15.7 mm for head-gaze and 27.4 ± 21.8 mm for eye-gaze at a distance of 0.3–1.1 m from the user. This indicates that the proposed interface offers a precise control mechanism for hands-free, full six degree-of-freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
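The dynamic switching between heading sources can be pictured with the minimal Python sketch below: prefer the visual (SLAM) heading, fall back to the magnetometer when the field magnitude looks undisturbed, and dead-reckon with the gyro otherwise. The thresholds and quality checks are illustrative assumptions; the paper's actual filter is more sophisticated.

```python
# Minimal sketch of dynamic heading-source switching between visual,
# magnetic, and inertial sources. All thresholds are assumed values.
import math

EARTH_FIELD_UT = 50.0  # nominal local magnetic field magnitude (assumed)

def select_heading(slam_heading, slam_tracked, mag_vec, gyro_z, dt,
                   prev_heading):
    # 1) Visual heading, if the SLAM tracker reports a valid pose.
    if slam_tracked and slam_heading is not None:
        return slam_heading, "visual"
    # 2) Magnetic heading, if the field magnitude looks undisturbed.
    mag_norm = math.sqrt(sum(c * c for c in mag_vec))
    if abs(mag_norm - EARTH_FIELD_UT) < 10.0:
        return math.atan2(mag_vec[1], mag_vec[0]), "magnetic"
    # 3) Otherwise dead-reckon by integrating the yaw-rate gyro.
    return prev_heading + gyro_z * dt, "inertial"

heading, source = select_heading(
    slam_heading=None, slam_tracked=False,
    mag_vec=(30.0, 20.0, 35.0), gyro_z=0.02, dt=0.01, prev_heading=1.2)
print(f"{heading:.3f} rad from {source} source")
```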
7
Tanwear A, Liang X, Liu Y, Vuckovic A, Ghannam R, Bohnert T, Paz E, Freitas PP, Ferreira R, Heidari H. Spintronic Sensors Based on Magnetic Tunnel Junctions for Wireless Eye Movement Gesture Control. IEEE Trans Biomed Circuits Syst 2020; 14:1299-1310. PMID: 32991289; DOI: 10.1109/tbcas.2020.3027242.
Abstract
Tracking eye gesture movements with wearable technologies can improve quality of life for people with mobility and physical impairments; here this is achieved with spintronic sensors based on the tunnel magnetoresistance (TMR) effect in a human-machine interface. Our design integrates three TMR sensors on an eyeglass frame to detect relative movement between the sensors and tiny magnets embedded in an in-house fabricated contact lens. Using TMR sensors with a sensitivity of 11 mV/V/Oe and ten magnets of less than 1 mm³ embedded within the lens, an eye gesture system was implemented with a sampling frequency of up to 28 Hz. Three discrete eye movements were successfully classified when a participant looked up, right, or left, using a threshold-based classifier. Moreover, our proof-of-concept real-time interaction system was tested on 13 participants, who played a simplified Tetris game using their eye movements. Our results show that all participants successfully completed the game, with an average accuracy of 90.8%.
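A threshold-based classifier of the kind described can be sketched in a few lines of Python. The channel layout (left/up/right) and the threshold are illustrative assumptions rather than calibrated values from the paper.

```python
# Minimal sketch: threshold-based gesture classification over three
# sensor channels (left, up, right). Thresholds are assumed, not measured.
def classify_gesture(left_mv, up_mv, right_mv, threshold_mv=5.0):
    """Return 'left', 'up', 'right', or None for a single sample frame."""
    readings = {"left": left_mv, "up": up_mv, "right": right_mv}
    gesture, peak = max(readings.items(), key=lambda kv: kv[1])
    # Fire only if exactly one channel clearly exceeds the threshold,
    # rejecting frames where the magnets excite several sensors at once.
    above = [g for g, v in readings.items() if v > threshold_mv]
    return gesture if above == [gesture] else None

print(classify_gesture(1.2, 0.8, 7.4))   # -> 'right'
print(classify_gesture(6.1, 5.9, 0.4))   # -> None (ambiguous frame)
```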
8
Subramanian M, Songur N, Adjei D, Orlov P, Faisal AA. A.Eye Drive: Gaze-based semi-autonomous wheelchair interface. Annu Int Conf IEEE Eng Med Biol Soc 2019:5967-5970. PMID: 31947206; DOI: 10.1109/embc.2019.8856608.
Abstract
Existing wheelchair control interfaces, such as sip-and-puff or screen-based gaze-controlled cursors, are challenging for the severely disabled to use for safe, independent navigation, as users continuously need to interact with the interface while driving. This puts a significant cognitive load on users and prevents them from interacting with the environment in other ways during navigation. We have combined eye-tracking/gaze-contingent intention decoding with context-aware computer vision algorithms and autonomous navigation drawn from self-driving vehicles to allow paralysed users to drive by eye, simply by decoding natural gaze to infer where the user wants to go: A.Eye Drive. Our "Zero UI" driving platform allows users to look at and visually interact with an object or destination of interest in their visual scene; the wheelchair then autonomously takes the user to the intended destination, continuously updating the computed path to account for static and dynamic obstacles. This intention-decoding technology empowers end-users by promising greater independence through their own agency.
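The core loop, fixation decoding followed by continuous replanning, might look like the Python sketch below. The dwell criterion, the greedy one-step planner, and all parameters are illustrative assumptions, not the A.Eye Drive implementation.

```python
# Minimal sketch: decode a sustained gaze fixation into a navigation
# target, then replan one step at a time around obstacles while driving.
import math

def decode_fixation(gaze_samples, radius=0.05, min_samples=30):
    """Return the centroid of the last `min_samples` gaze points if they
    stay within `radius` (a dwell), i.e. a decoded navigation intention."""
    if len(gaze_samples) < min_samples:
        return None
    recent = gaze_samples[-min_samples:]
    cx = sum(p[0] for p in recent) / min_samples
    cy = sum(p[1] for p in recent) / min_samples
    if all(math.hypot(x - cx, y - cy) <= radius for x, y in recent):
        return (cx, cy)
    return None

def replan(pose, target, obstacles, step=0.5, clearance=0.6):
    """Greedy one-step replanner: head toward the target, sidestep if the
    next waypoint would violate obstacle clearance. A real system would
    use a proper planner; this only illustrates continuous replanning."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    dist = math.hypot(dx, dy) or 1e-9
    nxt = (pose[0] + step * dx / dist, pose[1] + step * dy / dist)
    if any(math.hypot(nxt[0] - ox, nxt[1] - oy) < clearance
           for ox, oy in obstacles):
        nxt = (nxt[0] - step * dy / dist, nxt[1] + step * dx / dist)
    return nxt

gaze = [(0.70, 0.40)] * 30             # a steady 30-sample dwell
target = decode_fixation(gaze)          # -> (0.70, 0.40)
print(replan((0.0, 0.0), target, obstacles=[(0.35, 0.2)]))
```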