1. Ball M, Fuller P, Cha JS. Identification of surgical human-robot interactions and measures during robotic-assisted surgery: A scoping review. Appl Ergon 2025;125:104478. [PMID: 39983252] [DOI: 10.1016/j.apergo.2025.104478]
Abstract
This study aims to identify the dynamics of robotic-assisted surgery (RAS) teams and their metrics. A scoping review across seven science, engineering, and clinical databases was conducted. The literature was found to focus on skills and interactions centered on the surgeon and the technical components of the robotic system; limited literature exists on skill proceduralization specific to the other surgical team members who perform RAS procedures. A framework was developed that identifies the individuals involved (surgeon, surgical team members, and robotic platform), their respective technical and nontechnical skill requirements, and the required interactions among the team and the RAS system. Future research in RAS human-robot interaction can address the need to understand the changing dynamics and skills required of the surgical team as surgical robot technology continues to evolve and be adopted.
Affiliation(s)
- Matthew Ball
- Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA
- Patrick Fuller
- Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA
- Jackie S Cha
- Department of Industrial Engineering, Clemson University, 211 Fernow St., Clemson, SC 29634, USA

2. Cavicchi S, Abubshait A, Siri G, Mustile M, Ciardo F. Can humanoid robots be used as a cognitive offloading tool? Cogn Res Princ Implic 2025;10:17. [PMID: 40244346] [PMCID: PMC12006637] [DOI: 10.1186/s41235-025-00616-7]
Abstract
Cognitive load occurs when the demands of a task surpass the available processing capacity, straining mental resources and potentially impairing performance efficiency, for example by increasing the number of errors in a task. Because cognitive load is ubiquitous in real-world scenarios, offloading strategies to reduce it are familiar to experts and nonexperts alike, and many of these strategies involve technology (e.g., using calendar apps to remember scheduled events). Surprisingly, little is known about the potential use of humanoid robots for cognitive offloading. We examine studies assessing the influence of humanoid robots on cognitive tasks that require the resolution of cognitive conflict, to determine whether their presence facilitates or hinders cognitive performance. Our analysis focuses on standardized cognitive-conflict paradigms, as these effectively simulate real-life conflict scenarios (i.e., everyday challenges in focusing on a task and ignoring distractions). In these studies, robots were involved by participating in the tasks, providing social cues, or observing human performance. By identifying contexts in which humanoid robots support cognitive offloading and contexts in which they may undermine it, this work contributes to a deeper understanding of cognitive processes in human-robot interaction (HRI) and informs the design of interventions aimed at improving task performance and well-being in professional HRI settings.
Affiliation(s)
- Shari Cavicchi
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- Abdulaziz Abubshait
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- Giulia Siri
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Magda Mustile
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- The Psychological Sciences Research Institute, University of Louvain, Louvain-La-Neuve, Belgium
- Francesca Ciardo
- Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- Department of Psychology, University of Milano-Bicocca, Milan, Italy

3. Łukasik A, Gut A. From robots to chatbots: unveiling the dynamics of human-AI interaction. Front Psychol 2025;16:1569277. [PMID: 40271364] [PMCID: PMC12014614] [DOI: 10.3389/fpsyg.2025.1569277]
Abstract
The rapid integration of artificial agents (robots, avatars, and chatbots) into human social life necessitates a deeper understanding of human-AI interactions and their impact on social interaction. Artificial agents have become integral across various domains, including healthcare, education, and entertainment, offering enhanced efficiency, personalization, and emotional connectivity. However, their effectiveness in supporting successful social interaction is influenced by various factors that shape both their reception and human responses during interaction. The present article explores how different forms of these agents influence processes essential for social interaction, such as attributing mental states and intentions and shaping emotions. The goal of this paper is to analyze the roles that artificial agents can and cannot assume in social environments, the stances humans adopt toward them, and the dynamics of human-artificial agent interactions. Key factors associated with an artificial agent's design, such as physical appearance, adaptability to human behavior, user beliefs and knowledge, transparency of social cues, and the uncanny valley phenomenon, were selected as factors that significantly influence social interaction in AI contexts.
Affiliation(s)
- Albert Łukasik
- Department of Cognitive Science, Doctoral School of Social Sciences, Nicolaus Copernicus University in Toruń, Toruń, Poland
- Arkadiusz Gut
- Department of Cognitive Science, Faculty of Philosophy and Social Sciences, Nicolaus Copernicus University in Toruń, Toruń, Poland

4. Schreiter J, Heinrich F, Hatscher B, Schott D, Hansen C. Multimodal human-computer interaction in interventional radiology and surgery: a systematic literature review. Int J Comput Assist Radiol Surg 2025;20:807-816. [PMID: 39467893] [PMCID: PMC12034581] [DOI: 10.1007/s11548-024-03263-3]
Abstract
PURPOSE As technology advances, more research on medical interactive systems emphasizes the integration of touchless and multimodal interaction (MMI). Particularly in surgical and interventional settings, this approach is advantageous because it maintains sterility and promotes natural interaction. Past reviews have investigated MMI in terms of technology and interaction with robots, but none has placed particular emphasis on analyzing these kinds of interactions in surgical and interventional scenarios. METHODS Two databases were queried for relevant publications from the past 10 years. Identification was followed by two screening steps applying eligibility criteria, and a forward/backward search was added to identify further relevant publications. The analysis clustered references by the medical field addressed, input and output modalities, and challenges regarding development and evaluation. RESULTS A sample of 31 references was obtained (16 journal articles, 15 conference papers). MMI was predominantly developed for laparoscopy, radiology, and interaction with image viewers. The majority implemented two input modalities, with voice-hand interaction being the most common combination: voice for discrete and hand for continuous navigation tasks. Gaze, body, and facial control were applied minimally, primarily because of ergonomic concerns. Feedback was included in 81% of publications, with visual cues applied most often. CONCLUSION This work systematically reviews MMI for surgical and interventional scenarios over the past decade. For future research, we propose a stronger focus on in-depth analyses of the considered use cases and on standardized evaluation methods. Moreover, insights from other sectors, including but not limited to gaming, should be exploited.
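
The review's observation that voice suits discrete commands while hand input suits continuous navigation can be illustrated with a small dispatcher. The sketch below is illustrative only and not taken from any reviewed system; the `VoiceCommand` and `HandPose` types, the command words, and the `ImageViewer` interface are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical input events: a recognized voice command (discrete) and a
# tracked hand displacement (continuous). Names are illustrative only.
@dataclass
class VoiceCommand:
    word: str            # e.g. "next", "previous"

@dataclass
class HandPose:
    dx: float            # normalized horizontal displacement
    dy: float            # normalized vertical displacement

class ImageViewer:
    """Stand-in for a medical image viewer controlled via MMI."""
    def __init__(self, n_slices: int = 100):
        self.slice, self.n_slices = 0, n_slices
        self.pan = [0.0, 0.0]

    def handle_voice(self, cmd: VoiceCommand) -> None:
        # Discrete tasks map to one-shot state changes.
        if cmd.word == "next":
            self.slice = min(self.slice + 1, self.n_slices - 1)
        elif cmd.word == "previous":
            self.slice = max(self.slice - 1, 0)

    def handle_hand(self, pose: HandPose, gain: float = 50.0) -> None:
        # Continuous tasks map proportionally to the input signal.
        self.pan[0] += gain * pose.dx
        self.pan[1] += gain * pose.dy

viewer = ImageViewer()
viewer.handle_voice(VoiceCommand("next"))      # discrete: advance one slice
viewer.handle_hand(HandPose(0.02, -0.01))      # continuous: pan the image
print(viewer.slice, viewer.pan)                # -> 1 [1.0, -0.5]
```

The split keeps each modality where it is strongest: a recognizer only has to spot a handful of keywords, while the hand channel carries the fine-grained, sterile-field-friendly navigation signal.
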
Affiliation(s)
- Josefine Schreiter
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Florian Heinrich
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Benjamin Hatscher
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Siemens Healthineers, Forchheim, Germany
- Danny Schott
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Christian Hansen
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany

5. Yan Y, Li J, Yin M. EEG-based recognition of hand movement and its parameter. J Neural Eng 2025;22:026006. [PMID: 40009879] [DOI: 10.1088/1741-2552/adba8a]
Abstract
Objective. Brain-computer interfaces are a cutting-edge technology that enables interaction with external devices by decoding human intentions, and they are highly valuable in medical rehabilitation and human-robot collaboration. Decoding motor intent during motor execution (ME) from electroencephalographic (EEG) signals is currently at the feasibility-study stage, and between-subjects classification accuracy for ME EEG signals has not yet reached the level required for realistic applications. This paper investigates EEG-based hand movement recognition by analyzing low-frequency time-domain information. Approach. Experiments comprising four types of hand movements, two force-parameter tasks (picking up and pushing), and a four-target directional displacement task were designed and executed, and EEG data from thirteen healthy volunteers were collected. A sliding-window approach was used to expand the dataset and address overfitting. A CNN-BiLSTM model, an end-to-end serial combination of a convolutional neural network (CNN) and a bidirectional long short-term memory network (BiLSTM), was constructed to classify hand movements from the raw EEG data. Main results. The model categorized the four types of hand movements, picking-up movements, pushing movements, and the four-target directional displacement movements with accuracies of 99.14% ± 0.49%, 99.29% ± 0.11%, 99.23% ± 0.60%, and 98.11% ± 0.23%, respectively. Significance. Comparative tests against alternative deep learning models (LSTM, CNN, EEGNet, CNN-LSTM) demonstrate that the CNN-BiLSTM model achieves practicable accuracy for EEG-based hand movement recognition and its parameter decoding.
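
For a concrete picture of the serial CNN-BiLSTM architecture described above, here is a minimal PyTorch sketch. The channel count, kernel size, 32-channel/256-sample window, and classification head are assumptions for illustration; the paper's exact layer configuration is not reproduced here.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Sketch of a serial CNN -> BiLSTM classifier for raw EEG windows.
    Layer sizes are illustrative assumptions, not the paper's design."""
    def __init__(self, n_channels: int = 32, n_classes: int = 4):
        super().__init__()
        # Temporal convolution extracts local features per EEG window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # BiLSTM models longer-range temporal structure in both directions.
        self.lstm = nn.LSTM(input_size=64, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        feats = self.cnn(x)                 # (batch, 64, time/2)
        feats = feats.transpose(1, 2)       # (batch, time/2, 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1, :])     # classify from the last time step

# One sliding-window batch: 8 windows of 32 channels x 256 samples.
logits = CNNBiLSTM()(torch.randn(8, 32, 256))
print(logits.shape)  # torch.Size([8, 4])
```

The sliding window mentioned in the abstract would supply many overlapping (channels x time) segments per trial, which is what lets a model of this size train without immediately overfitting.
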
Affiliation(s)
- Yuxuan Yan
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 15000, People's Republic of China
- Jianguang Li
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 15000, People's Republic of China
- Mingyue Yin
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 15000, People's Republic of China

6. Nascimento JM, Taira C, Becman EC, Forner-Cordero A. Neuromusculoskeletal Control for Simulated Precision Task versus Experimental Data in Trajectory Deviation Analysis. Biomimetics (Basel) 2025;10:138. [PMID: 40136792] [PMCID: PMC11939874] [DOI: 10.3390/biomimetics10030138]
Abstract
Control remains a challenge in precision robotics applications, particularly when tasks must be executed within short time intervals. This study simulated a two-degree-of-freedom (2-DoF) planar robotic arm actuated by a detailed human musculoskeletal model, using nonlinear control techniques to execute a precision task. We then compared these simulations with experimental data from healthy subjects performing the same task. Our results show that Feedback Linearization Control (FLC) performed satisfactorily within the task execution constraints compared with a robust nonlinear control technique, Sliding Mode Control (SMC). Nonetheless, discrepancies were observed between the behavior of the simulated model and the experimental data. The model's errors increased with amplitude and remained unchanged as task execution frequency increased, whereas in human trials the errors increased both with amplitude and, notably, with a drastic rise in frequency.
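
Feedback linearization for a manipulator is commonly realized as computed-torque control, which cancels the rigid-body dynamics and imposes linear error dynamics. Below is a minimal sketch for a 2-DoF arm; the dynamics terms are deliberately simple placeholders (constant inertia, no Coriolis or gravity), whereas the paper derives them from a musculoskeletal model, and the gain values are assumptions.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g,
                    Kp=np.diag([100.0, 100.0]), Kd=np.diag([20.0, 20.0])):
    """Feedback linearization (computed torque) for a 2-DoF arm.

    Cancels the nonlinear dynamics M(q)ddq + C(q,dq)dq + g(q) = tau and
    imposes linear second-order error dynamics e_ddot + Kd e_dot + Kp e = 0.
    """
    e, de = q_des - q, dq_des - dq
    v = ddq_des + Kd @ de + Kp @ e          # stabilizing virtual input
    return M(q) @ v + C(q, dq) @ dq + g(q)  # exact cancellation of dynamics

# Placeholder dynamics for illustration only (constant inertia, no gravity);
# the paper instead derives these terms from musculoskeletal actuation.
M = lambda q: np.diag([1.2, 0.8])
C = lambda q, dq: np.zeros((2, 2))
g = lambda q: np.zeros(2)

q, dq = np.array([0.1, -0.2]), np.zeros(2)
tau = computed_torque(q, dq, q_des=np.zeros(2), dq_des=np.zeros(2),
                      ddq_des=np.zeros(2), M=M, C=C, g=g)
print(tau)  # joint torques driving the tracking error to zero
```

FLC's exact cancellation explains why it tracks well when the model is accurate, and also why model-plant mismatch (as with real subjects) shows up directly as tracking error, the discrepancy the study measures.
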
Affiliation(s)
- Jean Mendes Nascimento
- Biomechatronics Laboratory, Escola Politécnica, University of Sao Paulo, São Carlos 13566-590, SP, Brazil

7. Santana JM, Silveira BD, Lima C, Diaz-Amado J, Libarino CS, Marques JES, Barrios-Aranibar D, Patiño-Escarcina RE. Design and Implementation of an Interactive System for Service Robot Control and Monitoring. Sensors (Basel) 2025;25:987. [PMID: 40006220] [PMCID: PMC11859232] [DOI: 10.3390/s25040987]
Abstract
This project develops an interactive control system for an autonomous service robot using the Robot Operating System (ROS). The system integrates an intuitive web interface and an interactive chatbot supported by Google Gemini to enhance control and personalization for the user. The methodology includes an API (application programming interface) for accessing a database that stores user preferences, such as speed and frequent destinations. Furthermore, the system employs facial recognition, recognition of groups of people, and adaptive chatbot responses for autonomous navigation, ensuring a service tailored to the individual needs of each user. To validate the proposal, it was implemented on an autonomous service robot integrated into a motorized wheelchair. Tests demonstrated that the system effectively adjusts the wheelchair's behavior to user preferences, resulting in safer and more personalized navigation. Facial recognition and chatbot interaction provided more intuitive and efficient control. The developed system significantly improves the autonomy and quality of life of wheelchair users, proving to be a viable and efficient solution for autonomous and personalized control. The results indicate that integrating technologies such as ROS, intuitive web interfaces, and interactive chatbots can transform the user experience of autonomous wheelchairs, better meeting users' specific needs.
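
One way such stored speed preferences could feed into ROS-based control is a small relay node that scales velocity commands from the web interface by a per-user factor. This is a hypothetical sketch: the topic names, the `~user` parameter, and the `USER_PREFS` store are assumptions, not the paper's actual interface.

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) sketch: scale velocity commands by a stored user
# preference before forwarding them to the base.
import rospy
from geometry_msgs.msg import Twist

USER_PREFS = {"alice": 0.5, "bob": 1.0}   # hypothetical per-user speed scale

class PreferenceScaler:
    def __init__(self, user: str):
        self.scale = USER_PREFS.get(user, 0.7)   # conservative default
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
        rospy.Subscriber("/web_interface/cmd_vel", Twist, self.relay)

    def relay(self, msg: Twist) -> None:
        out = Twist()
        out.linear.x = self.scale * msg.linear.x     # cap forward speed
        out.angular.z = self.scale * msg.angular.z   # cap turning rate
        self.pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("preference_scaler")
    PreferenceScaler(user=rospy.get_param("~user", "alice"))
    rospy.spin()
```

Keeping personalization in a thin relay like this leaves the navigation stack untouched: the same planner output is simply attenuated per user, which is one plausible reading of how preference-driven behavior adjustment can be layered onto ROS.
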
Affiliation(s)
- Jonas Machado Santana
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Bruno Duarte Silveira
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Crescencio Lima
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Jose Diaz-Amado
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, UCSP, Arequipa 04001, Peru
- Cléia Santos Libarino
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Joao E. Soares Marques
- GIPAR Research Group, Instituto Federal da Bahia, IFBA, Vitória da Conquista, Bahia 45078-900, Brazil
- Dennis Barrios-Aranibar
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, UCSP, Arequipa 04001, Peru
- Raquel E. Patiño-Escarcina
- Electrical and Electronic Engineering Department, Universidad Católica San Pablo, UCSP, Arequipa 04001, Peru

8. Akiyama T, Blaquera APL, Bollos LAC, Soriano GP, Ito H, Tanioka R, Umehara H, Osaka K, Tanioka T. Reliability of Emotion Analysis from Human Facial Expressions Using Multi-task Cascaded Convolutional Neural Networks. J Med Invest 2025;72:93-101. [PMID: 40268462] [DOI: 10.2152/jmi.72.93]
Abstract
Life support robots in care settings must be able to read a person's emotions from facial expressions to achieve empathic communication. This study aimed to determine the degree of agreement between the results of Multi-task Cascaded Convolutional Neural Networks (MTCNN) and human subjective emotion analysis, as a function to be installed in this type of robot. Forty university students talked with the PALRO robot for 10 minutes. Thirteen area-of-interest videos, in which MTCNN had identified the facial expression as happy or as a combination of happy and other emotions, were used to assess validity. Twenty university students and 20 medical professionals identified which of 7 emotions (angry, disgust, fear, happy, sad, surprise, neutral) were present, and Fleiss' kappa coefficient was calculated. Kappa coefficients across the seven emotions ranged from 0.21 to 0.28. The kappa coefficient for "happy" was the highest (0.52 to 0.57), indicating moderate agreement. Among female university students, only "surprise" reached moderate agreement, with a Fleiss' kappa of 0.48. MTCNN emotion analysis and human emotion analysis were in moderate agreement for the identification of "happy" emotions. Comparing non-contact MTCNN-based emotion analysis with subjective human facial expression analysis suggests that MTCNN may be effective for understanding subjects' happy feelings.
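
Fleiss' kappa, the agreement statistic used in this study, is straightforward to compute from a ratings matrix. The sketch below implements the standard formula; the ratings are fabricated toy values for illustration, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for inter-rater agreement.

    counts: (n_items, n_categories) matrix; counts[i, j] is how many raters
    assigned item i to category j. Every row must sum to the same number
    of raters n.
    """
    n = counts.sum(axis=1)[0]                      # raters per item
    p_j = counts.sum(axis=0) / counts.sum()        # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs chance
    return (P_bar - P_e) / (1 - P_e)

# Toy ratings: 5 video segments, 10 raters, 3 emotion categories
# (happy / neutral / other). Values are fabricated for illustration only.
ratings = np.array([
    [8, 1, 1],
    [7, 2, 1],
    [9, 0, 1],
    [2, 6, 2],
    [1, 8, 1],
])
print(round(fleiss_kappa(ratings), 3))
```

Values around 0.21-0.28, as reported for most emotions here, conventionally count as "fair" agreement, while the 0.41-0.60 band reported for "happy" counts as "moderate."
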
Affiliation(s)
- Toshiya Akiyama
- PhD Student, Graduate School of Health Sciences, Tokushima University, Tokushima, Japan
- Allan Paulo L Blaquera
- PhD Student, Graduate School of Health Sciences, Tokushima University, Tokushima, Japan
- School of Nursing and Allied Health Sciences, St. Paul University Philippines, Tuguegarao City, Philippines
- Gil P Soriano
- Department of Nursing, College of Allied Health, National University, Manila, Philippines
- Hirokazu Ito
- Associate Professor, Department of Nursing Art and Sciences, Tokushima University, Tokushima, Japan
- Ryuichi Tanioka
- Lecturer, Department of Physical Therapy, Hiroshima Cosmopolitan University, Hiroshima, Japan
- Hidehiro Umehara
- Department of Psychiatry, Graduate School of Biomedical Sciences, Tokushima University, Tokushima, Japan
- Kyoko Osaka
- Professor, Department of Psychiatric Nursing, Nursing Course of Kochi Medical School, Kochi University, Kochi, Japan
- Tetsuya Tanioka
- Professor, Department of Nursing Outcomes Management, Institute of Biomedical Sciences, Tokushima University, Tokushima, Japan

9. Müller P, Jahn P. Cocreative Development of Robotic Interaction Systems for Health Care: Scoping Review. JMIR Hum Factors 2024;11:e58046. [PMID: 39264334] [PMCID: PMC11412089] [DOI: 10.2196/58046]
Abstract
Background Robotic technologies present challenges to health care professionals and are therefore rarely used. Barriers such as a lack of controllability and adaptability and complex control functions strain the human-robot relationship. In addition to educational opportunities, the possibility of individual adaptation can improve the usability and practical implementation of robotics. Previous work has approached development from a technology-centered perspective and has included user interests too late in the process. Objective This study addresses the following research question: What cocreative research approaches are used in the field of nursing robotics to improve the usability, intended use, and goal-directed application of robotic developments for nurses and to support the nursing process? Methods This scoping review provides an overview of the topic and the research activities taking place within it. Five databases and the reference lists of the identified publications were searched for studies without further restrictions. Studies were included if they developed and evaluated interaction and control platforms for robotic systems in health care cocreatively with end users. Results The search yielded 419 hits, of which 3 publications were included. All were feasibility or user studies, mainly carried out in the European Union. The 3 interaction and control platforms presented were all prototypes and not commercially available. In addition to those in need of care, all studies also included family carers and health care professionals. Conclusions Robotic interaction and control platforms in health care are rarely developed and evaluated in feasibility or user studies that include prototypes and end users. Active involvement of end users in the development process is critical to effectively meeting the needs of the target group, and all stakeholders, including health care professionals, should participate to ensure a holistic understanding of application needs and a focus on user experiences and practical health care needs.
Affiliation(s)
- Pascal Müller
- Health Service Research Working Group | Acute Care, Department of Internal Medicine, Faculty of Medicine, University Medicine Halle (Saale), Martin-Luther-University Halle-Wittenberg, Magdeburger Straße 12, Halle (Saale), 06112, Germany
- Patrick Jahn
- Health Service Research Working Group | Acute Care, Department of Internal Medicine, Faculty of Medicine, University Medicine Halle (Saale), Martin-Luther-University Halle-Wittenberg, Magdeburger Straße 12, Halle (Saale), 06112, Germany

10. Wang L, Liu G. Research on multi-robot collaborative operation in logistics and warehousing using A3C optimized YOLOv5-PPO model. Front Neurorobot 2024;17:1329589. [PMID: 38322650] [PMCID: PMC10844514] [DOI: 10.3389/fnbot.2023.1329589]
Abstract
Introduction In the field of logistics and warehousing robots, collaborative operation and coordinated control have long been challenging issues. Although deep learning and reinforcement learning methods have made some progress on these problems, current research still has shortcomings; in particular, adaptive sensing and real-time decision-making for multi-robot swarms have not yet received sufficient attention. Methods To fill this research gap, we propose a YOLOv5-PPO model based on A3C optimization. This model combines the target detection capabilities of YOLOv5 with the PPO reinforcement learning algorithm, aiming to improve the efficiency and accuracy of collaborative operations among groups of logistics and warehousing robots. Results Extensive experimental evaluation on multiple datasets and tasks shows that, across different scenarios, our model successfully achieves multi-robot collaborative operation, significantly improves task completion efficiency, and maintains high accuracy in target detection and environment understanding. Discussion In addition, our model shows excellent robustness and adaptability, adapting to dynamic changes in the environment and fluctuations in demand, and provides an effective method for solving the collaborative operation problem of logistics and warehousing robots.
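
The PPO component named in the model centers on a clipped surrogate objective. The sketch below shows only that loss in PyTorch; the YOLOv5 detection backbone and the A3C-style parallel workers are omitted, the clipping coefficient of 0.2 is an assumption, and the batch values are random stand-ins.

```python
import torch

def ppo_clip_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                  advantages: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate objective at the core of PPO.

    logp_new/logp_old: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    """
    ratio = torch.exp(logp_new - logp_old)          # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()    # maximize -> minimize

# Toy batch: in the paper's setting the policy would consume YOLOv5
# detection features; random numbers stand in here for illustration.
logp_old = torch.randn(64)
logp_new = logp_old + 0.1 * torch.randn(64)
adv = torch.randn(64)
print(ppo_clip_loss(logp_new, logp_old, adv).item())
```

Clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough to combine with asynchronous (A3C-style) experience gathering across multiple robots.
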
Affiliation(s)
- Lei Wang
- School of Economy and Management, Hanjiang Normal University, Shiyan, Hubei, China
- Guangjun Liu
- School of Business, Wuchang University of Technology, Wuhan, Hubei, China

11. Gong T, Chen D, Wang G, Zhang W, Zhang J, Ouyang Z, Zhang F, Sun R, Ji JC, Chen W. Multimodal fusion and human-robot interaction control of an intelligent robot. Front Bioeng Biotechnol 2024;11:1310247. [PMID: 38239918] [PMCID: PMC10794586] [DOI: 10.3389/fbioe.2023.1310247]
Abstract
Introduction: Small-scale robotic walkers play an increasingly important role in assisting Activities of Daily Living (ADL) in the face of ever-increasing rehabilitation requirements and the drawbacks of existing equipment. This paper proposes a Rehabilitation Robotic Walker (RRW) for walking assistance and body weight support (BWS) during gait rehabilitation. Methods: The walker provides patients with weight offloading and a guiding force that mimics a series of the physiotherapist's movements, creating a natural, comfortable, and safe environment. The system consists of an omnidirectional mobile platform, a BWS mechanism, and a pelvic brace to smooth the motion of the pelvis. To recognize human intentions, four force sensors, two joysticks, and one depth-sensing camera monitor the human-machine information, and a multimodal fusion algorithm for intention recognition is proposed to improve accuracy. The system obtains the heading angle E via the camera, the pelvic pose F via the force sensors, and the motion vector H via the joysticks; classifies the intentions using feature extraction and information fusion; and finally outputs motor speed control through the robot's kinematics. Results: To validate the algorithm, a preliminary test with three volunteers was conducted to study the motion control. The results showed an average integral square error (ISE) of 2.90, with a minimum error of 1.96. Discussion: The results demonstrate the efficiency of the proposed method and show that the system is capable of providing walking assistance.
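
The final step described above, turning a fused intention into motor speeds through the robot's kinematics, can be sketched with a standard omnidirectional drive model. The four-mecanum-wheel geometry and dimensions below are assumptions; the paper does not specify the RRW's wheel layout.

```python
import numpy as np

def mecanum_wheel_speeds(vx: float, vy: float, wz: float,
                         lx: float = 0.3, ly: float = 0.25,
                         r: float = 0.05) -> np.ndarray:
    """Inverse kinematics of a four-mecanum-wheel omnidirectional base.

    Maps a desired body twist (vx, vy, wz) to wheel angular velocities.
    Geometry (lx, ly, wheel radius r) is assumed, not taken from the paper.
    """
    L = lx + ly
    J = np.array([[1, -1, -L],
                  [1,  1,  L],
                  [1,  1, -L],
                  [1, -1,  L]]) / r
    return J @ np.array([vx, vy, wz])

# A fused intention, e.g. "walk forward and slightly left," becomes a body
# twist, which the kinematics turns into four wheel speed commands.
print(mecanum_wheel_speeds(vx=0.3, vy=0.1, wz=0.0))
```

Whatever the fusion algorithm decides, the control pipeline bottoms out in a mapping of this kind, which is why intention-recognition accuracy translates directly into the smoothness of the platform's motion.
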
Affiliation(s)
- Tao Gong
- Institute of Intelligent Manufacturing, Shenzhen Polytechnic University, Shenzhen, China
- Dan Chen
- Institute of Intelligent Manufacturing, Shenzhen Polytechnic University, Shenzhen, China
- Guangping Wang
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Weicai Zhang
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Junqi Zhang
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Zhongchuan Ouyang
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Fan Zhang
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Ruifeng Sun
- AVIC Changhe Aircraft Industry (Group) Corporation Ltd., Jingdezhen, China
- Jiancheng Charles Ji
- Institute of Intelligent Manufacturing, Shenzhen Polytechnic University, Shenzhen, China
- Wei Chen
- Institute of Intelligent Manufacturing, Shenzhen Polytechnic University, Shenzhen, China