1
Benedicto-Rodríguez G, Bosch F, Juan CG, Bonomini MP, Fernández-Caballero A, Fernandez-Jover E, Ferrández-Vicente JM. Understanding Robot Gesture Perception in Children with Autism Spectrum Disorder during Human-Robot Interaction. Int J Neural Syst 2025:2550026. PMID: 40231328. DOI: 10.1142/s0129065725500261.
Abstract
Social robots are increasingly used in therapeutic contexts, especially as a complement to therapy for children with Autism Spectrum Disorder (ASD). The aim of this study is therefore to understand how children with ASD perceive and interpret gestures made by the robot Pepper versus a human instructor, and how this perception is influenced by verbal communication. The study analyzes the impact of both communication conditions (verbal and nonverbal) and gesture types (conversational and emotional) on gesture recognition, measured by accuracy rate, and examines the children's physiological responses recorded with the Empatica E4 device. The results reveal that verbal communication is more accessible to both children with ASD and neurotypical (NT) children, with emotional gestures being more interpretable than conversational ones. The Pepper robot elicited lower emotional arousal than the human instructor in both ASD and NT children. This study highlights the potential of robots like Pepper to support the communication skills of children with ASD, especially through structured and predictable nonverbal gestures. However, the findings also point to challenges, such as the need for more reliable robotic communication methods, and underscore the importance of tailoring interventions to individual needs.
Affiliation(s)
- Facundo Bosch
- Instituto Tecnológico de Buenos Aires, Buenos Aires, Argentina
- Carlos G Juan
- Universidad Politécnica de Cartagena, Murcia 30202, Spain
- Universidad Miguel Hernández de Elche, Instituto de Bioingeniería, Elche (Alicante) 03202, Spain
- Maria Paula Bonomini
- Instituto Tecnológico de Buenos Aires, Buenos Aires, Argentina
- Instituto Argentino de Matemática "Alberto P. Calderón" (IAM), CONICET, Buenos Aires, Argentina
- Eduardo Fernandez-Jover
- CIBER BBN, Universidad Miguel Hernández de Elche, Instituto de Bioingeniería, Elche (Alicante) 03202, Spain
- Jose Manuel Ferrández-Vicente
- Universidad Politécnica de Cartagena, Murcia 30202, Spain
- ECTLab[Formula: see text], European University of Technology, Spain
2
Zhao H, Liu Y, Li X, Chen X, Zhang X. Online and Cross-User Finger Movement Pattern Recognition by Decoding Neural Drive Information from Surface Electromyogram. Int J Neural Syst 2025; 35:2550014. PMID: 39907499. DOI: 10.1142/s0129065725500145.
Abstract
Cross-user variability is a well-known challenge that causes severe performance degradation and undermines the robustness of practical myoelectric control systems. To address this issue, a novel method for myoelectric recognition of finger movement patterns is proposed that combines a neural decoding approach with unsupervised domain adaptation (UDA) learning. In this method, neural decoding is implemented by extracting microscopic features that characterize individual motor unit (MU) activities, obtained from a two-stage online surface electromyogram (sEMG) decomposition. A dedicated deep learning model is first trained on labeled data from a set of existing users and then updates itself adaptively while recognizing the movement patterns of a new user. The final movement pattern is determined by a fuzzy weighted decision strategy. sEMG signals were collected from the finger extensor muscles of 15 subjects performing seven dexterous finger-movement patterns. The proposed method achieved a recognition accuracy of ([Formula: see text])% over the seven movements under cross-user testing scenarios, much higher than that of conventional methods using global sEMG features. This study presents a robust myoelectric pattern recognition approach at a fine-grained MU level, with wide applications in neural interfaces and prosthesis control.
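The fuzzy weighted decision strategy is not detailed in the abstract; the sketch below is one plausible reading, in which each analysis window's class-probability vector votes with a weight given by its decision confidence (here, the margin between its top two probabilities). The margin-based weighting is an assumption for illustration, not the authors' exact fuzzy membership design.

```python
import numpy as np

def fuzzy_weighted_decision(prob_stream):
    """Fuse a stream of per-window class-probability vectors into one
    movement-pattern decision. Each window votes with a weight given by
    its confidence: the margin between its top two class probabilities."""
    probs = np.asarray(prob_stream, dtype=float)   # (n_windows, n_classes)
    sorted_p = np.sort(probs, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]    # per-window confidence
    weights = margins / (margins.sum() + 1e-12)
    fused = weights @ probs                        # confidence-weighted average
    return int(np.argmax(fused)), fused

# Toy stream: three sliding windows voting over 3 movement classes.
windows = [
    [0.10, 0.80, 0.10],   # confident vote for class 1
    [0.40, 0.35, 0.25],   # ambiguous window -> low weight
    [0.05, 0.90, 0.05],   # confident vote for class 1
]
label, fused = fuzzy_weighted_decision(windows)
```

The ambiguous middle window contributes almost nothing to the fused decision, which is the point of confidence weighting over hard majority voting.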
Affiliation(s)
- Haowen Zhao
- School of Microelectronics, University of Science and Technology of China, Hefei, Anhui 230002, P. R. China
- Yunfei Liu
- School of Microelectronics, University of Science and Technology of China, Hefei, Anhui 230002, P. R. China
- Xinhui Li
- School of Computer Science and Technology, Anhui University, Hefei, Anhui 230601, P. R. China
- Xiang Chen
- School of Microelectronics, University of Science and Technology of China, Hefei, Anhui 230002, P. R. China
- Xu Zhang
- School of Microelectronics, University of Science and Technology of China, Hefei, Anhui 230002, P. R. China
3
Meng L, Hu X. Unsupervised Neural Decoding to Predict Dexterous Multi-Finger Flexion and Extension Forces. IEEE J Biomed Health Inform 2025; 29:1959-1969. PMID: 40030548. DOI: 10.1109/jbhi.2024.3510525.
Abstract
Accurate control over individual fingers of robotic hands is essential for the progression of human-robot interaction, making accurate prediction of finger forces imperative. State-of-the-art neural decoders can extract neural signals from surface electromyogram (sEMG) recordings; however, these decoders require labeled data for training, which is challenging to obtain in cases such as limb loss and which limits decoder generalizability. In our study, we extracted motoneuron firing information by decomposing high-density sEMG signals from both finger flexor and extensor muscles. We assigned each neuron a probability, reflecting its association with the targeted fingers, based on its temporal firing-rate distribution. We then employed a probability thresholding and weighting strategy to select and prioritize neurons for finger force prediction. Our results revealed that the unsupervised neural decoder significantly outperformed both the supervised neural decoder and the sEMG-amplitude approach (R2: 0.74 ± 0.028 vs. 0.70 ± 0.028 vs. 0.63 ± 0.031; root mean square error: 6.74 ± 0.60% vs. 8.41 ± 0.56% vs. 10.33 ± 0.59% of maximum force), thereby offering a promising and practical solution for accurate force control. Our results also demonstrated high computational efficiency (96.26 ± 24.16 ms), viable for real-time implementation. The outcome is an unsupervised decoder with simplified data requirements for training, enhanced functionality and adaptability in predicting finger flexion and extension forces, and promise for broader applications in scenarios where force measurement proves challenging.
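A minimal sketch of the probability thresholding and weighting idea, under assumed inputs: per-unit firing-rate curves and a binary task-window mask standing in for the temporal firing-rate-distribution criterion. The association measure and the max-normalization below are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def finger_association(firing_rates, task_mask):
    """Probability that each motor unit is associated with the target
    finger: the fraction of its total firing activity that falls inside
    the finger's task window (an illustrative proxy for the temporal
    firing-rate-distribution criterion)."""
    fr = np.asarray(firing_rates, dtype=float)     # (n_units, n_samples)
    mask = np.asarray(task_mask, dtype=float)      # (n_samples,)
    return (fr * mask).sum(axis=1) / (fr.sum(axis=1) + 1e-12)

def predict_force(firing_rates, probs, threshold=0.5):
    """Probability-thresholded, probability-weighted population firing
    rate, normalized to [0, 1] as an unsupervised force estimate."""
    fr = np.asarray(firing_rates, dtype=float)
    keep = probs >= threshold                      # thresholding step
    drive = probs[keep] @ fr[keep]                 # weighting step
    return drive / (drive.max() + 1e-12)

# Two decomposed units: one fires inside the task window, one mostly outside.
task_mask = np.array([0, 0, 1, 1, 1, 1], dtype=float)
fr = np.array([
    [0.0, 0.5, 4.0, 5.0, 5.0, 4.0],   # target-finger unit
    [3.0, 4.0, 0.5, 0.0, 0.5, 0.0],   # co-activated, non-target unit
])
p = finger_association(fr, task_mask)
force = predict_force(fr, p)
```

The co-activated unit receives a low association probability and is excluded by the threshold, so only the target-finger unit drives the force estimate.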
4
Jiang X, Ma C, Nazarpour K. Plug-and-play myoelectric control via a self-calibrating random forest common model. J Neural Eng 2025; 22:016029. PMID: 39847869. DOI: 10.1088/1741-2552/adada0.
Abstract
Objective. Electromyographic (EMG) signals show large variability over time due to factors such as electrode shifting and user behavior variations, substantially degrading the performance of myoelectric control models in long-term use. Previously, a one-time model calibration was usually required before each use, yet EMG characteristics can change even within a short period of time. Our objective is to develop a self-calibrating model with an automatic and unsupervised self-calibration mechanism. Approach. We developed a computationally efficient random forest (RF) common model, which can (1) be pre-trained and easily adapted to a new user via one-shot calibration, and (2) keep calibrating itself periodically by boosting the RF with new decision trees trained on pseudo-labels of testing samples in a data buffer. Main results. Our model has been validated in offline and real-time, open- and closed-loop, and intra-day and long-term (up to 5 weeks) experiments, with data from 66 non-disabled participants. We also explored the effects of bidirectional user-model co-adaptation in closed-loop experiments. We found that the self-calibrating model gradually improves its performance in long-term use. With visual feedback, users also adapt to the dynamic model while learning to perform hand gestures with significantly lower EMG amplitudes (less muscle effort). Significance. Our RF approach provides a new alternative for myoelectric control built on simple decision trees, which is explainable, computationally efficient, and requires minimal data for model calibration. Source code is available at: https://github.com/MoveR-Digital-Health-and-Care-Hub/self-calibrating-rf.
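The "boost the RF with new decision trees trained on pseudo-labels" step maps naturally onto scikit-learn's `warm_start` mechanism, which appends trees to an existing forest instead of refitting it. The sketch below uses synthetic data and an assumed 0.6 confidence threshold; it illustrates the calibration loop only, not the authors' released implementation (see their repository for that).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pre-train a "common model" on synthetic features from existing users.
X_src = rng.normal(0.0, 1.0, (400, 8))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(int)
rf = RandomForestClassifier(n_estimators=50, warm_start=True, random_state=0)
rf.fit(X_src, y_src)

# A new user's unlabeled data arrives with drift (modeled as a feature offset,
# standing in for electrode shift / behavior variation).
X_new = rng.normal(0.8, 1.0, (200, 8))

# Self-calibration: pseudo-label a buffer of the new user's samples, keep
# only confident ones, and append trees trained on them -- warm_start grows
# the forest rather than retraining from scratch.
proba = rf.predict_proba(X_new)
confident = proba.max(axis=1) > 0.6        # assumed confidence threshold
pseudo = proba.argmax(axis=1)
rf.n_estimators += 25                      # request 25 additional trees
rf.fit(X_new[confident], pseudo[confident])
```

The original 50 trees are untouched; only the 25 appended trees see the pseudo-labeled buffer, which keeps each calibration step cheap.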
Affiliation(s)
- Xinyu Jiang
- School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
- Chenfei Ma
- School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
- Kianoush Nazarpour
- School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
5
Chen J, Li W, Gong K, Lu X, Tong MS, Wang X, Yang GM. Gesture-controlled reconfigurable metasurface system based on surface electromyography for real-time electromagnetic wave manipulation. Nanophotonics 2025; 14:107-119. PMID: 39840391. PMCID: PMC11744455. DOI: 10.1515/nanoph-2024-0572.
Abstract
Gesture recognition plays a significant role in human-machine interaction (HMI) systems. This paper proposes a gesture-controlled reconfigurable metasurface system based on surface electromyography (sEMG) for real-time beam deflection and polarization conversion. By recognizing the sEMG signals of user gestures with a pre-trained convolutional neural network (CNN) model, the system dynamically modulates the metasurface, enabling precise control of the deflection direction and polarization state of electromagnetic waves. Experimental results demonstrate that the proposed system achieves high-precision electromagnetic wave manipulation in response to different gestures. The system has significant potential applications in intelligent device control, virtual reality, and wireless communication, and is expected to advance HMI technology through the integration of more advanced metasurfaces and sEMG processing techniques.
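One illustrative (and entirely hypothetical) realization of the gesture-to-metasurface mapping: a recognized gesture class selects a 1-bit column coding period, and the resulting phase gradient fixes the anomalous-reflection angle via the generalized Snell's law, sin(theta) = lambda / (period * dx). The operating frequency, element pitch, and gesture-to-period table below are assumptions, not values from the paper.

```python
import math

C = 3.0e8                        # speed of light (m/s)
FREQ = 10e9                      # assumed operating frequency: 10 GHz
WAVELENGTH = C / FREQ            # 3 cm
DX = 0.012                       # assumed unit-cell pitch: 12 mm

def deflection_angle(period_elems):
    """Anomalous-reflection angle for a full 2*pi phase gradient spread
    over `period_elems` columns: sin(theta) = lambda / (period * dx)."""
    return math.degrees(math.asin(WAVELENGTH / (period_elems * DX)))

# Hypothetical gesture -> coding-period lookup (not from the paper).
GESTURE_TO_PERIOD = {
    "fist": 4,                   # steep phase gradient -> large deflection
    "open_hand": 6,
    "wave": 8,                   # gentle gradient -> small deflection
}

def control_command(gesture):
    """Map a recognized gesture to a 1-bit column coding pattern and the
    beam deflection angle it implies."""
    period = GESTURE_TO_PERIOD[gesture]
    pattern = [(i // (period // 2)) % 2 for i in range(16)]
    return pattern, deflection_angle(period)

pattern, angle = control_command("fist")
```

In the paper's pipeline, the CNN's gesture label would replace the string key, and the bit pattern would drive the metasurface's switchable elements.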
Affiliation(s)
- Junzai Chen
- College of Electronic and Information Engineering, Tongji University, Shanghai 200092, China
- Weiran Li
- Key Laboratory for Information Science of Electromagnetic Waves, School of Information Science and Technology, Fudan University, Shanghai 200433, China
- Kailuo Gong
- College of Electronic and Information Engineering, Tongji University, Shanghai 200092, China
- Xiaojie Lu
- College of Electronic and Information Engineering, Tongji University, Shanghai 200092, China
- Mei Song Tong
- College of Electronic and Information Engineering, Tongji University, Shanghai 200092, China
- Xiaoyi Wang
- College of Electronic and Information Engineering, Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 200092, China
- Guo-Min Yang
- Key Laboratory for Information Science of Electromagnetic Waves, School of Information Science and Technology, Fudan University, Shanghai 200433, China
6
Ma C, Neri F, Gu L, Wang Z, Wang J, Qing A, Wang Y. Crowd Counting Using Meta-Test-Time Adaptation. Int J Neural Syst 2024; 34:2450061. PMID: 39252679. DOI: 10.1142/s0129065724500618.
Abstract
Machine learning algorithms are commonly used to count people in a crowd quickly and efficiently. Test-time adaptation methods for crowd counting adjust model parameters and employ additional data augmentation to better adapt the model to the specific conditions encountered during testing. The majority of current studies concentrate on unsupervised domain adaptation; these approaches commonly perform hundreds of epochs of training iterations and require a sizable amount of unannotated data from every new target domain in addition to annotated data from the source domain. Unlike these methods, we propose a meta-test-time adaptive crowd counting approach called CrowdTTA, which integrates the concept of test-time adaptation into the meta-learning framework and makes it easier for the counting model to adapt to unknown test distributions. To provide a reliable supervision signal at the pixel level, we introduce uncertainty by inserting a dropout layer into the counting model. The uncertainty is then used to generate valuable pseudo labels, serving as effective supervisory signals for adapting the model. In the meta-learning context, each image can be regarded as one crowd-counting task, and each iteration of our approach is a dual-level optimization process. In the inner update, we employ a self-supervised consistency loss to optimize the model, simulating the parameter updates that occur during the test phase. In the outer update, we update the parameters using images with ground truth, improving the model's performance and making the pseudo labels more accurate in the next iteration. At test time, each input image is used to adapt the model before inference on that image. In comparison with various supervised learning and domain adaptation methods, extensive experiments on diverse datasets showcase the general adaptive capability of our approach across datasets with varying crowd densities and scales.
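The dual-level optimization can be sketched with a toy linear "counter" in place of the density-map CNN: the inner update takes a self-supervised consistency step on two augmented views of the input (simulating test-time adaptation), and the outer update applies a first-order supervised step evaluated at the inner-adapted parameters. The model, losses, and learning rates are illustrative stand-ins for CrowdTTA's actual network and meta-objective.

```python
import numpy as np

def predict(w, x):
    """Toy linear 'counter' standing in for the density-map CNN."""
    return float(w @ x)

def inner_update(w, x, lr=0.1, noise=0.01):
    """Inner (self-supervised) step: pull predictions on two augmented
    views of the same image together (consistency loss)."""
    rng = np.random.default_rng(0)
    x1 = x + rng.normal(0, noise, x.shape)
    x2 = x + rng.normal(0, noise, x.shape)
    diff = predict(w, x1) - predict(w, x2)
    grad = 2.0 * diff * (x1 - x2)          # d/dw of (f(x1) - f(x2))**2
    return w - lr * grad

def outer_update(w, x, y, lr=0.05):
    """Outer (supervised) step: evaluate the error with the inner-adapted
    weights, then update the base weights (first-order meta update)."""
    w_adapted = inner_update(w, x)
    err = predict(w_adapted, x) - y
    return w - lr * 2.0 * err * x

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])        # "ground-truth" counting weights
w = np.zeros(3)
for _ in range(300):                       # each image = one meta-task
    x = rng.normal(0.0, 1.0, 3)            # toy image features
    y = predict(w_true, x)                 # ground-truth count
    w = outer_update(w, x, y)
```

Because the outer loss is evaluated after the inner consistency step, the learned weights are those that remain accurate *after* test-time adaptation, which is the core of the meta-TTA idea.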
Affiliation(s)
- Chaoqun Ma
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, P. R. China
- Ferrante Neri
- NICE Group, School of Computer Science and Electronic Engineering, University of Surrey, Guildford, Surrey GU2 7XH, UK
- Li Gu
- Department of Computer Science and Software Engineering, Concordia University, Montreal, QC H3H 2L9, Canada
- Ziqiang Wang
- Department of Computer Science and Software Engineering, Concordia University, Montreal, QC H3H 2L9, Canada
- Jian Wang
- Faculty of Electric Power Engineering, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Anyong Qing
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, P. R. China
- Yang Wang
- Department of Computer Science and Software Engineering, Concordia University, Montreal, QC H3H 2L9, Canada
7
Meng L, Hu X. Unsupervised neural decoding for concurrent and continuous multi-finger force prediction. Comput Biol Med 2024; 173:108384. PMID: 38554657. DOI: 10.1016/j.compbiomed.2024.108384.
Abstract
Reliable prediction of multi-finger forces is crucial for neural-machine interfaces. Various neural decoding methods have progressed substantially toward accurate motor output prediction. However, most neural decoding is performed in a supervised manner, i.e., the finger forces are needed for model training, which may not be feasible in certain contexts, especially for individuals with an arm amputation. To address this issue, we developed an unsupervised neural decoding approach that predicts multi-finger forces using spinal motoneuron firing information. We acquired high-density surface electromyogram (sEMG) signals from the finger extensor muscle while subjects performed single-finger and multi-finger isometric extension tasks. We first extracted motor units (MUs) from the sEMG signals of the single-finger tasks. Because of inevitable finger muscle co-activation, MUs controlling non-targeted fingers can also be recruited, and these MUs need to be teased out to ensure accurate force prediction. To this end, we clustered the decomposed MUs based on inter-MU distances measured with the dynamic time warping technique, and then labeled the MUs using the mean firing rate or the firing-rate phase amplitude. We merged the clustered MUs related to the same target finger and assigned weights based on the consistency of the retained MUs. As a result, compared with the supervised neural decoding approach and the conventional sEMG-amplitude approach, the new approach achieved a higher R2 (0.77 ± 0.036 vs. 0.71 ± 0.11 vs. 0.61 ± 0.09) and a lower root mean square error (5.16 ± 0.58 %MVC vs. 5.88 ± 1.34 %MVC vs. 7.56 ± 1.60 %MVC). These findings pave the way for accurate and robust neural-machine interfaces that can significantly enhance human-robotic hand interactions in diverse contexts.
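The inter-MU distance step can be illustrated directly, since dynamic time warping is a standard algorithm: the sketch below computes pairwise DTW distances between firing-rate curves and groups them with a simple greedy linkage. The threshold and the linkage rule are assumptions for illustration; the paper's clustering details may differ.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance between
    two 1-D firing-rate curves, with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_by_dtw(curves, threshold):
    """Greedy grouping: a curve joins the first cluster whose founding
    member is within the DTW threshold, else it starts a new cluster."""
    clusters = []
    for c in curves:
        for group in clusters:
            if dtw(c, group[0]) <= threshold:
                group.append(c)
                break
        else:
            clusters.append([c])
    return clusters

# Two target-finger units ramp up together; a co-activated unit stays weak
# and differently shaped, so it lands in its own cluster and can be teased out.
t = np.linspace(0.0, 1.0, 50)
target_a = 10.0 * t**2
target_b = 10.0 * (t + 0.02)**2       # same shape, slight time shift
coactive = 2.0 * t                     # weak, differently shaped
clusters = cluster_by_dtw([target_a, target_b, coactive], threshold=15.0)
```

DTW's warping makes the two time-shifted target curves nearly indistinguishable, which is exactly why it suits firing-rate profiles whose onsets are not perfectly aligned.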
Affiliation(s)
- Long Meng
- Department of Mechanical Engineering, Pennsylvania State University-University Park, PA, USA
- Xiaogang Hu
- Department of Mechanical Engineering, Pennsylvania State University-University Park, PA, USA; Department of Kinesiology, Pennsylvania State University-University Park, PA, USA; Department of Physical Medicine & Rehabilitation, Pennsylvania State Hershey College of Medicine, PA, USA; Huck Institutes of the Life Sciences, Pennsylvania State University-University Park, PA, USA; Center for Neural Engineering, Pennsylvania State University-University Park, PA, USA