1. Mohammadi A, Wang C, Yu T, Tan Y, Choong P, Oetomo D. An Information-Rich and Highly Wearable Soft Sensor System Based on Displacement Myography for Practical Hand Gesture Interfaces. IEEE J Biomed Health Inform 2025;29:3451-3464. [PMID: 40031176] [DOI: 10.1109/jbhi.2025.3533736]
Abstract
Wearable sensors for hand gesture recognition have demonstrated significant potential for creating non-invasive human-machine interfaces. Nonetheless, the trade-off between wearability, practicality and performance constrains their applicability in real-world scenarios. This paper introduces MyoLog, a wearable soft sensor system that utilises forearm muscle deformations for accurate hand gesture recognition. Muscle displacements are captured using an array of magnets and tri-axis magnetometers (displacement myography) integrated into soft, flexible structures that conform to and deform with the shape of the forearm muscles. The high signal-to-noise ratio and sensitivity of the sensor modules in MyoLog produce information-rich signals, enabling the detection and differentiation of a wide spectrum of hand gestures. The study evaluated MyoLog with 9 participants performing 44 diverse gestures, investigating how classification accuracy scales with the number of gestures. On average, participants achieved 97.7%, 91.5%, and 89.1% accuracy when executing 13, 22, and 28 gestures, respectively. To demonstrate the capabilities of MyoLog in practical settings, we explored two potential applications: virtual reality training for laparoscopic surgery and prosthetic hand control. The high wearability of MyoLog, achieved without compromising performance, paves the way for more practical human-machine interaction in diverse applications.
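The abstract does not disclose MyoLog's signal pipeline; as a minimal illustrative sketch of the displacement-myography idea only, the deviation of each tri-axis magnetometer reading from a resting baseline can serve as a per-channel muscle-displacement feature (the function name, units, and baseline scheme below are assumptions, not the authors' implementation):

```python
import numpy as np

def displacement_features(samples, baseline):
    """Per-channel displacement proxy for a magnet/magnetometer array:
    Euclidean deviation of each tri-axis field vector from its resting
    baseline (illustrative sketch, not the MyoLog pipeline).

    samples:  (n_channels, 3) current magnetometer readings (uT)
    baseline: (n_channels, 3) readings captured at rest
    Returns one scalar deviation per channel.
    """
    samples = np.asarray(samples, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    return np.linalg.norm(samples - baseline, axis=1)

rest = np.array([[30.0, 0.0, -40.0], [28.0, 5.0, -39.0]])
now = np.array([[33.0, 4.0, -40.0], [28.0, 5.0, -39.0]])
feat = displacement_features(now, rest)  # channel 0 deformed, channel 1 at rest
```

A gesture classifier would then consume one such feature vector (or a time window of them) per sensor module.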

2. Cao G, Jia S, Wu Q, Xia C. MMG-Based Motion Segmentation and Recognition of Upper Limb Rehabilitation Using the YOLOv5s-SE. Sensors (Basel) 2025;25:2257. [PMID: 40218771] [PMCID: PMC11990975] [DOI: 10.3390/s25072257]
Abstract
Mechanomyography (MMG) is a non-invasive technique for assessing muscle activity by measuring mechanical signals, offering high sensitivity and real-time monitoring capabilities, with many applications in rehabilitation training. Traditional MMG-based motion recognition relies on feature extraction and classifier training, which require segmenting continuous actions, creating challenges for real-time performance and segmentation accuracy. This paper therefore proposes a method for the real-time segmentation and classification of upper limb rehabilitation actions based on the You Only Look Once (YOLO) algorithm, integrating the Squeeze-and-Excitation (SE) attention mechanism to enhance the model's performance. The collected MMG signals were transformed into one-dimensional time-series images, which, after image processing, were divided into training and test sets for the YOLOv5s-SE model. The results demonstrated that the proposed model effectively segmented isolated and continuous MMG motions while performing real-time motion category prediction. In segmentation tasks, the base YOLOv5s model achieved 97.9% precision and 98.0% recall, while the improved YOLOv5s-SE model increased precision to 98.7% (+0.8%) and recall to 98.3% (+0.3%). The model also predicted motion categories with 98.9% accuracy. This method automates the time-domain segmentation of motions, avoiding the manual parameter tuning of traditional approaches, while improving the real-time performance of MMG motion recognition, providing an effective solution for motion analysis in wearable devices.
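The SE attention mechanism named above is a standard, well-defined building block (squeeze, excite, scale). A minimal NumPy sketch over a 1-D feature map is shown below; the randomly initialised weights and shapes are illustrative assumptions and are not the paper's YOLOv5s-SE code:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation over a (channels, length) 1-D feature map.

    Squeeze: global average pooling per channel.
    Excite:  two fully connected layers (ReLU then sigmoid) yield a
             weight in (0, 1) per channel.
    Scale:   each channel is rescaled by its learned weight.
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    z = feature_map.mean(axis=1)             # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))  # excite:  (C,) in (0, 1)
    return feature_map * s[:, None]          # scale channel-wise

rng = np.random.default_rng(0)
C, L, r = 8, 32, 2                  # channels, length, reduction ratio
fm = rng.normal(size=(C, L))
w1 = rng.normal(size=(C // r, C))   # reduction FC (untrained, for illustration)
w2 = rng.normal(size=(C, C // r))   # expansion FC
out = se_block(fm, w1, w2)
```

In the trained detector, these weights are learned, so informative channels of the MMG image are amplified and noisy ones suppressed.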
Affiliation(s)
- Gangsheng Cao
- School of Mechanical and Power Engineering, East China University of Science and Technology, Shanghai 200237, China
- Shen Jia
- School of Mechanical and Power Engineering, East China University of Science and Technology, Shanghai 200237, China
- Qing Wu
- School of Mechanical and Power Engineering, East China University of Science and Technology, Shanghai 200237, China
- Chunming Xia
- School of Mechanical and Power Engineering, East China University of Science and Technology, Shanghai 200237, China
- School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China

3. Suo J, Liu Y, Wang J, Chen M, Wang K, Yang X, Yao K, Roy VAL, Yu X, Daoud WA, Liu N, Wang J, Wang Z, Li WJ. AI-Enabled Soft Sensing Array for Simultaneous Detection of Muscle Deformation and Mechanomyography for Metaverse Somatosensory Interaction. Adv Sci (Weinh) 2024;11:e2305025. [PMID: 38376001] [DOI: 10.1002/advs.202305025]
Abstract
Motion recognition (MR)-based somatosensory interaction technology, which interprets user movements as input instructions, offers a natural approach to human-computer interaction, a critical element for advancing metaverse applications. This work introduces a non-intrusive muscle-sensing wearable device that, in conjunction with machine learning, enables motion-control-based somatosensory interaction with metaverse avatars. To facilitate MR, the proposed device simultaneously detects muscle mechanical activities, including dynamic muscle shape changes and vibrational mechanomyogram signals, using a flexible 16-channel pressure sensor array (weighing ≈0.38 g). Leveraging the rich information from multiple channels, a recognition accuracy of ≈96.06% is achieved in classifying ten lower-limb motions executed by ten human subjects. In addition, this work demonstrates a practical application of muscle-sensing-based somatosensory interaction, using the proposed wearable device for real-time control of avatars in a virtual space. This study provides an alternative to traditional rigid inertial measurement units and electromyography-based methods for accurate human motion capture, which can further broaden the applications of motion-interactive wearable devices for the coming metaverse age.
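The abstract does not detail how the 16-channel array is featurised before classification; a common baseline for multi-channel pressure or MMG time series is per-channel RMS over sliding windows. The sketch below uses this generic baseline with illustrative window and hop sizes, not the authors' pipeline:

```python
import numpy as np

def windowed_rms(signals, win, hop):
    """Per-channel RMS features over sliding windows.

    signals: (n_channels, n_samples) pressure/MMG time series
    win, hop: window length and hop size in samples
    Returns: (n_windows, n_channels) feature matrix, one row per window.
    """
    signals = np.asarray(signals, dtype=float)
    n_ch, n = signals.shape
    starts = range(0, n - win + 1, hop)
    return np.array([np.sqrt((signals[:, s:s + win] ** 2).mean(axis=1))
                     for s in starts])

# Two synthetic channels: one active (5 Hz sinusoid), one quiet.
t = np.linspace(0, 1, 200, endpoint=False)
x = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.zeros_like(t)])
feats = windowed_rms(x, win=50, hop=50)  # 4 windows x 2 channels
```

Each row of `feats` is then a compact per-window summary that a classifier (e.g. the machine-learning model in the paper) can label as one of the lower-limb motions.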
Affiliation(s)
- Jiao Suo
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Yifan Liu
- Dept. of Electrical and Computer Engineering, Michigan State University, MI, 48840, USA
- Jianfei Wang
- The Int. Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China
- Meng Chen
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Keer Wang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Xiaomeng Yang
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Kuanming Yao
- Dept. of Biomedical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Vellaisamy A L Roy
- James Watt School of Engineering, University of Glasgow, Scotland, G12 8QQ, UK
- Xinge Yu
- Dept. of Biomedical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Walid A Daoud
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China
- Na Liu
- Sch. of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China
- Jianping Wang
- Dept. of Computer Science, City University of Hong Kong, Hong Kong, 999077, China
- Zuobin Wang
- The Int. Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China
- Wen Jung Li
- Dept. of Mechanical Engineering, City University of Hong Kong, Hong Kong, 999077, China

4. Mak THA, Liang R, Chim TW, Yip J. A Neural Network Approach for Inertial Measurement Unit-Based Estimation of Three-Dimensional Spinal Curvature. Sensors (Basel) 2023;23:6122. [PMID: 37447971] [DOI: 10.3390/s23136122]
Abstract
The spine is an important part of the human body, so its curvature and shape are closely monitored, and treatment is required if abnormalities are detected. However, current spinal examination mostly relies on two-dimensional static imaging, which provides no real-time information on dynamic spinal behaviour. This study therefore explored an easier and more efficient method, based on machine learning and sensors, for determining the curvature of the spine. Fifteen participants were recruited and performed tests to generate data for training a neural network. The trained network estimated the spinal curvature from the readings of three inertial measurement units, achieving an average absolute error of 0.261161 cm.
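The paper's network and training data are not reproduced here; as a purely geometric sketch of the underlying idea, the pitch angles reported by IMUs spaced along the spine yield a curvature estimate if the instrumented segment is approximated as a circular arc (that simplification, and the sensor spacing, are assumptions for illustration, not the authors' method):

```python
import numpy as np

def arc_curvature(pitch_deg, spacing_m):
    """Estimate curvature from the pitch angles of IMUs placed at equal
    intervals along the spine, approximating the segment between the
    first and last sensor as a circular arc.

    Curvature kappa = change in tangent angle / arc length.
    Returns (kappa in rad/m, radius of curvature in m).
    """
    theta = np.radians(np.asarray(pitch_deg, dtype=float))
    arc_len = spacing_m * (len(theta) - 1)
    kappa = (theta[-1] - theta[0]) / arc_len
    radius = np.inf if kappa == 0 else 1.0 / kappa
    return kappa, radius

# Three IMUs 10 cm apart reading pitch angles of 0, 15 and 30 degrees.
kappa, radius = arc_curvature([0.0, 15.0, 30.0], spacing_m=0.10)
```

A learned model, as in the paper, can capture the non-circular, subject-specific shape that this closed-form approximation misses.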
Affiliation(s)
- T H Alex Mak
- Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong, China
- Ruixin Liang
- Laboratory for Artificial Intelligence in Design, Hong Kong Science Park, New Territories, Hong Kong, China
- T W Chim
- Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong, China
- Joanne Yip
- School of Fashion and Textiles, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China

5. Kimoto A, Fujiyama H, Machida M. A Wireless Multi-Layered EMG/MMG/NIRS Sensor for Muscular Activity Evaluation. Sensors (Basel) 2023;23:1539. [PMID: 36772579] [PMCID: PMC9919115] [DOI: 10.3390/s23031539]
Abstract
A wireless multi-layered sensor that allows electromyography (EMG), mechanomyography (MMG) and near-infrared spectroscopy (NIRS) measurements to be carried out simultaneously is presented. The multi-layered sensor comprises a thin silver electrode, transparent piezo-film and photosensor. EMG and MMG measurements are performed using the electrode and piezo-film, respectively. NIRS measurements are performed using the photosensor. Muscular activity is then analyzed in detail using the three types of data obtained. In experiments, the EMG, MMG and NIRS signals were measured for isometric ramp contraction at the forearm and cycling exercise of the lateral vastus muscle with stepped increments of the load using the layered sensor. The results showed that it was possible to perform simultaneous EMG, MMG and NIRS measurements at a local position using the proposed sensor. It is suggested that the proposed sensor has the potential to evaluate muscular activity during exercise, although the detection of the anaerobic threshold has not been clearly addressed.

6. Neťuková S, Bejtic M, Malá C, Horáková L, Kutílek P, Kauler J, Krupička R. Lower Limb Exoskeleton Sensors: State-of-the-Art. Sensors (Basel) 2022;22:9091. [PMID: 36501804] [PMCID: PMC9738474] [DOI: 10.3390/s22239091]
Abstract
Due to the ever-increasing proportion of older people in the total population and the growing awareness of the importance of protecting workers against physical overload during long, hard work, the idea of supportive exoskeletons has progressed from high-tech fiction to almost commercialized products within the last six decades. Sensors, as part of the perception layer, play a crucial role in enhancing the functionality of exoskeletons by providing real-time data as accurate as possible, generating reliable input for the control layer. The processed sensor data yield information about current limb position, movement intention, and the support needed. With this review article, we clarify which criteria matter for sensors used in exoskeletons and how standard sensor types, such as kinematic and kinetic sensors, are used in lower limb exoskeletons. We also outline the possibilities and limitations of specialised medical signal sensors that detect, e.g., brain or muscle signals, to improve data perception at the human-machine interface. A topic-based literature and product search was conducted to gain the best possible overview of the newest developments, research results, and products in the field. The paper provides an extensive overview of the sensor criteria that need to be considered for use in exoskeletons, as well as a collection of sensors and their placements in current exoskeleton products. Additionally, the article points out several types of sensors detecting physiological or environmental signals that might benefit future exoskeleton development.