1. Saddaf Khan N, Qadir S, Anjum G, Uddin N. StresSense: Real-time detection of stress-displaying behaviors. Int J Med Inform 2024; 185:105401. PMID: 38493546. DOI: 10.1016/j.ijmedinf.2024.105401.
Abstract
BACKGROUND: Wrist-worn devices are ideal for unobtrusively gathering user data in fields such as health and fitness monitoring, communication, and productivity enhancement. They integrate seamlessly into users' daily lives, providing valuable insights without requiring constant attention. In sensitive domains like mental health, they offer a user-friendly, privacy-protected, and cost-effective avenue for diagnosis and treatment.
OBJECTIVES: This study addresses the limitations of traditional mental health assessment techniques, such as intrusive sensing and subjective self-reporting, by harnessing the unobtrusive data collection capabilities of smartphones. Equipped with accelerometers and other motion sensors, these devices offer a novel approach to mental health research. Our objective was to develop methods for real-time detection of stress and boredom behavior markers using smart devices and machine learning algorithms.
METHODOLOGY: We collected data from the accelerometer (A), gyroscope (G), and magnetometer (M) of a smartphone worn on the wrist of the dominant arm, compiling a dataset indicative of stress-related behaviors. After preprocessing and transforming the data from time-series format, we trained a Deep Neural Network (DNN) and several other machine learning models for activity recognition.
FINDINGS: The DNN achieved 93.50% accuracy on test data, outperforming traditional and ensemble machine learning methods across different window sizes, and reached 77.78% accuracy in real time, validating its practical application.
CONCLUSION: This research presents a novel dataset for detecting stress and boredom behaviors using smartphones, reducing reliance on costly devices and offering a more objective assessment. It also proposes a DNN-based method for wrist-worn devices that accurately identifies complex activities associated with stress and boredom, with benefits for privacy and user convenience. This work offers a less intrusive, more user-friendly approach to monitoring mental well-being.
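The methodology above segments wrist-worn A/G/M time series into fixed windows before model training. A minimal numpy sketch of that windowing step follows; the sampling rate, window length, and overlap are illustrative assumptions, not the paper's values:

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, C) multichannel time series into overlapping
    windows of shape (n_windows, win_len, C)."""
    T = signal.shape[0]
    starts = range(0, T - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# 9 channels: accelerometer, gyroscope, magnetometer (3 axes each);
# 10 s of synthetic data at an assumed 50 Hz sampling rate.
fs = 50
x = np.random.randn(10 * fs, 9)
# 2 s windows with 50% overlap
windows = sliding_windows(x, win_len=2 * fs, step=fs)
```

Each window then becomes one training example for the classifier, either as raw samples or after per-window feature extraction.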
Affiliation(s)
- Nida Saddaf Khan: CITRIC Health Data Science Centre, Medical College, Aga Khan University, Stadium Road, P.O. Box 3500, Karachi 74800, Pakistan; Telecommunication Research Lab (TRL), School of Mathematics and Computer Science, Institute of Business Administration, Karachi, Pakistan.
- Saleeta Qadir: National High-Performance Computing Center, Friedrich-Alexander-Universität Erlangen-Nürnberg, Schloßplatz 4, 91054 Erlangen, Germany; Telecommunication Research Lab (TRL), School of Mathematics and Computer Science, Institute of Business Administration, Karachi, Pakistan.
- Gulnaz Anjum: Department of Psychology, University of Oslo, Forskningsveien 3A, Harald Schjelderups hus, 0373 Oslo, Norway.
- Nasir Uddin: School of Computer Science, National University of Computer and Emerging Sciences, Karachi Campus, Pakistan.
2. Nematallah H, Rajan S. Quantitative Analysis of Mother Wavelet Function Selection for Wearable Sensors-Based Human Activity Recognition. Sensors (Basel) 2024; 24:2119. PMID: 38610331. PMCID: PMC11014000. DOI: 10.3390/s24072119.
Abstract
Recent advancements in Internet of Things (IoT) wearable devices such as inertial sensors have increased the demand for precise human activity recognition (HAR) with minimal computational resources. The wavelet transform, with its excellent time-frequency localization characteristics, is well suited to HAR systems. Selecting the mother wavelet function is critical in wavelet analysis, as an optimal choice improves recognition performance. Activity signals exhibit different periodic patterns that discriminate activities from one another, so choosing a mother wavelet whose shape closely resembles the recognized activity's inertial sensor signals significantly impacts recognition performance. This study uses an optimal mother wavelet selection method that combines the wavelet packet transform with the energy-to-Shannon-entropy ratio, together with two classification algorithms: decision trees (DT) and support vector machines (SVM). We examined six mother wavelet families with different numbers of vanishing moments. Our experiments were performed on eight publicly available activities-of-daily-living (ADL) datasets: MHEALTH, WISDM Activity Prediction, HARTH, HARsense, DaLiAc, PAMAP2, REALDISP, and HAR70+. The analysis demonstrated in this paper can serve as a guideline for optimal mother wavelet selection in human activity recognition.
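The energy-to-Shannon-entropy criterion used here is computable directly from a set of wavelet (packet) coefficients: total energy divided by the Shannon entropy of the per-coefficient energy distribution, with higher ratios indicating a better wavelet/signal match. A minimal numpy sketch of the criterion itself (the example coefficient sets are synthetic, not from any dataset):

```python
import numpy as np

def energy_entropy_ratio(coeffs):
    """Energy-to-Shannon-entropy ratio of wavelet (packet) coefficients.
    Higher ratios indicate a mother wavelet better matched to the signal:
    the same energy concentrated in fewer coefficients."""
    c = np.asarray(coeffs, dtype=float)
    energy = np.sum(c ** 2)
    p = c ** 2 / energy          # per-coefficient energy share
    p = p[p > 0]                 # avoid log(0)
    entropy = -np.sum(p * np.log(p))
    return energy / entropy

# A well-matched wavelet concentrates energy in few coefficients and
# therefore scores higher than one that spreads the same energy evenly.
concentrated = np.array([2.0, 0.01, 0.01, 0.01])
spread = np.full(4, 1.0)
```

In the paper's setting, one would compute this ratio per candidate mother wavelet on the decomposed activity signal and keep the wavelet maximizing it.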
Affiliation(s)
- Heba Nematallah: Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
- Sreeraman Rajan: Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
3. Zeng F, Guo M, Tan L, Guo F, Liu X. Wearable Sensor-Based Residual Multifeature Fusion Shrinkage Networks for Human Activity Recognition. Sensors (Basel) 2024; 24:758. PMID: 38339474. PMCID: PMC10857031. DOI: 10.3390/s24030758.
Abstract
Human activity recognition (HAR) based on wearable sensors has emerged as a low-cost key-enabling technology for applications such as human-computer interaction and healthcare. In wearable sensor-based HAR, deep learning is desirable for extracting human activity features. Because human activity has spatiotemporal dynamics, a dedicated deep learning network for recognizing temporally continuous activities is required to improve recognition accuracy and support advanced HAR applications. To this end, a residual multifeature fusion shrinkage network (RMFSN) is proposed. The RMFSN is an improved residual network consisting of a multi-branch framework, a channel attention shrinkage block (CASB), and a classifier network. The multi-branch framework uses a 1D CNN, a lightweight temporal attention mechanism, and a multi-scale feature extraction method to capture diverse activity features across its branches. The CASB automatically selects key features from these diverse features for each activity, and the classifier network outputs the final recognition results. Experimental results show that the accuracies of the proposed RMFSN on the public datasets UCI-HAR, WISDM, and OPPORTUNITY are 98.13%, 98.35%, and 93.89%, respectively. Compared with existing advanced methods, the RMFSN achieves higher accuracy while requiring fewer model parameters.
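The "shrinkage" in residual shrinkage architectures is commonly implemented as soft thresholding, with per-channel thresholds produced by an attention branch. A numpy sketch of the thresholding operation itself (thresholds here are fixed constants rather than learned, and this is a generic illustration, not the RMFSN's exact block):

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrink features toward zero: values within [-tau, tau] are
    zeroed (treated as noise); larger values are reduced by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

features = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
shrunk = soft_threshold(features, tau=0.5)
```

In a learned shrinkage block, `tau` would come from a small attention subnetwork per channel, so noisy channels are suppressed more aggressively than informative ones.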
Affiliation(s)
- Mian Guo: School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou 510660, China (affiliation shared by F. Zeng, L. Tan, and F. Guo)
4. Kuo CT, Lin JJ, Jen KK, Hsu WL, Wang FC, Tsao TC, Yen JY. Human Posture Transition-Time Detection Based upon Inertial Measurement Unit and Long Short-Term Memory Neural Networks. Biomimetics (Basel) 2023; 8:471. PMID: 37887602. PMCID: PMC10604330. DOI: 10.3390/biomimetics8060471.
Abstract
As human-robot interaction becomes more prevalent in industrial and clinical settings, detecting changes in human posture has become increasingly important. While recognizing human actions has been studied extensively, the transition between postures or movements has been largely overlooked. This study explores two deep learning methods, a linear Feedforward Neural Network (FNN) and Long Short-Term Memory (LSTM), for detecting changes in human posture among three movements: standing, walking, and sitting. To enable rapid posture-change detection upon human intention, the authors introduced transition stages as distinct features for identification. During the experiment, the subject wore an inertial measurement unit (IMU) on the right leg to measure joint parameters. The measurements were used to train and test the two networks, and the effect of sampling rate on the LSTM network was also examined. Both methods achieved high detection accuracy, but the LSTM model outperformed the FNN in speed and accuracy, achieving 91% and 95% accuracy for data sampled at 25 Hz and 100 Hz, respectively. Additionally, a network trained on one test subject was able to detect posture changes in other subjects, demonstrating the feasibility of personalized or generalized deep learning models for detecting human intention. At a 100 Hz sampling rate, the posture transition-time error was 0.17 s and the identification accuracy was 94.44%. In summary, this study lays a foundation for the engineering application of digital twins, exoskeletons, and human intention control.
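Treating transition stages as a distinct class requires relabeling frames around posture changes. One simple way to derive such labels from frame-wise posture annotations (an illustrative procedure, not necessarily the authors' exact one) is to mark a margin of frames around each label change as a separate "transition" class:

```python
import numpy as np

def add_transition_labels(labels, margin):
    """Relabel frames within `margin` samples of a posture change as a
    distinct 'transition' class (label -1)."""
    labels = np.asarray(labels)
    out = labels.copy()
    change_points = np.flatnonzero(np.diff(labels)) + 1
    for cp in change_points:
        lo = max(0, cp - margin)
        hi = min(len(labels), cp + margin)
        out[lo:hi] = -1
    return out

# 0 = sitting, 1 = standing; the change occurs at index 5,
# with a 2-frame transition margin on each side.
frames = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
labeled = add_transition_labels(frames, margin=2)
```

A classifier trained on such labels can then report the transition class directly, which is what allows transition *timing* to be estimated rather than only the steady-state posture.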
Affiliation(s)
- Chun-Ting Kuo: Department of Mechanical Engineering, National Taiwan University, Taipei 106319, Taiwan
- Jun-Ji Lin: Department of Mechanical Engineering, National Taiwan University, Taipei 106319, Taiwan
- Kuo-Kuang Jen: Missile and Rocket Research Division, National Chung Shan Institute of Science and Technology, Taoyuan 325204, Taiwan
- Wei-Li Hsu: School and Graduate Institute of Physical Therapy, National Taiwan University, Taipei 106319, Taiwan
- Fu-Cheng Wang: Department of Mechanical Engineering, National Taiwan University, Taipei 106319, Taiwan
- Tsu-Chin Tsao: Mechanical and Aerospace Engineering, Samueli School of Engineering, UCLA, Los Angeles, CA 90095, USA
- Jia-Yush Yen: Department of Mechanical Engineering, National Taiwan University, Taipei 106319, Taiwan; Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
5. Shakerian A, Douet V, Shoaraye Nejati A, Landry R. Real-Time Sensor-Embedded Neural Network for Human Activity Recognition. Sensors (Basel) 2023; 23:8127. PMID: 37836957. PMCID: PMC10575419. DOI: 10.3390/s23198127.
Abstract
This article introduces a novel approach to human activity recognition (HAR) by presenting a sensor that utilizes a real-time embedded neural network. The sensor incorporates a low-cost microcontroller and an inertial measurement unit (IMU), which is affixed to the subject's chest to capture their movements. Through the implementation of a convolutional neural network (CNN) on the microcontroller, the sensor is capable of detecting and predicting the wearer's activities in real-time, eliminating the need for external processing devices. The article provides a comprehensive description of the sensor and the methodology employed to achieve real-time prediction of subject behaviors. Experimental results demonstrate the accuracy and high inference performance of the proposed solution for real-time embedded activity recognition.
6. Okita S, Yakunin R, Korrapati J, Ibrahim M, Schwerz de Lucena D, Chan V, Reinkensmeyer DJ. Counting Finger and Wrist Movements Using Only a Wrist-Worn, Inertial Measurement Unit: Toward Practical Wearable Sensing for Hand-Related Healthcare Applications. Sensors (Basel) 2023; 23:5690. PMID: 37420857. DOI: 10.3390/s23125690.
Abstract
The ability to count finger and wrist movements throughout the day with a nonobtrusive wearable sensor could be useful for hand-related healthcare applications, including rehabilitation after stroke, carpal tunnel syndrome, or hand surgery. Previous approaches have required the user to wear a ring with an embedded magnet or inertial measurement unit (IMU). Here, we demonstrate that it is possible to identify finger and wrist flexion/extension movements from vibrations detected by a wrist-worn IMU alone. We developed an approach called "Hand Activity Recognition through using a Convolutional neural network with Spectrograms" (HARCS) that trains a CNN on the velocity/acceleration spectrograms that finger/wrist movements create. We validated HARCS on wrist-worn IMU recordings from twenty stroke survivors during daily life, with finger/wrist movement occurrences labeled by a previously validated magnetic-sensing algorithm called HAND. The daily number of finger/wrist movements identified by HARCS correlated strongly with the number identified by HAND (R2 = 0.76, p < 0.001). HARCS was also 75% accurate on finger/wrist movements performed by unimpaired participants and labeled using optical motion capture. Overall, ringless sensing of finger/wrist movement occurrence is feasible, although real-world applications may require further accuracy improvements.
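The spectrogram input that HARCS-style pipelines feed to a CNN is a short-time Fourier transform of the IMU signal. A minimal numpy sketch of that preprocessing step (the FFT size, hop, sampling rate, and synthetic "vibration burst" are illustrative assumptions):

```python
import numpy as np

def spectrogram(x, n_fft, hop):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns shape (n_frames, n_fft // 2 + 1)."""
    win = np.hanning(n_fft)
    frames = [x[s:s + n_fft] * win
              for s in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic wrist signal: a 10 Hz vibration burst in the middle of
# 1 s sampled at an assumed 100 Hz.
fs = 100
t = np.arange(fs) / fs
x = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 10 * t), 0.0)
S = spectrogram(x, n_fft=32, hop=16)
```

Each row of `S` is one time frame; stacking such frames per channel yields the image-like input a CNN classifies.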
Affiliation(s)
- Shusuke Okita: Department of Mechanical and Aerospace Engineering, University of California Irvine, Irvine, CA 92697, USA; Department of Anatomy and Neurobiology, University of California Irvine, Irvine, CA 92697, USA
- Roman Yakunin: College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Jathin Korrapati: Department of Electrical Engineering and Computer Science, University of California Berkeley, Berkeley, CA 94720, USA
- Mina Ibrahim: Department of Biomedical Engineering, University of California Irvine, Irvine, CA 92697, USA
- Diogo Schwerz de Lucena: AE Studio, Venice, CA 90291, USA; CAPES Foundation, Ministry of Education of Brazil, Brasilia 70040-020, Brazil
- Vicky Chan: Rehabilitation Services, University of California Irvine, Irvine, CA 92697, USA
- David J Reinkensmeyer: Departments of Mechanical and Aerospace Engineering, Anatomy and Neurobiology, and Biomedical Engineering, University of California Irvine, Irvine, CA 92697, USA
7. Chahoushi M, Nabati M, Asvadi R, Ghorashi SA. CSI-Based Human Activity Recognition Using Multi-Input Multi-Output Autoencoder and Fine-Tuning. Sensors (Basel) 2023; 23:3591. PMID: 37050651. PMCID: PMC10099367. DOI: 10.3390/s23073591.
Abstract
Wi-Fi-based human activity recognition (HAR) has gained considerable attention recently due to its ease of use and the availability of its infrastructure and sensors. Channel state information (CSI) captures how Wi-Fi signals propagate through the environment; using the CSI of signals received from Wi-Fi access points, human activity can be recognized more accurately than with the received signal strength indicator (RSSI). However, in many scenarios and applications, the volume of training data is severely limited by cost, time, or resource constraints. In this study, multiple deep learning models were trained for HAR to achieve an acceptable accuracy level while using less training data than other machine learning techniques require. To do so, a pretrained encoder, trained using only a limited number of data samples, is used for feature extraction. This encoder is then incorporated into the classifier and fine-tuned, together with the classifier's remaining layers, on a fraction of the remaining data. Simulation results show that using only 50% of the training data yields a 20% improvement over the case where the encoder is not used. We also show that with an untrainable (frozen) encoder, an 11% accuracy improvement using 50% of the training data is achievable at a lower complexity level.
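The frozen-encoder variant mentioned above reuses a pretrained feature extractor unchanged and trains only the classifier head. A toy numpy sketch of that idea, with a fixed random projection standing in for the pretrained encoder and synthetic Gaussian "CSI features" standing in for real data (everything here is an illustrative assumption, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" encoder: a fixed nonlinear projection whose weights
# are never updated during classifier training.
W_enc = rng.normal(size=(20, 8))
encode = lambda X: np.tanh(X @ W_enc)

# Toy binary task: two well-separated Gaussian classes in 20-D.
X = np.vstack([rng.normal(-2, 1, (100, 20)), rng.normal(2, 1, (100, 20))])
y = np.r_[np.zeros(100), np.ones(100)]

# Train only the classifier head (logistic regression) on encoded features.
Z = encode(X)
w, b = np.zeros(8), 0.0
for _ in range(300):                       # plain gradient descent
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    g = p - y
    w -= 0.1 * Z.T @ g / len(y)
    b -= 0.1 * g.mean()
acc = np.mean((1 / (1 + np.exp(-(Z @ w + b))) > 0.5) == y)
```

In the paper's fine-tuning setting the encoder weights would instead be unfrozen and updated jointly with the head at a small learning rate.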
Affiliation(s)
- Mahnaz Chahoushi: Cognitive Telecommunication Research Group, Department of Telecommunications, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran 19839 69411, Iran
- Mohammad Nabati: Cognitive Telecommunication Research Group, Department of Telecommunications, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran 19839 69411, Iran
- Reza Asvadi: Cognitive Telecommunication Research Group, Department of Telecommunications, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran 19839 69411, Iran
- Seyed Ali Ghorashi: Cognitive Telecommunication Research Group, Department of Telecommunications, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran 19839 69411, Iran; Department of Computer Science & Digital Technologies, School of Architecture, Computing and Engineering, University of East London, London E16 2RD, UK
8. Huan S, Wu L, Zhang M, Wang Z, Yang C. Radar Human Activity Recognition with an Attention-Based Deep Learning Network. Sensors (Basel) 2023; 23:3185. PMID: 36991896. PMCID: PMC10054704. DOI: 10.3390/s23063185.
Abstract
Radar-based human activity recognition (HAR) provides a non-contact sensing method for scenarios such as human-computer interaction, smart security, and advanced surveillance with privacy protection. Feeding radar-preprocessed micro-Doppler signals into a deep learning (DL) network is a promising approach for HAR. Conventional DL algorithms can achieve high accuracy, but their complex network structures hinder real-time embedded application. In this study, an efficient network with an attention mechanism is proposed. The network decouples the Doppler and temporal features of radar-preprocessed signals according to the feature representation of human activity in the time-frequency domain. Doppler feature representations are extracted sequentially using a one-dimensional convolutional neural network (1D CNN) over a sliding window, and HAR is realized by feeding these Doppler features as a time sequence into an attention-based long short-term memory (LSTM) network. Moreover, activity features are effectively enhanced using an averaged cancellation method, which improves clutter suppression under micro-motion conditions; compared with the traditional moving target indicator (MTI), recognition accuracy improves by about 3.7%. Experiments on two human activity datasets confirm the superiority of our method over traditional methods in expressiveness and computational efficiency. Specifically, our method achieves an accuracy close to 96.9% on both datasets with a more lightweight network structure than algorithms of similar recognition accuracy, showing great potential for real-time embedded HAR.
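Averaged cancellation, as commonly described, suppresses stationary clutter by subtracting the slow-time (pulse-axis) average from each channel, leaving only the time-varying micro-motion component. A minimal numpy sketch on a synthetic signal (the data layout and values are illustrative assumptions):

```python
import numpy as np

def average_cancellation(data):
    """Suppress stationary clutter in slow-time radar data of shape
    (n_pulses, n_channels): subtract each channel's mean over pulses."""
    return data - data.mean(axis=0, keepdims=True)

# Synthetic slow-time data: constant static clutter (5.0) in every
# channel, plus a weak zero-mean micro-motion component in channel 1.
pulses = np.tile(np.array([5.0, 5.0, 5.0]), (8, 1))
micro = np.sin(np.linspace(0, 2 * np.pi, 8, endpoint=False))
pulses[:, 1] += micro
cleaned = average_cancellation(pulses)
```

Channels containing only static clutter cancel to zero, while the micro-motion signature survives unchanged; MTI filters achieve a similar goal with a pulse-to-pulse difference instead of a mean.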
9. Karayaneva Y, Sharifzadeh S, Jing Y, Tan B. Human Activity Recognition for AI-Enabled Healthcare Using Low-Resolution Infrared Sensor Data. Sensors (Basel) 2023; 23:478. PMID: 36617075. PMCID: PMC9824082. DOI: 10.3390/s23010478.
Abstract
This paper explores the feasibility of using low-resolution infrared (LRIR) image streams for human activity recognition (HAR), with potential application in e-healthcare. Two datasets based on synchronized multichannel LRIR sensor systems are considered for a comprehensive study of optimal data acquisition. A novel noise reduction technique is proposed to alleviate horizontal and vertical periodic noise in the 2D spatiotemporal activity profiles created by vectorizing and concatenating the LRIR frames. Two main analysis strategies are explored for HAR: (1) manual feature extraction using texture-based and orthogonal-transformation-based techniques, followed by classification using support vector machines (SVM), random forest (RF), k-nearest neighbors (k-NN), and logistic regression (LR); and (2) a deep neural network (DNN) strategy based on a convolutional long short-term memory (LSTM). The proposed periodic noise reduction technique improves accuracy by up to 14.15% across the different models. In addition, for the first time, the optimal number of sensors, sensor layout, and distance to subjects are studied, with the best results obtained from a single side-mounted sensor at close distance. The models also remain reasonably accurate under sensor displacement, are robust in detecting multiple subjects, and are suitable for data collected in different environments.
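Horizontal and vertical periodic noise in a 2D profile shows up as stripe artifacts. As a simpler stand-in for the paper's proposed technique, a common destriping baseline removes row and column means (adding the grand mean back to preserve the overall offset); a minimal numpy sketch on a synthetic striped profile:

```python
import numpy as np

def destripe(profile):
    """Remove horizontal/vertical stripe artifacts from a 2D profile by
    subtracting row and column means (grand mean added back to keep
    the overall intensity offset)."""
    row = profile.mean(axis=1, keepdims=True)
    col = profile.mean(axis=0, keepdims=True)
    return profile - row - col + profile.mean()

clean = np.zeros((4, 6))                                 # flat scene
stripes = np.outer([1.0, -1.0, 1.0, -1.0], np.ones(6))   # horizontal stripes
restored = destripe(clean + stripes)
```

This baseline removes purely additive row/column offsets exactly; periodic noise with other structure would need frequency-domain filtering, which is closer to what the paper targets.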
Affiliation(s)
- Yordanka Karayaneva: School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
- Sara Sharifzadeh: Faculty of Science and Engineering, Swansea University, Swansea SA2 8PP, UK
- Yanguo Jing: Faculty of Business, Computing and Digital Industries, Leeds Trinity University, Leeds LS18 5HD, UK
- Bo Tan: Faculty of Information Technology and Communication Science, Tampere University, 33100 Tampere, Finland
10. Islam MS, Jannat MKA, Hossain MN, Kim WS, Lee SW, Yang SH. STC-NLSTMNet: An Improved Human Activity Recognition Method Using Convolutional Neural Network with NLSTM from WiFi CSI. Sensors (Basel) 2022; 23:356. PMID: 36616954. PMCID: PMC9823549. DOI: 10.3390/s23010356.
Abstract
Human activity recognition (HAR) has emerged as a significant research area due to its many possible applications, including ambient assisted living, healthcare, and abnormal behaviour detection. Recently, HAR using WiFi channel state information (CSI) has become a predominant approach in indoor environments compared with sensor- and vision-based alternatives, owing to its privacy-preserving qualities, the elimination of carried devices, and its flexibility in capturing motion in both line-of-sight (LOS) and non-line-of-sight (NLOS) settings. Existing deep learning (DL)-based HAR approaches usually extract either temporal or spatial features and lack adequate means to integrate and utilize the two simultaneously, making it challenging to recognize different activities accurately. Motivated by this, we propose a novel DL-based model named spatio-temporal convolution with nested long short-term memory (STC-NLSTMNet), which extracts spatial and temporal features concurrently and automatically recognizes human activity with very high accuracy. The proposed model mainly comprises depthwise separable convolution (DS-Conv) blocks, a feature attention module (FAM), and NLSTM. The DS-Conv blocks extract spatial features from the CSI signal, and the FAM draws attention to the most essential of them. These robust features are fed into the NLSTM to explore the hidden intrinsic temporal features in the CSI signals. The proposed STC-NLSTMNet model is evaluated on two publicly available datasets, Multi-environment and StanWiFi, achieving activity recognition accuracies of 98.20% and 99.88%, respectively. Compared with the best existing method, it improves recognition accuracy by 4% and 1.88% on the two datasets, respectively.
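The parameter savings of depthwise separable convolution, which motivate its use in lightweight models like the one above, are easy to quantify: a standard convolution couples the kernel size to both channel counts, while the separable form splits it into a per-channel depthwise filter plus a 1x1 pointwise mix. A small arithmetic sketch (bias terms ignored; the layer sizes are illustrative):

```python
def conv1d_params(k, c_in, c_out):
    """Parameters of a standard 1-D convolution (bias ignored)."""
    return k * c_in * c_out

def ds_conv1d_params(k, c_in, c_out):
    """Depthwise separable: one k-tap depthwise filter per input channel
    plus a 1x1 pointwise convolution mixing channels."""
    return k * c_in + c_in * c_out

std = conv1d_params(k=5, c_in=64, c_out=128)    # 5*64*128 = 40960
ds = ds_conv1d_params(k=5, c_in=64, c_out=128)  # 5*64 + 64*128 = 8512
```

Here the separable block needs roughly a fifth of the parameters of the standard one, which is the kind of reduction that lets such models run on low-resource devices.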
Affiliation(s)
- Md Shafiqul Islam: Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Mir Kanon Ara Jannat: Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Mohammad Nahid Hossain: Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Woo-Su Kim: Graduate School of Knowledge-Based Technology and Energy, Tech University of Korea, Siheung 15073, Republic of Korea
- Soo-Wook Lee: Kwangwoon Academy, Kwangwoon University, Seoul 01897, Republic of Korea
- Sung-Hyun Yang: Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
11. Wu J, Maurenbrecher H, Schaer A, Becsek B, Awai Easthope C, Chatzipirpiridis G, Ergeneman O, Pané S, Nelson BJ. Human gait-labeling uncertainty and a hybrid model for gait segmentation. Front Neurosci 2022; 16:976594. PMID: 36570841. PMCID: PMC9773262. DOI: 10.3389/fnins.2022.976594.
Abstract
Motion capture systems are widely accepted as ground truth for gait analysis and are used to validate other gait analysis systems. To date, their reliability and limitations in manual labeling of gait events have not been studied.
Objectives: Evaluate manual labeling uncertainty and introduce a hybrid stride detection and gait-event estimation model for autonomous, long-term, and remote monitoring.
Methods: Estimate inter-labeler inconsistencies by computing the limits of agreement. Develop a hybrid model based on dynamic time warping and a convolutional neural network to identify valid strides and eliminate non-stride data in inertial walking data collected by a wearable device, then detect gait events within each valid stride region.
Results: The limits of inter-labeler agreement for the key gait events heel off, toe off, heel strike, and flat foot are 72, 16, 24, and 80 ms, respectively. The hybrid model's classification accuracies for stride and non-stride are 95.16% and 84.48%, respectively. The mean absolute errors for detected heel off, toe off, heel strike, and flat foot are 24, 5, 9, and 13 ms, respectively, compared with the average human labels.
Conclusions: The results show the inherent uncertainty and limits of human gait labeling of motion capture data. The proposed hybrid model performs comparably to human labelers and reliably detects strides and estimates gait events in human gait data.
Significance: This work establishes the foundation for fully automated human gait analysis systems with performance comparable to human labelers.
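The hybrid model pairs dynamic time warping (DTW) with a CNN to separate stride from non-stride segments; DTW scores how well a candidate segment matches a stride template despite timing variations. A minimal numpy DTW sketch (the stride template is an illustrative stand-in, not the authors' template):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two 1-D sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = [0.0, 1.0, 2.0, 1.0, 0.0]             # idealized stride template
stretched = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]  # same shape, warped in time
```

Because DTW aligns samples nonlinearly in time, a time-stretched copy of the template still scores a zero distance, while a dissimilar segment does not; thresholding this distance is one way to reject non-stride data before event detection.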
Affiliation(s)
- Jiaen Wu (corresponding author): Multi-Scale Robotics Lab, ETH Zurich, Zurich, Switzerland; Magnes AG, Zurich, Switzerland
- Chris Awai Easthope: Cereneo Foundation, Center for Interdisciplinary Research (CEFIR), Vitznau, Switzerland
- Salvador Pané: Multi-Scale Robotics Lab, ETH Zurich, Zurich, Switzerland
12. Patiño-Saucedo JA, Ariza-Colpas PP, Butt-Aziz S, Piñeres-Melo MA, López-Ruiz JL, Morales-Ortega RC, De-la-hoz-Franco E. Predictive Model for Human Activity Recognition Based on Machine Learning and Feature Selection Techniques. Int J Environ Res Public Health 2022; 19:12272. PMID: 36231583. PMCID: PMC9565985. DOI: 10.3390/ijerph191912272.
Abstract
Research into assisted living environments, within the area of Ambient Assisted Living (AAL), focuses on generating innovative technology, products, and services that provide medical treatment and rehabilitation to the elderly, with the purpose of increasing the time these people can live independently, whether they suffer from neurodegenerative diseases or disabilities. This key area is responsible for the development of activity recognition systems (ARS), a valuable tool for identifying the types of activities carried out by the elderly and providing them with effective care that allows them to carry out daily activities normally. This article reviews the literature to outline the evolution of the different data mining techniques applied to this health area, presenting the metrics used by researchers in recent experiments.
Affiliation(s)
| | | | - Shariq Butt-Aziz
- Department of Computer Science and IT, University of Lahore, Lahore 44000, Pakistan
| | | | - José Luis López-Ruiz
- Department of Computer Science, University of Jaén, Campus Las Lagunillas, 23071 Jaén, Spain
- Emiro De-la-hoz-Franco
- Department of Computer Science and Electronics, Universidad de la Costa CUC, Barranquilla 080002, Colombia
13. Shafiqul IM, Jannat MKA, Kim JW, Lee SW, Yang SH. HHI-AttentionNet: An Enhanced Human-Human Interaction Recognition Method Based on a Lightweight Deep Learning Model with Attention Network from CSI. Sensors (Basel) 2022; 22:6018. [PMID: 36015776] [PMCID: PMC9414797] [DOI: 10.3390/s22166018]
Abstract
Nowadays, WiFi-based human activity recognition (WiFi-HAR) has gained much attention in indoor environments due to its various benefits, including privacy and security, device-free sensing, and cost-effectiveness. Recognition of human-human interactions (HHIs) using channel state information (CSI) signals is still challenging. Although some deep learning (DL) based architectures have been proposed in this regard, most of them suffer from limited recognition accuracy and are unable to support low-computation-resource devices because they have a large number of model parameters. To address these issues, we propose a dynamic method using a lightweight DL model (HHI-AttentionNet) to automatically recognize HHIs, which significantly reduces the parameters while increasing recognition accuracy. In addition, we present an Antenna-Frame-Subcarrier Attention Mechanism (AFSAM) in our model that enhances its representational capability to recognize HHIs correctly. As a result, the HHI-AttentionNet model focuses on the most significant features, ignores irrelevant features, and reduces the impact of the complexity of the CSI signal. We evaluated the performance of the proposed HHI-AttentionNet model on a publicly available CSI-based HHI dataset collected from 40 individual pairs of subjects who performed 13 different HHIs, and compared it with other existing methods. These comparisons show that HHI-AttentionNet is the best model, providing an average accuracy, F1 score, Cohen's Kappa, and Matthews correlation coefficient of 95.47%, 95.45%, 0.951, and 0.950, respectively, for recognition of 13 HHIs. It outperforms the best existing model's accuracy by more than 4%.
Affiliation(s)
- Islam Md Shafiqul
- Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Korea
- Jin-Woo Kim
- Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Korea
- Soo-Wook Lee
- Kwangwoon Academy, Kwangwoon University, Seoul 01897, Korea
- Sung-Hyun Yang
- Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Korea
14. Echeverria J, Santos OC. Toward Modeling Psychomotor Performance in Karate Combats Using Computer Vision Pose Estimation. Sensors (Basel) 2021; 21:8378. [PMID: 34960464] [PMCID: PMC8709157] [DOI: 10.3390/s21248378]
Abstract
Technological advances enable the design of systems that interact more closely with humans in a multitude of previously unsuspected fields, and martial arts are no exception. From the point of view of modeling human movement in relation to the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined, or at least bounded, and governed by the laws of physics, and whose execution must be learned through continuous practice over time. The literature suggests that artificial intelligence algorithms, such as those used for computer vision, can model the movements performed, so that they can be compared with a good execution and their temporal evolution during learning can be analyzed. We are exploring the application of this approach to model psychomotor performance in Karate combats (called kumites), which are characterized by the explosiveness of their movements. In addition, modeling psychomotor performance in a kumite requires modeling the joint interaction of two participants, while most current research efforts in human movement computing focus on movements performed individually. Thus, in this work, we explore how to apply a pose estimation algorithm to extract the features of some predefined movements of Ippon Kihon kumite (a one-step conventional assault) and compare classification metrics with four data mining algorithms, obtaining high values.
Affiliation(s)
- Jon Echeverria
- Computer Science School, Universidad Nacional de Educación a Distancia (UNED), 28040 Madrid, Spain
- Correspondence:
| | - Olga C. Santos
- aDeNu Research Group, Artificial Intelligence Department, Computer Science School, Universidad Nacional de Educación a Distancia (UNED), 28040 Madrid, Spain;
15. Yen CT, Liao JX, Huang YK. Feature Fusion of a Deep-Learning Algorithm into Wearable Sensor Devices for Human Activity Recognition. Sensors (Basel) 2021; 21:8294. [PMID: 34960388] [PMCID: PMC8706653] [DOI: 10.3390/s21248294]
Abstract
This paper presents a wearable device, fitted on the waist of a participant, that recognizes six activities of daily living (walking, walking upstairs, walking downstairs, sitting, standing, and lying) through a deep-learning human activity recognition (HAR) algorithm. The wearable device comprises a single-board computer (SBC) and six-axis sensors. The deep-learning algorithm employs three parallel convolutional neural networks for local feature extraction, with subsequent concatenation to establish feature fusion models of varying kernel size. By using kernels of different sizes, relevant local features of varying lengths were identified, thereby increasing the accuracy of human activity recognition. For experimental data, the University of California, Irvine (UCI) database and self-recorded data were used separately. The self-recorded data were obtained by having 21 participants wear the device on their waist and perform six common activities in the laboratory; these data were used to verify the proposed deep-learning algorithm and the performance of the wearable device. The accuracies for these six activities in the UCI dataset and in the self-recorded data were 97.49% and 96.27%, respectively, and the accuracies in tenfold cross-validation were 99.56% and 97.46%, respectively. The experimental results successfully verify the proposed convolutional neural network (CNN) architecture, which can be used in rehabilitation assessment for people unable to exercise vigorously.
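The multi-kernel feature-fusion idea described in this abstract can be sketched in plain NumPy. This is an illustrative sketch only: the averaging kernels and max/mean pooling below are stand-ins for the filters and pooling a real CNN would learn, and all names and sizes are assumptions, not the authors' implementation.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution of a single-channel signal."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def multi_kernel_features(x, kernel_sizes=(3, 5, 7)):
    """Extract local features with kernels of several sizes, then
    concatenate the pooled responses (feature fusion)."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k          # placeholder filter; a CNN learns its own
        response = conv1d(x, kernel)
        feats.append([response.max(), response.mean()])
    return np.concatenate(feats)

x = np.sin(np.linspace(0, 6 * np.pi, 100))   # toy one-channel sensor window
f = multi_kernel_features(x)
print(f.shape)  # 3 kernel sizes x 2 pooled statistics = (6,)
```

The design point is that responses from kernels of several lengths are pooled and concatenated, so the final feature vector mixes short- and long-range local patterns.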
Affiliation(s)
- Chih-Ta Yen
- Department of Electrical Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan
- Jia-Xian Liao
- Department of Electrical Engineering, National Formosa University, Yunlin County 632, Taiwan; (J.-X.L.); (Y.-K.H.)
- Yi-Kai Huang
- Department of Electrical Engineering, National Formosa University, Yunlin County 632, Taiwan; (J.-X.L.); (Y.-K.H.)
16. Wilhelm S, Kasbauer J. Exploiting Smart Meter Power Consumption Measurements for Human Activity Recognition (HAR) with a Motif-Detection-Based Non-Intrusive Load Monitoring (NILM) Approach. Sensors (Basel) 2021; 21:8036. [PMID: 34884039] [DOI: 10.3390/s21238036]
Abstract
Numerous approaches exist for disaggregating power consumption data, referred to as non-intrusive load monitoring (NILM). Whereas NILM is primarily used for energy monitoring, we intend to disaggregate a household's power consumption to detect human activity in the residence. Therefore, this paper presents a novel approach for NILM, which uses pattern recognition on the raw power waveform of the smart meter measurements to recognize individual household appliance actions. The presented NILM approach is capable of (near) real-time appliance action detection in a streaming setting, using edge computing. A unique aspect of our approach is that we quantify the disaggregation uncertainty using continuous pattern correlation instead of binary device activity states. Further, we outline how the disaggregated appliance activity data can be used for human activity recognition (HAR). To evaluate our approach, we use a dataset collected from actual households. We show that the developed NILM approach works and that the disaggregation quality depends on the pattern selection and the appliance type. In summary, we demonstrate that it is possible to detect human activity within the residence using a motif-detection-based NILM approach applied to smart meter measurements.
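The motif-correlation idea, scoring appliance activity continuously rather than as a binary on/off state, can be sketched as follows; the kettle signature and all numbers are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def motif_correlation(stream, motif):
    """Slide a known appliance signature (motif) over the power stream and
    return the Pearson correlation at each offset; the continuous score
    quantifies disaggregation uncertainty instead of a binary state."""
    m = len(motif)
    return np.array([np.corrcoef(stream[s:s + m], motif)[0, 1]
                     for s in range(len(stream) - m + 1)])

rng = np.random.default_rng(1)
kettle = np.array([0.0, 2.0, 2.1, 2.0, 0.0])   # hypothetical signature (kW)
base = rng.normal(0.1, 0.01, size=10)          # noisy standby load
stream = np.concatenate([base, kettle + 0.1, base])
scores = motif_correlation(stream, kettle)
print(int(scores.argmax()))  # 10: the offset where the signature occurs
```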
17. Zhang L, Zhu Y, Jiang M, Wu Y, Deng K, Ni Q. Body Temperature Monitoring for Regular COVID-19 Prevention Based on Human Daily Activity Recognition. Sensors (Basel) 2021; 21:7540. [PMID: 34833616] [PMCID: PMC8622194] [DOI: 10.3390/s21227540]
Abstract
Existing wearable systems that use G-sensors to identify daily activities have been widely applied in medical, sports, and military applications, while body temperature, an obvious physical characteristic, has rarely been considered in system design and related HAR applications. In the context of the normalization of COVID-19, the prevention and control of the epidemic has become a top priority, and temperature monitoring plays an important role in the preliminary screening of the population for fever. Therefore, this paper proposes a wearable device embedded with inertial and temperature sensors that applies human activity recognition (HAR) to body surface temperature detection for body temperature monitoring and adjustment by evaluating recognition algorithms. The sensing system consists of an STM32-based microcontroller, a six-axis (accelerometer and gyroscope) sensor, and a temperature sensor to capture raw data from 10 individual participants under 4 different daily activity scenarios. The collected raw data are then pre-processed by signal standardization, data stacking, and resampling. For HAR, several machine learning (ML) and deep learning (DL) algorithms are implemented to classify the activities. To compare the performance of different classifiers on the seven-dimensional dataset with temperature sensing signals, evaluation metrics and algorithm running time are considered; random forest (RF) is found to be the best-performing classifier, with 88.78% recognition accuracy, which is higher than in the absence of temperature data (<78%). In addition, the experimental results show that participants' body surface temperature in dynamic activities was lower compared to sitting, which may contribute to fever cases being missed due to temperature deviations in COVID-19 prevention. According to the individual's activity, epidemic prevention workers can infer the corresponding standard normal body temperature of a patient by referring to the specific values of the mean expectation and variance in the normal distribution curve provided in this paper.
Affiliation(s)
- Lei Zhang
- College of Information Science and Technology, Donghua University, Shanghai 201620, China; (L.Z.); (Y.Z.); (M.J.); (Y.W.); (K.D.)
- Yanjin Zhu
- College of Information Science and Technology, Donghua University, Shanghai 201620, China; (L.Z.); (Y.Z.); (M.J.); (Y.W.); (K.D.)
- Mingliang Jiang
- College of Information Science and Technology, Donghua University, Shanghai 201620, China; (L.Z.); (Y.Z.); (M.J.); (Y.W.); (K.D.)
- Yuchen Wu
- College of Information Science and Technology, Donghua University, Shanghai 201620, China; (L.Z.); (Y.Z.); (M.J.); (Y.W.); (K.D.)
- Kailian Deng
- College of Information Science and Technology, Donghua University, Shanghai 201620, China; (L.Z.); (Y.Z.); (M.J.); (Y.W.); (K.D.)
- Qin Ni
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
18. Afzali Arani MS, Costa DE, Shihab E. Human Activity Recognition: A Comparative Study to Assess the Contribution Level of Accelerometer, ECG, and PPG Signals. Sensors (Basel) 2021; 21:6997. [PMID: 34770303] [DOI: 10.3390/s21216997]
Abstract
Inertial sensors are widely used in the field of human activity recognition (HAR), since this source of information is the most informative time series among non-visual datasets. HAR researchers are actively exploring other approaches and different sources of signals to improve the performance of HAR systems. In this study, we investigate the impact of combining bio-signals with a dataset acquired from inertial sensors on recognizing human daily activities. To this end, we used the PPG-DaLiA dataset, consisting of 3D-accelerometer (3D-ACC), electrocardiogram (ECG), and photoplethysmogram (PPG) signals acquired from 15 individuals while performing daily activities. We extracted hand-crafted time- and frequency-domain features, then applied a correlation-based feature selection approach to reduce the feature-set dimensionality. After introducing early fusion scenarios, we trained and tested random forest models with subject-dependent and subject-independent setups. Our results indicate that combining features extracted from the 3D-ACC signal with the ECG signal improves the classifier's F1-score by 2.72% and 3.00% (from 94.07% to 96.80%, and 83.16% to 86.17%) for the subject-dependent and subject-independent approaches, respectively.
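The correlation-based feature selection step can be sketched as a simple greedy filter; the 0.9 threshold and the toy feature matrix are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def correlation_filter(X, threshold=0.9):
    """Greedily keep features whose absolute correlation with every
    already-kept feature stays below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
redundant = 2.0 * base            # perfectly correlated duplicate feature
novel = rng.normal(size=(100, 1))
X = np.hstack([base, redundant, novel])
print(correlation_filter(X))  # [0, 2]: the duplicate column is dropped
```

Greedy filtering keeps the first feature of each highly correlated group, which is one common way to realize correlation-based dimensionality reduction before training a classifier.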
19. Hannan A, Shafiq MZ, Hussain F, Pires IM. A Portable Smart Fitness Suite for Real-Time Exercise Monitoring and Posture Correction. Sensors (Basel) 2021; 21:6692. [PMID: 34641012] [PMCID: PMC8512175] [DOI: 10.3390/s21196692]
Abstract
Fitness and sport have drawn significant attention in wearable and persuasive computing. Physical activity is worthwhile for health, well-being, and improved fitness levels, and for lowering mental pressure and tension. Nonetheless, during high-power, demanding workouts, there is a high likelihood that physical fitness is seriously affected: jarring motions and improper posture during workouts can lead to temporary or permanent disability. With the advent of technological advances, activity recognition based on wearable sensors has attracted countless studies. Still, a fully portable smart fitness suite has not been industrialized, which is a central need of today's time, especially during the COVID-19 pandemic. Considering the importance of this issue, we propose a fully portable smart fitness suite that lets households carry on their routine exercises without a physical gym trainer or gym environment. The proposed system considers two exercises, i.e., T-bar and bicep curl, with the assistance of a real-time virtual Android application acting as the overall gym trainer. The proposed fitness suite is embedded with gyroscope and EMG sensory modules for performing the above two exercises. It provides alerts on unhealthy, incorrect posture movements over the Android app and guides the user to the best possible posture based on sensor values. A KNN classification model is used for prediction and guidance while the user performs a particular exercise, with the Android-application-based virtual gym trainer giving feedback through a text-to-speech module. The proposed system attained 89% accuracy, which is quite effective given its portability and virtually assisted gym trainer feature.
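A k-nearest-neighbor posture classifier of the kind described can be sketched in a few lines; the feature choices (angular velocity, EMG RMS) and the toy training points are hypothetical, not the authors' data.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify a feature vector by majority vote of the k nearest
    training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]

# hypothetical features: [elbow angular velocity, EMG RMS]
X = np.array([[0.2, 0.1], [0.25, 0.12], [0.9, 0.8], [0.95, 0.75]])
y = np.array([0, 0, 1, 1])   # 0 = correct posture, 1 = jarring motion
print(knn_predict(X, y, np.array([0.9, 0.7]), k=3))  # 1: jarring motion
```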
Affiliation(s)
- Abdul Hannan
- Knowledge Unit of System and Technology, University of Management and Technology, Sialkot 51310, Pakistan
- Correspondence: (A.H.); (F.H.); (I.M.P.)
- Muhammad Zohaib Shafiq
- Department of Computer Science and Engineering, Università di Bologna, 40126 Bologna, Italy;
- Faisal Hussain
- Al-Khwarizmi Institute of Computer Science (KICS), University of Engineering & Technology (UET), Lahore 54890, Pakistan
- Correspondence: (A.H.); (F.H.); (I.M.P.)
- Ivan Miguel Pires
- Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- Escola de Ciências e Tecnologias, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Correspondence: (A.H.); (F.H.); (I.M.P.)
20. Yin C, Chen J, Miao X, Jiang H, Chen D. Device-Free Human Activity Recognition with Low-Resolution Infrared Array Sensor Using Long Short-Term Memory Neural Network. Sensors (Basel) 2021; 21:3551. [PMID: 34065183] [DOI: 10.3390/s21103551]
Abstract
Sensor-based human activity recognition (HAR) has attracted enormous interest due to its wide applications in the Internet of Things (IoT), smart homes, and healthcare. In this paper, a low-resolution infrared array sensor-based HAR approach is proposed using a deep learning framework. The device-free sensing system leverages an 8×8-pixel infrared array sensor to collect the infrared signals, which can ensure users' privacy and effectively reduce the deployment cost of the network. To reduce the influence of temperature variations, a combination of the J-filter noise reduction method and the Butterworth filter is used to preprocess the infrared signals. Long short-term memory (LSTM), a representative recurrent neural network, is utilized to automatically extract characteristics from the infrared signal and build the recognition model. In addition, a real-time HAR interface is designed by embedding the LSTM model. Experimental results show that typical daily activities can be classified with a recognition accuracy of 98.287%. The proposed approach yields a better result than existing machine learning methods, and it provides a low-cost yet promising solution for privacy-preserving scenarios.
21. Chen J, Huang X, Jiang H, Miao X. Low-Cost and Device-Free Human Activity Recognition Based on Hierarchical Learning Model. Sensors (Basel) 2021; 21:2359. [PMID: 33800704] [DOI: 10.3390/s21072359]
Abstract
Human activity recognition (HAR) has been a vital human–computer interaction service in smart homes. It is still a challenging task due to the diversity and similarity of human actions. In this paper, a novel hierarchical deep learning-based methodology equipped with low-cost sensors is proposed for high-accuracy, device-free human activity recognition. ESP8266, as the sensing hardware, was utilized to deploy the WiFi sensor network and collect multi-dimensional received signal strength indicator (RSSI) records. The proposed learning model presents a coarse-to-fine hierarchical classification framework with two-level perception modules. In the coarse-level stage, twelve statistical features of the time–frequency domains were extracted from the RSSI measurements filtered by a Butterworth low-pass filter, and a support vector machine (SVM) model was employed to quickly recognize basic human activities by classifying the signal statistical features. In the fine-level stage, the gated recurrent unit (GRU), a representative type of recurrent neural network (RNN), was applied to address the confused recognition of similar activities. The GRU model realizes automatic multi-level feature extraction from the RSSI measurements and accurately discriminates similar activities. The experimental results show that the proposed approach achieved recognition accuracies of 96.45% and 94.59% for six types of activities in two different environments and performed better than traditional pattern-based methods. The proposed hierarchical learning method provides a low-cost sensor-based HAR framework that enhances recognition accuracy and modeling efficiency.
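The coarse-level stage can be sketched as computing simple time- and frequency-domain statistics per signal window; the particular ten statistics below are illustrative stand-ins, not the paper's exact twelve features.

```python
import numpy as np

def coarse_features(window):
    """Statistical time- and frequency-domain features computed per
    window for a coarse-level classifier such as an SVM."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(), window.std(), window.min(), window.max(),
        np.median(window), np.ptp(window),
        spectrum.argmax(),          # dominant frequency bin
        spectrum.max(), spectrum.mean(), spectrum.std(),
    ])

# toy window: 5 cycles over 128 samples
w = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 128, endpoint=False))
f = coarse_features(w)
print(len(f), int(f[6]))  # 10 features; dominant bin = 5
```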
22. Fu Z, He X, Wang E, Huo J, Huang J, Wu D. Personalized Human Activity Recognition Based on Integrated Wearable Sensor and Transfer Learning. Sensors (Basel) 2021; 21:885. [PMID: 33525538] [PMCID: PMC7865943] [DOI: 10.3390/s21030885]
Abstract
Human activity recognition (HAR) based on wearable devices has attracted increasing attention from researchers as sensor technology has developed in recent years. However, personalized HAR requires high recognition accuracy while maintaining the model's generalization capability, which is a major challenge in this field. This paper presents a compact wireless wearable sensor node, which combines an air pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To address personalized recognition of user activities, we propose a new transfer learning algorithm: a joint probability domain adaptive method with improved pseudo-labels (IPL-JPDA). This method adds an improved pseudo-label strategy to the JPDA algorithm to avoid cumulative errors due to inaccurate initial pseudo-labels. To verify our equipment and method, we used the newly designed sensor node to collect seven daily activities from seven subjects. Nine different HAR models were trained by traditional machine learning and transfer learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The proposed IPL-JPDA algorithm has the best performance among five HAR models, and the average recognition accuracy across different subjects is 93.2%.
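The pseudo-label idea behind such transfer learning can be illustrated in a deliberately simplified, non-JPDA form with a nearest-centroid classifier: label the shifted target data with the source model, keep only the confident pseudo-labels, and re-estimate the class centroids. All data, the shift, and the median-distance confidence rule here are synthetic assumptions, not the paper's algorithm.

```python
import numpy as np

def centroids(X, y):
    """Per-class mean feature vectors."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(X, C):
    """Nearest-centroid labels and distances."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

rng = np.random.default_rng(0)
# source domain: two well-separated activity clusters
Xs = np.concatenate([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
Xt = Xs + 0.8                       # target domain: same activities, per-user shift

C = centroids(Xs, ys)               # source model
labels, dist = predict(Xt, C)       # initial pseudo-labels for the target user
conf = dist < np.median(dist)       # keep only confident pseudo-labels
C_adapt = centroids(Xt[conf], labels[conf])
labels2, _ = predict(Xt, C_adapt)   # personalized predictions
print((labels2 == ys).mean())
```

Filtering out low-confidence pseudo-labels before adaptation is what limits the cumulative error that inaccurate initial pseudo-labels would otherwise introduce.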
Affiliation(s)
- Jian Huang
- Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; (Z.F.); (X.H.); (E.W.); (J.H.); (D.W.)
23. Ahmed Bhuiyan R, Ahmed N, Amiruzzaman M, Islam MR. A Robust Feature Extraction Model for Human Activity Characterization Using 3-Axis Accelerometer and Gyroscope Data. Sensors (Basel) 2020; 20:6990. [PMID: 33297389] [DOI: 10.3390/s20236990]
Abstract
Human Activity Recognition (HAR) using sensors embedded in smartphones and smartwatches has gained popularity, with extensive applications in health care monitoring of elderly people, security, robotics, monitoring employees in industry, and others. However, human behavior analysis using accelerometer and gyroscope data is typically grounded in supervised classification techniques, where models show sub-optimal performance for qualitative and quantitative features. Considering this factor, this paper proposes an efficient, reduced-dimension feature extraction model for human activity recognition. In this feature extraction technique, the Enveloped Power Spectrum (EPS) is used to extract impulse components of the signal using frequency-domain analysis, which is more robust and noise-insensitive. Linear Discriminant Analysis (LDA) is used as a dimensionality reduction procedure to extract the minimum number of discriminant features from the envelope spectrum for human activity recognition (HAR). The extracted features are used for human activity recognition using a Multi-class Support Vector Machine (MCSVM). The proposed model was evaluated on two benchmark datasets, the UCI-HAR and DU-MD datasets, and compared with other state-of-the-art methods, which it outperforms.
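The enveloped-power-spectrum step can be sketched with a Hilbert-transform envelope computed via the FFT; the amplitude-modulated test signal below is an illustrative stand-in for accelerometer data, not the paper's pipeline.

```python
import numpy as np

def envelope_spectrum(x):
    """Power spectrum of the analytic-signal envelope (Hilbert via FFT)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                  # analytic-signal frequency weights
    h[0] = 1
    h[1:n // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    envelope = np.abs(np.fft.ifft(X * h))     # |analytic signal|
    return np.abs(np.fft.rfft(envelope)) ** 2  # enveloped power spectrum

# 50 Hz carrier amplitude-modulated at 4 Hz, 256 samples over 1 s
t = np.linspace(0, 1, 256, endpoint=False)
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 50 * t)
eps = envelope_spectrum(x)
print(int(np.argmax(eps[1:]) + 1))  # 4: the modulation frequency bin
```

The envelope spectrum surfaces the low-frequency impulse/modulation structure (here the 4 Hz bin) that a raw spectrum would bury under the carrier, which is what makes the representation robust for activity signals.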
24. Manivannan A, Chin WCB, Barrat A, Bouffanais R. On the Challenges and Potential of Using Barometric Sensors to Track Human Activity. Sensors (Basel) 2020; 20:6786. [PMID: 33261064] [PMCID: PMC7731380] [DOI: 10.3390/s20236786]
Abstract
Barometers are among the oldest engineered sensors. Historically, they have been primarily used either as environmental sensors to measure atmospheric pressure for weather forecasts or as altimeters for aircraft. With the advent of microelectromechanical system (MEMS)-based barometers and their systematic embedding in smartphones and wearable devices, a vast breadth of new applications for barometers has emerged. For instance, it is now possible to use barometers in conjunction with other sensors to track and identify a wide range of human activity classes. However, the effectiveness of barometers in the growing field of human activity recognition critically hinges on our understanding of the numerous factors affecting the atmospheric pressure, as well as on the properties of the sensor itself: sensitivity, accuracy, variability, etc. This review article thoroughly details all these factors and presents a comprehensive report of the numerous studies dealing with one or more of them in the particular framework of human activity tracking and recognition. In addition, we collected experimental data to illustrate the effects of these factors, which we observed to be in good agreement with the findings in the literature. We conclude this review with suggestions on possible future uses of barometric sensors for the specific purpose of tracking human activities.
Affiliation(s)
- Ajaykumar Manivannan
- Engineering Product Development, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore; (A.M.); (W.C.B.C.)
- Wei Chien Benny Chin
- Engineering Product Development, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore; (A.M.); (W.C.B.C.)
- Alain Barrat
- CNRS, CPT, Aix Marseille University, Université de Toulon, 13009 Marseille, France;
- Tokyo Tech World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Roland Bouffanais
- Engineering Product Development, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore; (A.M.); (W.C.B.C.)
- Correspondence: ; Tel.: +65-6303-6667
25. Fridriksdottir E, Bonomi AG. Accelerometer-Based Human Activity Recognition for Patient Monitoring Using a Deep Neural Network. Sensors (Basel) 2020; 20:6424. [PMID: 33182813] [DOI: 10.3390/s20226424]
Abstract
The objective of this study was to investigate the accuracy of a Deep Neural Network (DNN) in recognizing activities typical for hospitalized patients. A data collection study was conducted with 20 healthy volunteers (10 males and 10 females, age = 43 ± 13 years) in a simulated hospital environment. A single triaxial accelerometer mounted on the trunk was used to measure body movement and recognize six activity types: lying in bed, upright posture, walking, wheelchair transport, stair ascent, and stair descent. A DNN consisting of a three-layer convolutional neural network followed by a long short-term memory layer was developed for this classification problem. Additionally, features were extracted from the accelerometer data to train a support vector machine (SVM) classifier for comparison. The DNN reached 94.52% overall accuracy on the holdout dataset, compared to 83.35% for the SVM classifier. In conclusion, a DNN is capable of recognizing types of physical activity in simulated hospital conditions using data captured by a single triaxial accelerometer. The method described may be used for continuous monitoring of patient activities during hospitalization to provide additional insights into the recovery process.
26. Hao Z, Duan Y, Dang X, Liu Y, Zhang D. Wi-SL: Contactless Fine-Grained Gesture Recognition Uses Channel State Information. Sensors (Basel) 2020; 20:4025. [PMID: 32698482] [DOI: 10.3390/s20144025]
Abstract
In recent years, with the development of wireless sensing technology and the widespread popularity of WiFi devices, human perception based on WiFi has become possible, and gesture recognition has become an active topic in the field of human-computer interaction. As a kind of gesture, sign language is widely used in life. The establishment of an effective sign language recognition system can help people with aphasia and hearing impairment to better interact with the computer and facilitate their daily life. For this reason, this paper proposes a contactless fine-grained gesture recognition method using Channel State Information (CSI), namely Wi-SL. This method uses a commercial WiFi device to establish the correlation mapping between the amplitude and phase difference information of the subcarrier level in the wireless signal and the sign language action, without requiring the user to wear any device. We combine an efficient denoising method to filter environmental interference with an effective selection of optimal subcarriers to reduce the computational cost of the system. We also use K-means combined with a Bagging algorithm to optimize the Support Vector Machine (SVM) classification (KSB) model to enhance the classification of sign language action data. We implemented the algorithms and evaluated them for three different scenarios. The experimental results show that the average accuracy of Wi-SL gesture recognition can reach 95.8%, which realizes device-free, non-invasive, high-precision sign language gesture recognition.
|
27
|
Hossain T, Ahad MAR, Inoue S. A Method for Sensor-Based Activity Recognition in Missing Data Scenario. Sensors (Basel) 2020; 20:E3811. [PMID: 32650486 DOI: 10.3390/s20143811]
Abstract
Sensor-based human activity recognition has many applications in healthcare, elderly smart homes, sports, and related areas, and numerous works recognize human activities from sensor data. However, those works assume clean data with almost no missing values, which is rarely the case in real-life healthcare centers. To address this problem, we explored sensor-based activity recognition when partial data are lost in a random pattern, a realistic missing pattern for sensor data collection. In this paper, we propose a novel method to improve activity recognition under missing data without any data recovery. Initially, we created different percentages of random missing data only in the test data, while training was performed on good-quality data. In our proposed approach, we then explicitly induce different percentages of missing data randomly into the raw training sensor data, so that the model learns to handle the same kind of degradation it will face when classifying activities with missing data in the test module. This approach is plausible for a machine learning model because it learns and predicts within an identical domain. We exploited several time-series statistical features to extract better features for characterizing various human activities, and explored both support vector machine and random forest models for activity classification. We developed a synthetic dataset to empirically evaluate the performance and show that the method can improve recognition accuracy from 80.8% to 97.5%. Afterward, we tested our approach with activities from two challenging benchmark datasets: the human activity sensing consortium (HASC) dataset and the single chest-mounted accelerometer dataset. We examined the method for different missing percentages, varied window sizes, and diverse window sliding widths, and observed improved recognition performance even in the presence of missing data. The achieved results provide persuasive findings on sensor-based activity recognition in the presence of missing data.
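The core idea of the paper, training on data with deliberately induced missingness and extracting statistics that tolerate gaps, can be sketched as follows. The missing fraction, window size, features, and synthetic accelerometer data are all assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def induce_missing(X, frac, rng):
    """Randomly blank out a fraction of raw sensor readings (random pattern)."""
    X = X.copy()
    X[rng.random(X.shape) < frac] = np.nan
    return X

def window_features(X):
    """Per-window time-series statistics that ignore missing samples."""
    return np.column_stack([
        np.nanmean(X, axis=1), np.nanstd(X, axis=1),
        np.nanmin(X, axis=1), np.nanmax(X, axis=1),
    ])

# Synthetic accelerometer windows: 3 activities, 100 windows each, 50 samples per window.
raw = np.vstack([rng.normal(loc=a, scale=0.5, size=(100, 50)) for a in range(3)])
labels = np.repeat(np.arange(3), 100)
Xtr, Xte, ytr, yte = train_test_split(raw, labels, test_size=0.3,
                                      random_state=1, stratify=labels)

# Key idea: train on windows with missingness induced (here 20%), so the model
# sees the same kind of degradation the test data will have.
clf = RandomForestClassifier(n_estimators=100, random_state=1)
clf.fit(window_features(induce_missing(Xtr, 0.2, rng)), ytr)

# Evaluate on test windows that also have 20% of readings missing.
acc = clf.score(window_features(induce_missing(Xte, 0.2, rng)), yte)
print(f"accuracy with 20% missing data: {acc:.3f}")
```

Because the statistics are NaN-aware, no data recovery or imputation step is needed, mirroring the paper's "no data recovery" design.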
|
28
|
Ahmed N, Rafiq JI, Islam MR. Enhanced Human Activity Recognition Based on Smartphone Sensor Data Using Hybrid Feature Selection Model. Sensors (Basel) 2020; 20:s20010317. [PMID: 31935943 PMCID: PMC6983014 DOI: 10.3390/s20010317]
Abstract
Human activity recognition (HAR) techniques play a significant role in monitoring daily activities in areas such as elderly care, investigation, healthcare, sports, and smart homes. Smartphones incorporate a variety of motion sensors, such as accelerometers and gyroscopes, which are widely used inertial sensors capable of identifying different physical states of a human. Smartphone sensor data produce high-dimensional feature vectors for identifying human activities, but not all features contribute equally to the identification process; including all of them leads to the 'curse of dimensionality'. This research proposes a hybrid feature selection method that combines a filter and a wrapper stage, using a sequential floating forward search (SFFS) to extract the features most useful for activity recognition. The selected features are then fed to a multiclass support vector machine (SVM), which builds nonlinear classifiers via the kernel trick for training and testing. We validated our model on a benchmark dataset. The proposed system works efficiently with limited hardware resources and provides satisfactory activity identification.
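The wrapper stage described above can be sketched with scikit-learn. One caveat: the paper uses sequential *floating* forward search (SFFS), while scikit-learn's `SequentialFeatureSelector` implements the plain (non-floating) forward variant, so this is only an approximation of the pipeline on synthetic HAR-like data; all sizes and parameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for smartphone HAR feature vectors: 6 activity classes,
# 10 informative features buried among redundant/noisy ones.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=6, random_state=0)

# Wrapper-style sequential forward selection around a kernel SVM: features are
# added one at a time, keeping whichever addition maximizes cross-validated score.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)

# Compare the multiclass SVM on all features vs. the selected subset.
full = cross_val_score(svm, X, y, cv=3).mean()
sel = cross_val_score(svm, X_sel, y, cv=3).mean()
print(f"all 30 features: {full:.3f}  selected 8: {sel:.3f}")
```

A true SFFS additionally performs conditional backward steps after each forward addition (available, for example, in the `mlxtend` package), which is what lets it escape the nesting effect of plain forward selection.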
Affiliation(s)
- Nadeem Ahmed, Centre for Higher Studies and Research, Bangladesh University of Professionals, Mirpur Cantonment, Dhaka-1216, Bangladesh
- Jahir Ibna Rafiq, Department of Computer Science and Engineering, University of Asia Pacific, 74/A, Green Road, Dhaka-1205, Bangladesh
- Md Rashedul Islam, School of Computer Science and Engineering, University of Aizu, Fukushima 965-8580, Japan; Correspondence: ; Tel.: +81-80-9532-0675
|
29
|
Al-Qaness MAA, Abd Elaziz M, Kim S, Ewees AA, Abbasi AA, Alhaj YA, Hawbani A. Channel State Information from Pure Communication to Sense and Track Human Motion: A Survey. Sensors (Basel) 2019; 19:E3329. [PMID: 31362425 DOI: 10.3390/s19153329]
Abstract
Human motion detection and activity recognition are becoming vital for applications in smart homes. Traditional Human Activity Recognition (HAR) mechanisms use special devices to track human motion, such as cameras (vision-based) and various types of sensors (sensor-based), and are applied in areas such as home security, Human-Computer Interaction (HCI), gaming, and healthcare. However, traditional HAR methods require heavy installation and work only under strict conditions. Recently, wireless signals have been utilized for human motion tracking and HAR in indoor environments. The motion of an object in the test environment causes fluctuations in the Wi-Fi signal reflections at the receiver, and the resulting variations in the received signal can be used to track the motion of an object (e.g., a human) indoors. This phenomenon can be leveraged in the future to improve Internet of Things (IoT) and smart home devices. The main Wi-Fi sensing methods can be broadly categorized as Received Signal Strength Indicator (RSSI), Wi-Fi radar (using Software Defined Radio (SDR)), and Channel State Information (CSI). CSI and RSSI can be considered device-free mechanisms because they do not require cumbersome installation, whereas Wi-Fi radar requires special hardware (e.g., a Universal Software Radio Peripheral (USRP)). Recent studies demonstrate that CSI outperforms RSSI in sensing accuracy due to its stability and rich information. This paper presents a comprehensive survey of recent advances in CSI-based sensing, illustrates its drawbacks, discusses challenges, and offers suggestions for the future of device-free sensing technology.
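The fluctuation-based sensing principle the survey describes, motion perturbing the channel and showing up as variability in the received signal, can be illustrated with a toy sketch. This is not any of the surveyed systems: the CSI amplitude stream is simulated, and the variance threshold is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated CSI amplitude stream for one subcarrier: a static channel with
# measurement noise, then a burst of larger fluctuations while a person moves.
static = 20 + rng.normal(0, 0.05, 400)
motion = 20 + rng.normal(0, 1.0, 200)
amplitude = np.concatenate([static, motion, static])

def moving_std(x, win=50):
    """Sliding-window standard deviation of the CSI amplitude."""
    return np.array([x[i:i + win].std() for i in range(len(x) - win + 1)])

# Device-free detection: flag windows whose amplitude variability exceeds a
# threshold calibrated from the empty-room (static) baseline.
sigma = moving_std(amplitude)
threshold = 3 * moving_std(static).mean()
detected = sigma > threshold
print(f"motion flagged in {detected.mean():.0%} of windows")
```

Real CSI systems replace the toy threshold with denoising, subcarrier selection, and trained classifiers, but the underlying signal, channel variability caused by a moving body, is the same.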
|