1
Jun WH, Hong YS. Detection of Sleep Posture via Humidity Fluctuation Analysis in a Sensor-Embedded Pillow. Bioengineering (Basel) 2025; 12:480. PMID: 40428098; PMCID: PMC12108824; DOI: 10.3390/bioengineering12050480.
Abstract
This study presents a novel method for detecting sleep posture changes, specifically tossing and turning, by monitoring humidity variations with a linear array of humidity sensors embedded at regular intervals in a memory-foam pillow. Unlike previous approaches that rely primarily on temperature or pressure sensors, the method exploits the observation that humidity fluctuates more markedly during movement than temperature does, enabling more sensitive detection of posture changes; dynamic patterns in the humidity data correlate strongly with physical motion during sleep. A rolling smoothing technique was applied and the cumulative deviation across sensors was quantified to track head movement, and the Pruned Exact Linear Time (PELT) algorithm was used for precise change-point detection, segmenting the time series at abrupt humidity changes. To classify sleep posture, the humidity fluctuation curves were converted into image representations and a transfer-learning model based on a Vision Transformer was trained, achieving a classification accuracy of approximately 96%. These results demonstrate the feasibility of sleep posture analysis from humidity data alone, offering a non-intrusive and effective approach to sleep monitoring.
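The paper's own code is not reproduced here, but the change-point step named in the abstract can be sketched with the open-source ruptures implementation of PELT (an assumption; the authors do not state their implementation). The smoothing window, sensor count, penalty value, and synthetic data below are illustrative only:

```python
# Sketch: rolling smoothing + cumulative deviation + PELT segmentation
# on a humidity array of shape (n_samples, n_sensors). Window, penalty,
# and the synthetic data are illustrative, not the paper's values.
import numpy as np
import ruptures as rpt

def detect_posture_transitions(humidity, window=30, penalty=10.0):
    """Return indices where the pooled humidity signal changes abruptly."""
    kernel = np.ones(window) / window
    # Rolling-mean smoothing of each sensor's time series.
    smoothed = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, humidity)
    # Cumulative deviation across the sensor array at each time step.
    deviation = np.abs(smoothed - smoothed.mean(axis=0)).sum(axis=1)
    # PELT segments the 1-D deviation signal at abrupt changes.
    return rpt.Pelt(model="rbf").fit(deviation).predict(pen=penalty)

rng = np.random.default_rng(0)
signal = rng.normal(50.0, 1.0, (600, 8))   # 600 samples, 8 pillow sensors
signal[300:] += 5.0                        # simulated posture change
print(detect_posture_transitions(signal))  # change points; last index = 600
```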
Affiliation(s)
- Youn-Sik Hong
- Department of Computer Science and Engineering, Incheon National University, Incheon 22012, Republic of Korea
2
Karatas I. Deep learning-based system for prediction of work at height in construction site. Heliyon 2025; 11:e41779. PMID: 39906815; PMCID: PMC11791131; DOI: 10.1016/j.heliyon.2025.e41779.
Abstract
Falling from height (FFH) is a major cause of injuries and fatalities on construction sites, and research has emphasized the role of technological advances in managing FFH safety risks. This study aims to predict whether a worker is operating at an elevated position from accelerometer, gyroscope, and pressure data using deep-learning techniques, so that safety measures for work at height can be implemented quickly. A total of 45 analyses were conducted using DNN, CNN, and LSTM deep-learning models with 5 different window sizes and 3 different overlap rates. The DNN model, using a 1-s window size and a 75% overlap rate, attained an accuracy of 94.6% with a loss of 0.1445, while the CNN model, using a 5-s window size and a 75% overlap rate, achieved an accuracy of 94.9% with a loss of 0.1696. These results address information gaps by efficiently predicting workers' working conditions at height without the need for complex calculations. Implementing this method on construction sites is expected to reduce the risk of FFH and align occupational health and safety practices with technological advancements.
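The window/overlap segmentation the study sweeps over can be sketched as follows. This is an illustration rather than the paper's code, and the 25 Hz sampling rate and channel layout are assumed values:

```python
# Sketch of windowing sensor streams into fixed-size, overlapping windows
# before feeding a DNN/CNN/LSTM. Sampling rate and channels are assumptions.
import numpy as np

def sliding_windows(data, labels, window_s=1.0, overlap=0.75, rate_hz=25):
    """Split (n_samples, n_channels) sensor data into overlapping windows."""
    size = int(window_s * rate_hz)
    step = max(1, int(size * (1.0 - overlap)))
    X, y = [], []
    for start in range(0, len(data) - size + 1, step):
        X.append(data[start:start + size])
        # Label each window by majority vote over its samples.
        vals, counts = np.unique(labels[start:start + size],
                                 return_counts=True)
        y.append(vals[np.argmax(counts)])
    return np.stack(X), np.array(y)

# 1-s windows with 75% overlap, as in the best-performing DNN configuration.
data = np.random.randn(1000, 7)          # accel(3) + gyro(3) + pressure(1)
labels = np.random.randint(0, 2, 1000)   # 1 = working at height
X, y = sliding_windows(data, labels)
print(X.shape, y.shape)                  # (n_windows, 25, 7), (n_windows,)
```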
Affiliation(s)
- Ibrahim Karatas
- Osmaniye Korkut Ata University, Faculty of Engineering and Natural Sciences, Department of Civil Engineering, Osmaniye, Turkey
3
Liu H, Zhao B, Dai C, Sun B, Li A, Wang Z. MAG-Res2Net: a novel deep learning network for human activity recognition. Physiol Meas 2023; 44:115007. PMID: 37939391; DOI: 10.1088/1361-6579/ad0ab8.
Abstract
Objective. Human activity recognition (HAR) has become increasingly important in the healthcare, sports, and fitness domains due to its wide range of applications. However, existing deep-learning-based HAR methods often overlook the challenges posed by the diversity of human activities and by data quality, which can make feature extraction difficult. To address these issues, we propose a new neural network model called MAG-Res2Net, which incorporates the Borderline-SMOTE data upsampling algorithm, a metric-learning-based loss function combination, and the Lion optimization algorithm. Approach. We evaluated the proposed method on two commonly used public datasets, UCI-HAR and WISDM, and leveraged the CSL-SHARE multimodal human activity recognition dataset for comparison with state-of-the-art models. Main results. On the UCI-HAR dataset, our model achieved accuracy, F1-macro, and F1-weighted scores of 94.44%, 94.38%, and 94.26%, respectively. On the WISDM dataset, the corresponding scores were 98.32%, 97.26%, and 98.42%. Significance. The proposed MAG-Res2Net model demonstrates robust multimodal performance, with each module successfully enhancing model capabilities. Additionally, our model surpasses current human activity recognition neural networks in both evaluation metrics and training efficiency. Source code is available at: https://github.com/LHY1007/MAG-Res2Net.
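The Borderline-SMOTE upsampling step named in the abstract can be sketched with the imbalanced-learn library (an assumption; the authors' full pipeline is at the GitHub link above). The toy features below are stand-ins, since MAG-Res2Net's feature extractor is not reproduced here:

```python
# Sketch: Borderline-SMOTE rebalances a minority HAR class by synthesizing
# new samples near the class boundary. Data and sizes are illustrative.
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE

rng = np.random.default_rng(0)
# Imbalanced toy feature set: 900 majority vs 100 minority activity windows.
X = np.vstack([rng.normal(0, 1, (900, 16)), rng.normal(2, 1, (100, 16))])
y = np.array([0] * 900 + [1] * 100)

# Oversample the minority class at the borderline region only.
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))  # [900 100] -> [900 900]
```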
Affiliation(s)
- Hanyu Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
- Boyang Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
- Chubo Dai
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
- Boxin Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
- Ang Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
- Zhiqiong Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, People's Republic of China
4
Liang W, Wang F, Fan A, Zhao W, Yao W, Yang P. Extended Application of Inertial Measurement Units in Biomechanics: From Activity Recognition to Force Estimation. Sensors (Basel) 2023; 23:4229. PMID: 37177436; PMCID: PMC10180901; DOI: 10.3390/s23094229.
Abstract
Abnormal posture or movement is generally an indicator of musculoskeletal injuries or diseases, and mechanical forces dominate the injury and recovery processes of musculoskeletal tissue. Using kinematic data collected from wearable sensors (notably IMUs) as input, machine-learning approaches to activity recognition and musculoskeletal force estimation (typically ground reaction force, joint force/torque, and muscle activity/force) have demonstrated superior accuracy. The purpose of the present study is to summarize recent achievements in the application of IMUs in biomechanics, with an emphasis on activity recognition and mechanical force estimation. The methodologies adopted in such applications, including data pre-processing, noise suppression, classification models, and force/torque estimation models, as well as the corresponding application effects, are reviewed. The extent of IMU applications in daily activity assessment, posture assessment, disease diagnosis, rehabilitation, and exoskeleton control strategy development is illustrated and discussed. More importantly, the technical feasibility of, and application opportunities for, musculoskeletal force prediction using IMU-based wearable devices are highlighted. With the development and application of novel adaptive networks and deep-learning models, the accurate estimation of musculoskeletal forces can become a research field worthy of further attention.
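One pre-processing step this review covers, noise suppression, is commonly done with a zero-phase low-pass filter on the raw IMU channels. A minimal sketch follows; the 100 Hz sampling rate, 5 Hz cutoff, and filter order are illustrative values, not recommendations from the review:

```python
# Sketch: suppress high-frequency noise in one IMU channel with a
# zero-phase Butterworth low-pass filter. Rates/cutoffs are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_imu(signal, cutoff_hz=5.0, rate_hz=100.0, order=4):
    """Zero-phase Butterworth low-pass filter for one IMU channel."""
    b, a = butter(order, cutoff_hz / (rate_hz / 2.0), btype="low")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

t = np.arange(0, 2, 1 / 100.0)                        # 2 s at 100 Hz
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(len(t))
print(lowpass_imu(accel)[:5])                          # denoised samples
```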
Affiliation(s)
- Wenqi Liang
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
- Fanjie Wang
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
- Ao Fan
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
- Wenrui Zhao
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
- Wei Yao
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
- Pengfei Yang
- Key Laboratory for Space Bioscience and Biotechnology, School of Life Sciences, Northwestern Polytechnical University, Xi'an 710072, China
5
Zhang Z, Wang W, An A, Qin Y, Yang F. A human activity recognition method using wearable sensors based on ConvTransformer model. Evolving Systems 2023. DOI: 10.1007/s12530-022-09480-y.
6
Alemayoh TT, Shintani M, Lee JH, Okamoto S. Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors. Sensors (Basel) 2022; 22:7840. PMID: 36298192; PMCID: PMC9612168; DOI: 10.3390/s22207840.
Abstract
Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or two or more devices, such as a special stylus and a smart pad. The high cost of the latter approach motivates a cheaper, standalone smart pen. In this paper, a deep-learning-based compact smart digital pen that recognizes 36 alphanumeric characters was therefore developed. Unlike common methods that employ only inertial data, handwriting recognition is achieved from hand-motion data captured with both inertial and force sensors. The prototype smart pen comprises an ordinary ballpoint ink chamber, three force sensors, a six-channel inertial sensor, a microcomputer, and a plastic barrel structure. Handwritten data for the characters were recorded from six volunteers. After the data were trimmed and restructured, they were used to train four neural networks: a Vision Transformer (ViT), a DNN (deep neural network), a CNN (convolutional neural network), and an LSTM (long short-term memory) network. The ViT network outperformed the others, achieving a validation accuracy of 99.05%. The trained model was further validated in real time, where it showed promising performance. These results will serve as a foundation for extending this investigation to more characters and subjects.
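As an illustration of one of the four baselines the paper trains (the CNN), the sketch below classifies 36 characters from 9-channel pen-motion windows (3 force + 6 inertial channels). The layer sizes and window length are assumptions, not the paper's architecture:

```python
# Sketch: 1-D CNN classifying 36 alphanumeric characters from pen-motion
# windows. Channel layout matches the abstract; sizes are illustrative.
import torch
import torch.nn as nn

class PenCNN(nn.Module):
    def __init__(self, channels=9, n_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = PenCNN()
logits = model(torch.randn(8, 9, 200))  # 8 handwriting windows of 200 steps
print(logits.shape)                     # torch.Size([8, 36])
```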
7
Dirgová Luptáková I, Kubovčík M, Pospíchal J. Wearable Sensor-Based Human Activity Recognition with Transformer Model. Sensors (Basel) 2022; 22:1911. PMID: 35271058; PMCID: PMC8914677; DOI: 10.3390/s22051911.
Abstract
Computing devices that can recognize various human activities or movements can be used to assist people in healthcare, sports, or human-robot interaction. Readily available data for this purpose can be obtained from the accelerometer and gyroscope built into everyday smartphones, so effective classification of real-time activity data is actively pursued with various machine-learning methods. In this study, the transformer model, a deep-learning neural network developed primarily for natural language processing and vision tasks, was adapted for time-series analysis of motion signals. The self-attention mechanism inherent in the transformer, which expresses individual dependencies between signal values within a time series, can match the performance of state-of-the-art convolutional neural networks combined with long short-term memory. The proposed adapted transformer was tested on the largest available public dataset of smartphone motion sensor data, covering a wide range of activities, and obtained an average identification accuracy of 99.2%, compared with 89.67% achieved on the same data by a conventional machine-learning method. The results suggest the future relevance of the transformer model for human activity recognition.
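The adaptation the abstract describes, treating each time step's sensor values as a token for a transformer encoder, can be sketched as follows. Dimensions are illustrative, and this is not the authors' model:

```python
# Sketch: transformer encoder over motion time series. Each time step's
# 6 sensor values (accel + gyro) are embedded, self-attention relates
# time steps, and a pooled representation is classified. A positional
# encoding is omitted for brevity; a real model would add one.
import torch
import torch.nn as nn

class MotionTransformer(nn.Module):
    def __init__(self, n_channels=6, d_model=64, n_classes=6):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))  # mean-pool over time steps

model = MotionTransformer()
print(model(torch.randn(4, 128, 6)).shape)  # torch.Size([4, 6])
```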
8
Li Y, Wang L. Human Activity Recognition Based on Residual Network and BiLSTM. Sensors (Basel) 2022; 22:635. PMID: 35062604; PMCID: PMC8778132; DOI: 10.3390/s22020635.
Abstract
Due to the wide application of human activity recognition (HAR) in sports and health, a large number of deep-learning-based HAR models have been proposed. However, many existing models ignore the effective extraction of spatial and temporal features from human activity data. This paper proposes a deep-learning model based on a residual block and bi-directional LSTM (BiLSTM). The model first automatically extracts spatial features from the multidimensional signals of MEMS inertial sensors using the residual block, then obtains the forward and backward dependencies of the feature sequence using the BiLSTM. Finally, the resulting features are fed into a softmax layer to complete the activity recognition. The optimal parameters of the model were determined experimentally. A homemade dataset containing six common human activities (sitting, standing, walking, running, going upstairs, and going downstairs) was developed. The proposed model was evaluated on this dataset and on two public datasets, WISDM and PAMAP2, achieving accuracies of 96.95%, 97.32%, and 97.15%, respectively. Compared with some existing models, the proposed model delivers better performance with fewer parameters.
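The pipeline the abstract describes, residual convolution for spatial features followed by a BiLSTM for temporal dependencies, can be sketched as below. Layer sizes are illustrative and do not reproduce the paper's exact model:

```python
# Sketch: residual conv block -> BiLSTM -> classifier for 6 activities.
# Channel counts and hidden sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):  # identity shortcut around two convolutions
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class ResBiLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=6):
        super().__init__()
        self.proj = nn.Conv1d(n_channels, hidden, 1)
        self.res = ResBlock1d(hidden)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, time, channels)
        h = self.res(self.proj(x.transpose(1, 2)))  # (batch, hidden, time)
        out, _ = self.bilstm(h.transpose(1, 2))     # (batch, time, 2*hidden)
        return self.fc(out[:, -1])  # logits; softmax is applied in the loss

print(ResBiLSTM()(torch.randn(4, 128, 6)).shape)  # torch.Size([4, 6])
```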
Affiliation(s)
- Yong Li
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Luping Wang
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, China
9
Recognition of Fine-Grained Walking Patterns Using a Smartwatch with Deep Attentive Neural Networks. Sensors (Basel) 2021; 21:6393. PMID: 34640712; PMCID: PMC8511983; DOI: 10.3390/s21196393.
Abstract
Generally, people do various things while walking; for example, they frequently walk while looking at their smartphones. Sometimes we also walk differently than usual; on ice or snow, for instance, we tend to waddle. Understanding walking patterns could provide users with contextual information tailored to the current situation. To formulate this as a machine-learning problem, we defined 18 different everyday walking styles. Noting that walking strategies significantly affect the spatiotemporal features of hand motions, e.g., the speed and intensity of the swinging arm, we propose a smartwatch-based wearable system that can recognize these predefined walking styles. We developed a wearable system, suitable for use with a commercial smartwatch, that captures hand motions in the form of multivariate time-series (MTS) signals. We then employed a set of machine-learning algorithms, including feature-based and recent deep-learning algorithms, to learn the MTS data in a supervised fashion. Experimental results demonstrated that, with recent deep-learning algorithms, the proposed approach successfully recognized a variety of walking patterns from the smartwatch measurements. We analyzed the results with recent attention-based recurrent neural networks to understand the relative contributions of the MTS signals to the classification.
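An attentive recurrent classifier of the kind the abstract analyzes can be sketched as follows: a GRU encodes the smartwatch MTS window, and learned attention weights over time steps expose which parts of the arm-swing signal drive the prediction. The 18 classes mirror the defined walking styles; the channel count and sizes are assumptions:

```python
# Sketch: GRU encoder with temporal attention for 18-way walking-style
# classification from smartwatch MTS windows. Sizes are illustrative.
import torch
import torch.nn as nn

class AttentiveGRU(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=18):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.gru(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        context = (w * h).sum(dim=1)            # weighted temporal summary
        return self.fc(context), w.squeeze(-1)  # logits + weights to inspect

model = AttentiveGRU()
logits, weights = model(torch.randn(2, 100, 6))
print(logits.shape, weights.shape)  # (2, 18) and (2, 100)
```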