1.
Zhou Z, Wang S, Zhang S, Pan X, Yang H, Zhuang Y, Lu Z. Deep learning-based spinal canal segmentation of computed tomography image for disease diagnosis: A proposed system for spinal stenosis diagnosis. Medicine (Baltimore) 2024; 103:e37943. [PMID: 38701305] [PMCID: PMC11062721] [DOI: 10.1097/md.0000000000037943] [Received: 09/15/2023] [Accepted: 03/29/2024] [Indexed: 05/05/2024]
Abstract
BACKGROUND Lumbar disc herniation has long been regarded as an age-related degenerative disease. Nevertheless, emerging reports highlight a discernible shift, with these conditions increasingly prevalent among younger individuals. METHODS This study introduces a novel deep learning methodology tailored for spinal canal segmentation and disease diagnosis, emphasizing image processing techniques that exploit essential image attributes such as gray levels, texture, and statistical structure to refine segmentation accuracy. RESULTS Analysis reveals a progressive increase in the size of vertebrae and intervertebral discs from the cervical to the lumbar region. Vertebrae, which bear weight and safeguard the spinal cord and nerves, are interconnected by intervertebral discs, resilient structures that counteract spinal pressure. Experimental findings demonstrate no pronounced anteroposterior bending during flexion and extension, with displacement and rotation angles consistently approximating zero. This consistency maintains uniform anterior and posterior vertebral heights, coupled with parallel intervertebral disc heights, in line with theoretical expectations. CONCLUSIONS Accuracy was assessed with two metrics, IoU and Dice: the average IoU is 88% and the average Dice is 96.4%. The proposed deep learning-based system shows promising results in spinal canal segmentation, laying a foundation for precise stenosis diagnosis in computed tomography images and contributing to advances in the understanding and treatment of spinal pathology.
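The two overlap metrics reported in this abstract are standard for segmentation work and easy to reproduce. A minimal numpy sketch for binary masks (the toy masks below are illustrative, not the paper's data):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return float(2 * inter / total) if total else 1.0

# Toy example: two overlapping 4x4 square masks on an 8x8 grid.
a = np.zeros((8, 8)); a[2:6, 2:6] = 1   # 16 pixels
b = np.zeros((8, 8)); b[3:7, 3:7] = 1   # 16 pixels, 9-pixel overlap
print(iou(a, b))   # 9 / 23 ≈ 0.391
print(dice(a, b))  # 18 / 32 = 0.5625
```

Dice always exceeds IoU on partial overlaps, which is consistent with the paper reporting 96.4% Dice alongside 88% IoU.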
Affiliation(s)
- Zhiyi Zhou: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Shenjun Wang: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Shujun Zhang: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Xiang Pan: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Haoxia Yang: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Yin Zhuang: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Zhengfeng Lu: Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
2.
Liu H, Zhang H, Lee J, Xu P, Shin I, Park J. Motor Interaction Control Based on Muscle Force Model and Depth Reinforcement Strategy. Biomimetics (Basel) 2024; 9:150. [PMID: 38534835] [DOI: 10.3390/biomimetics9030150] [Received: 01/14/2024] [Revised: 02/08/2024] [Accepted: 02/20/2024] [Indexed: 03/28/2024]
Abstract
Current motion interaction models suffer from insufficient motion fidelity and a lack of self-adaptation to complex environments. To address this, this study constructed a human motion control model based on a muscle force model and a staged particle swarm algorithm, and on that basis used a deep deterministic policy gradient algorithm to build a motion interaction control model combining the muscle force model with a deep reinforcement strategy. Empirical analysis of the proposed human motion control model revealed that its joint trajectory correlation and muscle activity correlation were higher than those of comparative models, reaching up to 0.90 and 0.84, respectively. In addition, this study validated the effectiveness of the motion interaction control model using the deep reinforcement strategy: in a mixed-obstacle environment, the model reached its desired results after 1.1 × 10³ training runs and achieved a walking distance of 423 m, outperforming the other models. In summary, the proposed motor interaction control model using the muscle force model and deep reinforcement strategy has higher motion fidelity and can realize autonomous decision making and adaptive control in complex environments. It can provide a theoretical reference for improving motion control and realizing intelligent motion interaction.
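The "muscle force model" in work of this kind is commonly a Hill-type model: active force scaled by force-length and force-velocity curves plus a passive elastic term. A minimal sketch under standard textbook assumptions; the Gaussian force-length shape, Hill hyperbola constant, and passive coefficients below are illustrative choices, not this paper's parameters:

```python
import numpy as np

def hill_muscle_force(a, l_norm, v_norm, f_max=1000.0):
    """Hill-type muscle force (N).

    a      -- activation in [0, 1]
    l_norm -- fiber length / optimal fiber length
    v_norm -- shortening velocity / max shortening velocity (>= 0)
    """
    a = float(np.clip(a, 0.0, 1.0))
    # Active force-length: bell curve peaking at optimal length.
    fl = np.exp(-((l_norm - 1.0) ** 2) / 0.45)
    # Hill force-velocity: force drops hyperbolically with shortening speed.
    v = float(np.clip(v_norm, 0.0, 1.0))
    fv = (1.0 - v) / (1.0 + v / 0.25)
    # Passive elastic force, engaged only beyond optimal length.
    fp = 0.05 * (np.exp(5.0 * (l_norm - 1.0)) - 1.0) if l_norm > 1.0 else 0.0
    return f_max * (a * fl * fv + fp)

print(hill_muscle_force(1.0, 1.0, 0.0))  # → 1000.0 (max isometric force)
```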
Affiliation(s)
- Hongyan Liu: Department of Marine Convergence Design Engineering, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
- Hanwen Zhang: Department of Marine Convergence Design Engineering, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
- Junghee Lee: Department of Marine Convergence Design Engineering, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
- Peilong Xu: Department of Artificial Intelligence Convergence, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
- Incheol Shin: Department of Artificial Intelligence Convergence, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
- Jongchul Park: Department of Marine Convergence Design Engineering, Pukyong National University, 45, Yongso-ro, Nam-Gu, Busan 48513, Republic of Korea
3.
Chen Z, Min H, Wang D, Xia Z, Sun F, Fang B. A Review of Myoelectric Control for Prosthetic Hand Manipulation. Biomimetics (Basel) 2023; 8:328. [PMID: 37504216] [PMCID: PMC10807628] [DOI: 10.3390/biomimetics8030328] [Received: 05/29/2023] [Revised: 07/14/2023] [Accepted: 07/19/2023] [Indexed: 07/29/2023]
Abstract
Myoelectric control for prosthetic hands is an important topic in the field of rehabilitation. Intuitive and intelligent myoelectric control can help amputees to regain upper limb function. However, current research efforts are primarily focused on developing rich myoelectric classifiers and biomimetic control methods, limiting prosthetic hand manipulation to simple grasping and releasing tasks, while rarely exploring complex daily tasks. In this article, we conduct a systematic review of recent achievements in two areas, namely, intention recognition research and control strategy research. Specifically, we focus on advanced methods for motion intention types, discrete motion classification, continuous motion estimation, unidirectional control, feedback control, and shared control. In addition, based on the above review, we analyze the challenges and opportunities for research directions of functionality-augmented prosthetic hands and user burden reduction, which can help overcome the limitations of current myoelectric control research and provide development prospects for future research.
Affiliation(s)
- Ziming Chen: Laboratory for Embedded System and Intelligent Robot, Wuhan University of Science and Technology, Wuhan 430081, China
- Huasong Min: Laboratory for Embedded System and Intelligent Robot, Wuhan University of Science and Technology, Wuhan 430081, China
- Dong Wang: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Ziwei Xia: School of Engineering and Technology, China University of Geosciences, Beijing 100083, China
- Fuchun Sun: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Bin Fang: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
4.
Kalaivani K, Kshirsagarr PR, Sirisha Devi J, Bandela SR, Colak I, Nageswara Rao J, Rajaram A. Prediction of biomedical signals using deep learning techniques. J Intell Fuzzy Syst 2023. [DOI: 10.3233/jifs-230399] [Indexed: 03/30/2023]
Abstract
The electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG) are all very useful diagnostic techniques. The widespread availability of mobile devices plus the declining cost of ECG, EEG, and EMG sensors provide a unique opportunity for making this kind of study widely available. The fundamental need for enhancing a country’s healthcare industry is the ability to foresee the plethora of ailments with which people are now being diagnosed. It’s no exaggeration to say that heart disease is one of the leading causes of mortality and disability in the world today. Diagnosing heart disease is a difficult process that calls for much training and expertise. Electrocardiogram (ECG) signal is an electrical signal produced by the human heart and used to detect the human heartbeat. Emotions are not simple phenomena, yet they do have a major impact on the standard of living. All of these mental processes including drive, perception, cognition, creativity, focus, attention, learning, and decision making are greatly influenced by emotional states. Electroencephalogram (EEG) signals react instantly and are more responsive to changes in emotional states than peripheral neurophysiological signals. As a result, EEG readings may disclose crucial aspects of a person’s emotional states. The signals generated by electromyography (EMG) are gaining prominence in both clinical and biological settings. Differentiating between neuromuscular illnesses requires a reliable method of detection, processing, and classification of EMG data. This study investigates potential deep learning applications by constructing a framework to improve the prediction of cardiac-related diseases using electrocardiogram (ECG) data, furnishing an algorithmic model for sentiment classification utilizing EEG data, and forecasting neuromuscular disease classification utilizing EMG signals.
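A first step shared by ECG, EEG, and EMG pipelines of the kind described above is slicing the signal into overlapping windows and extracting simple time-domain features before any classifier sees the data. A generic numpy sketch; the window length, step, and feature set are illustrative, not this paper's configuration:

```python
import numpy as np

def window_features(sig, fs, win_s=0.2, step_s=0.1):
    """Slide a window over a 1-D biosignal and extract three
    classic time-domain features per window: mean absolute value
    (MAV), root mean square (RMS), and zero-crossing count (ZC)."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(sig) - win + 1, step):
        w = sig[start:start + win]
        mav = np.mean(np.abs(w))
        rms = np.sqrt(np.mean(w ** 2))
        zc = np.sum(np.diff(np.signbit(w)) != 0)
        feats.append((mav, rms, zc))
    return np.array(feats)

# 1 s of a 10 Hz test tone sampled at 1 kHz stands in for a biosignal.
fs = 1000
t = np.arange(fs) / fs
X = window_features(np.sin(2 * np.pi * 10 * t), fs)
print(X.shape)  # (9, 3): 9 windows of 3 features each
```

The resulting feature matrix is what would be fed to the deep models the abstract mentions.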
5.
Khadivar F, Mendez V, Correia C, Batzianoulis I, Billard A, Micera S. EMG-driven shared human-robot compliant control for in-hand object manipulation in hand prostheses. J Neural Eng 2022; 19. [PMID: 36384035] [DOI: 10.1088/1741-2552/aca35f] [Received: 05/07/2022] [Accepted: 11/16/2022] [Indexed: 11/17/2022]
Abstract
Objective. The limited functionality of hand prostheses remains one of the main reasons behind their lack of wide adoption by amputees. Indeed, while commercial prostheses can perform a reasonable number of grasps, they are often inadequate for manipulating the object once in hand. This lack of dexterity drastically restricts the utility of prosthetic hands. We aim at investigating a novel shared control strategy that combines autonomous control of forces exerted by a robotic hand with electromyographic (EMG) decoding to perform robust in-hand object manipulation. Approach. We conduct a three-day longitudinal study with eight healthy subjects controlling a 16-degrees-of-freedom robotic hand to insert objects in boxes of various orientations. EMG decoding from forearm muscles enables subjects to move, proportionally and simultaneously, the fingers of the robotic hand. The desired object rotation is inferred using two EMG electrodes placed on the shoulder that record the activity of muscles responsible for elevation and depression. During the object interaction phase, the autonomous controller stabilizes and rotates the object to achieve the desired pose. In this study, we compare an incremental and a proportional shoulder-decoding method in combination with two state machine interfaces offering different levels of assistance. Main results. Results indicate that robotic assistance reduces the number of failures by 41% and, when combined with incremental shoulder EMG decoding, leads to faster task completion time (median = 16.9 s) compared with other control conditions. Training to use the assistive device is fast: after one session of practice, all subjects managed to achieve tasks with 50% fewer failures. Significance. Shared control approaches that give some authority to an autonomous controller on board the prosthesis are an alternative to control schemes relying on EMG decoding alone. This may improve the dexterity and versatility of robotic prosthetic hands for people with trans-radial amputation. By delegating force control to the prosthesis' on-board controller, one speeds up reaction time and improves the precision of force control. Such a shared control mechanism may enable amputees to perform fine insertion tasks solely with their prosthetic hands, restoring some of the functionality of the disabled arm.
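The proportional finger control described here typically starts from an EMG amplitude envelope: rectify, smooth, normalize against a calibrated maximum contraction, and map to a command. A minimal sketch; the moving-average smoother, MVC normalization, and deadband threshold are illustrative stand-ins for the paper's actual decoder:

```python
import numpy as np

def emg_envelope(emg, fs, smooth_s=0.15):
    """Rectify and smooth raw EMG into an amplitude envelope."""
    rect = np.abs(emg - np.mean(emg))   # remove offset, full-wave rectify
    n = max(1, int(smooth_s * fs))
    kernel = np.ones(n) / n             # moving-average low-pass
    return np.convolve(rect, kernel, mode="same")

def proportional_command(envelope, mvc, deadband=0.05):
    """Map an envelope to a [0, 1] command, normalized by a
    maximum-voluntary-contraction (MVC) level, with a deadband
    so resting noise does not move the hand."""
    cmd = np.clip(envelope / mvc, 0.0, 1.0)
    cmd[cmd < deadband] = 0.0
    return cmd

# Synthetic trace: 0.5 s rest followed by a 0.5 s contraction burst.
fs = 1000
sig = np.concatenate([np.zeros(500),
                      np.sin(2 * np.pi * 50 * np.arange(500) / fs)])
cmd = proportional_command(emg_envelope(sig, fs), mvc=0.7)
```

At rest the command stays pinned at zero by the deadband; during the burst it rises toward full scale in proportion to contraction intensity.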
Affiliation(s)
- Farshad Khadivar: LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Vincent Mendez: Neuro X Institute, École Polytechnique Fédérale de Lausanne, 1202 Genève, Switzerland
- Carolina Correia: formerly LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Iason Batzianoulis: formerly LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Aude Billard: LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Silvestro Micera: Neuro X Institute, École Polytechnique Fédérale de Lausanne, 1202 Genève, Switzerland; BioRobotics Institute and Department of Excellence in Robotics and AI, 56127 Pisa, Italy
6.
Zhang S, Lu J, Huo W, Yu N, Han J. Estimation of knee joint movement using single-channel sEMG signals with a feature-guided convolutional neural network. Front Neurorobot 2022; 16:978014. [PMID: 36386394] [PMCID: PMC9640579] [DOI: 10.3389/fnbot.2022.978014] [Received: 06/25/2022] [Accepted: 09/28/2022] [Indexed: 11/24/2022]
Abstract
Estimating human motion intention, such as intended joint torque and movement, plays a crucial role in assistive robotics for ensuring efficient and safe human-robot interaction. For coupled human-robot systems, the surface electromyography (sEMG) signal has proven an effective means for estimating a human's intended movements. Usually, joint movement estimation uses sEMG signals measured from multiple muscles and thus needs many sEMG sensors placed on the human body, which may cause discomfort or result in mechanical/signal interference from wearable robots or the environment during long-term routine use. Although the muscle synergy principle implies that it is possible to estimate human motion using sEMG signals from even a single muscle, few studies have investigated the feasibility of continuous motion estimation based on single-channel sEMG. In this study, a feature-guided convolutional neural network (FG-CNN) is proposed to estimate human knee joint movement using single-channel sEMG. In the proposed FG-CNN, several handcrafted features are fused into a CNN model to guide CNN feature extraction, and both the handcrafted and the CNN-extracted features are fed to a regression model, i.e., random forest regression, to estimate knee joint movements. Experiments with 8 healthy subjects were carried out, and sEMG signals measured from 6 muscles, i.e., vastus lateralis, vastus medialis, biceps femoris, semitendinosus, and lateral or medial gastrocnemius (LG or MG), were separately evaluated for knee joint estimation using the proposed method. The experimental results demonstrate that the proposed FG-CNN method with single-channel sEMG signals from LG or MG can effectively estimate human knee joint movements; the average correlation coefficient between measured and estimated knee joint movements is 0.858 ± 0.085 for LG and 0.856 ± 0.057 for MG. Meanwhile, comparative studies showed that the combined handcrafted-CNN features outperform either the handcrafted or the CNN features alone, and that the performance of the proposed single-channel sEMG-based FG-CNN method is comparable to that of traditional multi-channel sEMG-based methods. These outcomes enable the development of a single-channel sEMG-based human-robot interface for knee joint movement estimation, which can facilitate the routine use of assistive robots.
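The correlation coefficients reported here (0.858 for LG, 0.856 for MG) are Pearson correlations between measured and estimated trajectories. For reference, with illustrative knee-angle-like data:

```python
import numpy as np

def pearson_r(measured: np.ndarray, estimated: np.ndarray) -> float:
    """Pearson correlation coefficient between two trajectories."""
    m = measured - measured.mean()
    e = estimated - estimated.mean()
    return float((m @ e) / np.sqrt((m @ m) * (e @ e)))

# A noisy estimate of a sinusoidal knee-angle trajectory (synthetic data).
t = np.linspace(0, 2 * np.pi, 500)
truth = 30 + 25 * np.sin(t)                        # degrees
rng = np.random.default_rng(0)
estimate = truth + rng.normal(0, 5, t.size)        # 5-degree estimation noise
print(round(pearson_r(truth, estimate), 3))
```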
Affiliation(s)
- Song Zhang: College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
- Jiewei Lu: College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
- Weiguang Huo: College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
- Ningbo Yu: College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, China
- Jianda Han: College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, China
- Correspondence: Ningbo Yu
7.
Lomelin-Ibarra VA, Gutierrez-Rodriguez AE, Cantoral-Ceballos JA. Motor Imagery Analysis from Extensive EEG Data Representations Using Convolutional Neural Networks. Sensors (Basel) 2022; 22:6093. [PMID: 36015854] [PMCID: PMC9414220] [DOI: 10.3390/s22166093] [Received: 07/11/2022] [Revised: 08/06/2022] [Accepted: 08/12/2022] [Indexed: 05/28/2023]
Abstract
Motor imagery is a complex mental task that represents muscular movement without the execution of muscular action, involving cognitive processes of motor planning and sensorimotor proprioception of the body. Since the mental task has similar behavior to that of the motor execution process, it can be used to create rehabilitation routines for patients with some motor skill impairment. However, due to the nature of this mental task, its execution is complicated. Hence, the classification of these signals in scenarios such as brain-computer interface systems tends to have a poor performance. In this work, we study in depth different forms of data representation of motor imagery EEG signals for distinct CNN-based models as well as novel EEG data representations including spectrograms and multidimensional raw data. With the aid of transfer learning, we achieve results up to 93% accuracy, exceeding the current state of the art. However, although these results are strong, they entail the use of high computational resources to generate the samples, since they are based on spectrograms. Thus, we searched further for alternative forms of EEG representations, based on 1D, 2D, and 3D variations of the raw data, leading to promising results for motor imagery classification that still exceed the state of the art. Hence, in this work, we focus on exploring alternative methods to process and improve the classification of motor imagery features with few preprocessing techniques.
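The spectrogram representations this abstract compares against raw-data inputs come from a short-time Fourier transform. A bare-bones numpy version; the window length, hop size, and sampling rate below are illustrative, not the paper's settings:

```python
import numpy as np

def spectrogram(sig, fs, win=256, hop=64):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns (freqs, times, |STFT|) for a 1-D signal."""
    w = np.hanning(win)
    frames = [sig[i:i + win] * w
              for i in range(0, len(sig) - win + 1, hop)]
    stft = np.fft.rfft(np.array(frames), axis=1)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    times = (np.arange(len(frames)) * hop + win / 2) / fs
    return freqs, times, np.abs(stft).T

# A 10 Hz alpha-band test tone, 4 s at 128 Hz (a common EEG rate).
fs = 128
t = np.arange(4 * fs) / fs
f, tt, S = spectrogram(np.sin(2 * np.pi * 10 * t), fs)
print(f[np.argmax(S.mean(axis=1))])  # → 10.0, the tone frequency
```

As the abstract notes, generating such images for every trial is computationally heavy, which motivates the paper's cheaper 1D/2D/3D raw-data representations.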
8.
Mendez SP, Gherardini M, Santos GVDP, Munoz DM, Ayala HVH, Cipriani C. Data-Driven Real-Time Magnetic Tracking Applied to Myokinetic Interfaces. IEEE Trans Biomed Circuits Syst 2022; 16:266-274. [PMID: 35316192] [DOI: 10.1109/tbcas.2022.3161133] [Indexed: 06/14/2023]
Abstract
A new concept of human-machine interface to control hand prostheses, the myokinetic control interface, has recently been proposed; it is based on the displacements of multiple magnets implanted in the residual muscles of the limb. In previous works, magnet localization was achieved through an optimization procedure that finds an approximate solution to an analytical model. To simplify and speed up the localization problem, here we employ machine learning models, namely linear and radial basis function artificial neural networks, which can translate measured magnetic information into desired commands for active prosthetic devices. They were developed offline and then implemented on field-programmable gate arrays using customized floating-point operators. We optimized computational precision, execution time, hardware, and energy consumption, as these are essential features in the context of wearable devices. When used to track a single magnet in a mockup of the human forearm, the proposed data-driven strategy achieved a tracking accuracy of 720 μm 95% of the time and a latency of 12.07 μs. The proposed system architecture is expected to be more power-efficient than previous solutions. The outcomes of this work encourage further research on extending the devised methods to deal with multiple magnets simultaneously.
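The radial-basis-function mapping used here can be sketched in its simplest form: fix Gaussian centers, then solve a linear least-squares problem for the output weights. The synthetic "sensor reading to magnet position" data below is purely illustrative, not the paper's field model:

```python
import numpy as np

class RBFRegressor:
    """Gaussian radial basis function network fit by linear least squares."""
    def __init__(self, centers, width):
        self.c, self.w = centers, width

    def _phi(self, X):
        # Pairwise squared distances from samples to RBF centers.
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.w ** 2))

    def fit(self, X, y):
        self.coef, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.coef

# Illustrative stand-in: recover a 1-D "position" from 2-D "sensor" readings.
rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, (200, 1))
sensors = np.hstack([np.sin(3 * pos), np.cos(2 * pos)])  # fake field readings
centers = rng.uniform(-1.2, 1.2, (25, 2))
model = RBFRegressor(centers, width=0.5).fit(sensors, pos.ravel())
err = np.abs(model.predict(sensors) - pos.ravel()).max()
```

Because prediction is just a distance computation, an exponential, and a matrix product, such a model maps cleanly onto the FPGA implementation the abstract describes.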
9.
Luu DK, Nguyen AT, Jiang M, Drealan MW, Xu J, Wu T, Tam WK, Zhao W, Lim BZH, Overstreet CK, Zhao Q, Cheng J, Keefer EW, Yang Z. Artificial Intelligence Enables Real-Time and Intuitive Control of Prostheses via Nerve Interface. IEEE Trans Biomed Eng 2022; 69:3051-3063. [PMID: 35302937] [DOI: 10.1109/tbme.2022.3160618] [Indexed: 11/10/2022]
Abstract
OBJECTIVE The next-generation prosthetic hand that moves and feels like a real hand requires a robust neural interconnection between the human mind and machines. METHODS Here we present a neuroprosthetic system that demonstrates that principle by employing an artificial intelligence (AI) agent to translate the amputee's movement intent through a peripheral nerve interface. The AI agent is based on a recurrent neural network (RNN) and can simultaneously decode six degrees of freedom (DOF) from multichannel nerve data in real time. The decoder's performance is characterized in motor decoding experiments with three human amputees. RESULTS First, we show that the AI agent enables amputees to intuitively control a prosthetic hand with individual finger and wrist movements with up to 97-98% accuracy. Second, we demonstrate the AI agent's real-time performance by measuring the reaction time and information throughput in a hand gesture matching task. Third, we investigate the AI agent's long-term use and show the decoder's robust predictive performance over a 16-month implant duration. CONCLUSION AND SIGNIFICANCE Our study demonstrates the potential of AI-enabled nerve technology, underlining the next generation of dexterous and intuitive prosthetic hands.
10.
Thomson CJ, Clark GA, George JA. A Recurrent Neural Network Provides Stable Across-Day Prosthetic Control for a Human Amputee with Implanted Intramuscular Electromyographic Recording Leads. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6171-6174. [PMID: 34892525] [DOI: 10.1109/embc46164.2021.9629580] [Indexed: 06/14/2023]
Abstract
Upper-limb prosthetic control is often challenging and non-intuitive, leading up to 50% of prosthesis users to abandon their prostheses. Convolutional neural networks (CNN) and recurrent long short-term memory (LSTM) networks have shown promise in extracting high-degree-of-freedom motor intent from myoelectric signals, thereby providing more intuitive and dexterous prosthetic control. An important next consideration for these algorithms is whether performance remains stable over multiple days. Here we introduce a new LSTM network and compare its performance to previously established state-of-the-art algorithms, a CNN and a modified Kalman filter (MKF), in offline analyses using 76 days of intramuscular recordings from one amputee participant collected over 425 calendar days. Specifically, we assessed the robustness of each algorithm over time by training on data from the first one, five, ten, 30, or 60 days and then testing on myoelectric signals from the last 16 days. Results indicate that training on additional datasets from prior days generally decreases the root mean squared error (RMSE) of intended and unintended movements for all algorithms. Across all algorithms trained with 60 days of data, the lowest RMSE for unintended movements was achieved with the LSTM. The LSTM also showed less across-day variance in the RMSE of unintended movements relative to the other algorithms. Altogether, this work suggests that the LSTM algorithm introduced here can provide more intuitive and dexterous control for prosthesis users, and that training on multiple days of data improves overall performance on subsequent days, at least in offline analyses.
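The stability comparison above rests on the RMSE between intended and decoded trajectories, summarized per day and then across days. For reference, with toy numbers (the trajectories and day labels are illustrative):

```python
import numpy as np

def rmse(intended: np.ndarray, decoded: np.ndarray) -> float:
    """Root mean squared error between two movement trajectories."""
    return float(np.sqrt(np.mean((intended - decoded) ** 2)))

# Toy across-day evaluation: per-day error, its mean, and its
# across-day spread (the quantity the LSTM minimized in the paper).
days = {
    "day1": (np.array([0.0, 0.5, 1.0]), np.array([0.1, 0.4, 0.9])),
    "day2": (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.7, 1.1])),
}
per_day = [rmse(y, yhat) for y, yhat in days.values()]
print(np.mean(per_day), np.std(per_day))
```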
11.
Nguyen AT, Drealan MW, Khue Luu D, Jiang M, Xu J, Cheng J, Zhao Q, Keefer EW, Yang Z. A portable, self-contained neuroprosthetic hand with deep learning-based finger control. J Neural Eng 2021; 18. [PMID: 34571503] [DOI: 10.1088/1741-2552/ac2a8d] [Received: 06/21/2021] [Accepted: 09/27/2021] [Indexed: 01/07/2023]
Abstract
Objective. Deep learning-based neural decoders have emerged as the prominent approach to enable dexterous and intuitive control of neuroprosthetic hands. Yet few studies have materialized the use of deep learning in clinical settings due to its high computational requirements. Approach. Recent advancements in edge computing devices bring the potential to alleviate this problem. Here we present the implementation of a neuroprosthetic hand with embedded deep learning-based control. The neural decoder is based on a recurrent neural network architecture and deployed on the NVIDIA Jetson Nano, a compact yet powerful edge computing platform for deep learning inference. This enables the implementation of the neuroprosthetic hand as a portable and self-contained unit with real-time control of individual finger movements. Main results. A pilot study with a transradial amputee was conducted to evaluate the proposed system using peripheral nerve signals acquired from implanted intrafascicular microelectrodes. The preliminary results show the system's capability of providing robust, high-accuracy (95%-99%) and low-latency (50-120 ms) control of individual finger movements in various laboratory and real-world environments. Conclusion. This work is a technological demonstration of modern edge computing platforms enabling the effective use of deep learning-based neural decoders for neuroprosthesis control as an autonomous system. Significance. The proposed system helps pioneer the deployment of deep neural networks in clinical applications, underlying a new class of wearable biomedical devices with embedded artificial intelligence. Clinical trial registration: DExterous Hand Control Through Fascicular Targeting (DEFT). Identifier: NCT02994160.
Affiliation(s)
- Anh Tuan Nguyen: Biomedical Engineering, University of Minnesota, Minneapolis, MN, United States of America; Fasikl Incorporated, Minneapolis, MN, United States of America
- Markus W Drealan: Biomedical Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Diu Khue Luu: Biomedical Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Ming Jiang: Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Jian Xu: Biomedical Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Jonathan Cheng: Plastic Surgery, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Qi Zhao: Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States of America
- Edward W Keefer: Nerves Incorporated, Dallas, TX, United States of America; Fasikl Incorporated, Minneapolis, MN, United States of America
- Zhi Yang: Biomedical Engineering, University of Minnesota, Minneapolis, MN, United States of America; Fasikl Incorporated, Minneapolis, MN, United States of America
12.
Smirnov Y, Smirnov D, Popov A, Yakovenko S. Solving musculoskeletal biomechanics with machine learning. PeerJ Comput Sci 2021; 7:e663. [PMID: 34541309] [PMCID: PMC8409332] [DOI: 10.7717/peerj-cs.663] [Received: 03/12/2021] [Accepted: 07/16/2021] [Indexed: 06/13/2023]
Abstract
Deep learning is a relatively new computational technique for describing musculoskeletal dynamics. The experimental relationships of muscle geometry in different postures are high-dimensional spatial transformations that can be approximated by relatively simple functions, which opens the opportunity for machine learning (ML) applications. In this study, we challenged general ML algorithms with the problem of approximating the posture-dependent moment arm and muscle length relationships of the human arm and hand muscles. We used two types of algorithms, a light gradient boosting machine (LGB) and a fully connected artificial neural network (ANN), to solve the wrapping kinematics of 33 muscles spanning up to six degrees of freedom (DOF) each, for an arm and hand model with 18 DOFs. The input-output training and testing datasets, where joint angles were the input and muscle lengths and moment arms were the output, were generated by our previous phenomenological model based on autogenerated polynomial structures. Both models achieved a similar level of error: ANN model errors were 0.08 ± 0.05% for muscle lengths and 0.53 ± 0.29% for moment arms, and the LGB model made similar errors, 0.18 ± 0.06% and 0.13 ± 0.07%, respectively. The LGB model reached the training goal with only 10³ samples, while the ANN required 10⁶ samples; however, LGB models were about 39 times slower than ANN models in evaluation. The sufficient performance of the developed models demonstrates the future applicability of ML for musculoskeletal transformations in a variety of applications, such as advanced powered prosthetics.
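The claim that posture-dependent muscle geometry can be "approximated by relatively simple functions" is easy to illustrate with an ordinary polynomial fit of a moment-arm curve over joint angle. The curve below is synthetic, not taken from the paper's phenomenological model:

```python
import numpy as np

# Synthetic elbow-flexor moment arm (cm) vs. joint angle (rad) -- illustrative.
angle = np.linspace(0, 2.0, 100)
moment_arm = 2.0 + 1.5 * np.sin(angle) - 0.3 * angle ** 2

# Fit a cubic polynomial: a "relatively simple function" of posture.
coeffs = np.polyfit(angle, moment_arm, deg=3)
approx = np.polyval(coeffs, angle)

rel_err = np.abs(approx - moment_arm).max() / np.abs(moment_arm).max()
print(f"max relative error: {rel_err:.4%}")
```

A sub-percent fit from four coefficients is the same phenomenon, at toy scale, that lets the paper's LGB and ANN models reach fractions-of-a-percent error on the full 18-DOF geometry.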
Affiliation(s)
- Yaroslav Smirnov
- Department of Electronic Engineering, Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, Ukraine
- Denys Smirnov
- Department of Computer-aided Management and Data Processing Systems, Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, Ukraine
- Anton Popov
- Department of Electronic Engineering, Igor Sikorsky Kyiv Polytechnic Institute, Kyiv, Ukraine
- Data & Analytics, Ciklum, Kyiv, Ukraine
- Sergiy Yakovenko
- Department of Human Performance—Exercise Physiology, School of Medicine, West Virginia University, Morgantown, West Virginia, United States
- Department of Biomedical Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, West Virginia, United States
- Rockefeller Neuroscience Institute, School of Medicine, West Virginia University, Morgantown, West Virginia, United States
- Mechanical and Aerospace Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, West Virginia, United States
- Department of Neuroscience, School of Medicine, West Virginia University, Morgantown, West Virginia, United States
13
Nasr A, Bell S, He J, Whittaker RL, Jiang N, Dickerson CR, McPhee J. MuscleNET: mapping electromyography to kinematic and dynamic biomechanical variables by machine learning. J Neural Eng 2021; 18. [PMID: 34352741 DOI: 10.1088/1741-2552/ac1adc]
Abstract
Objective. This paper proposes machine learning models for mapping surface electromyography (sEMG) signals to regression of joint angle, joint velocity, joint acceleration, joint torque, and activation torque. Approach. The regression models, collectively known as MuscleNET, take one of four forms: ANN (forward artificial neural network), RNN (recurrent neural network), CNN (convolutional neural network), and RCNN (recurrent convolutional neural network). Inspired by conventional biomechanical muscle models, delayed kinematic signals were used along with sEMG signals as the machine learning models' input; specifically, the CNN and RCNN were modeled with novel configurations for these input conditions. The models' inputs contain either raw or filtered sEMG signals, which allowed evaluation of the filtering capabilities of the models. The models were trained on experimental data from human subjects and evaluated on data from different individuals. Main results. Results were compared in terms of regression error (using the root mean square) and model computation delay. The results indicate that the RNN (with filtered sEMG signals) and the RCNN (with raw sEMG signals), both with delayed kinematic data, can extract underlying motor control information (such as joint activation torque or joint angle) from sEMG signals in pick-and-place tasks. The CNNs and RCNNs were able to filter raw sEMG signals. Significance. All forms of MuscleNET were found to map sEMG signals within 2 ms, fast enough for real-time applications such as the control of exoskeletons or active prostheses. The RNN model with filtered sEMG and delayed kinematic signals is particularly appropriate for applications in musculoskeletal simulation and biomechatronic device control.
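The delayed-kinematics input scheme described in this abstract can be sketched as a feature-construction step: each training sample pairs the current sEMG channels with a joint angle from several time steps earlier. The signals, channel count, and delay below are synthetic placeholders, not the study's data.

```python
import numpy as np

def build_inputs(semg, joint_angle, delay):
    """Stack current sEMG samples with the joint angle delayed by `delay`
    steps, mimicking the delayed-kinematics input scheme described above."""
    T = len(joint_angle)
    rows = []
    for t in range(delay, T):
        rows.append(np.concatenate([semg[t], [joint_angle[t - delay]]]))
    return np.array(rows)

rng = np.random.default_rng(1)
T, channels, delay = 200, 4, 3
semg = rng.standard_normal((T, channels))          # 4 synthetic sEMG channels
angle = np.cumsum(rng.standard_normal(T)) * 0.01   # a slowly drifting joint angle

X = build_inputs(semg, angle, delay)
print(X.shape)  # each row: 4 sEMG channels + 1 delayed joint angle
```

Any of the four MuscleNET model families could then be trained on `X` against the desired output (angle, velocity, torque, and so on); only the input construction is shown here.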
Affiliation(s)
- Ali Nasr
- University of Waterloo, Ontario N2L 1W2, Canada
- Sydney Bell
- University of Waterloo, Ontario N2L 1W2, Canada
- Jiayuan He
- University of Waterloo, Ontario N2L 1W2, Canada
- Ning Jiang
- University of Waterloo, Ontario N2L 1W2, Canada
- John McPhee
- University of Waterloo, Ontario N2L 1W2, Canada
14
Abstract
Brain-machine interfaces (BMI) are being developed to restore upper limb function for persons with spinal cord injury or other motor degenerative conditions. BMI and implantable sensors for myoelectric prostheses directly extract information from the central or peripheral nervous system to provide users with high fidelity control of their prosthetic device. Control algorithms have been highly transferable between the 2 technologies but also face common issues. In this review of the current state of the art in each field, the authors point out similarities and differences between the 2 technologies that may guide the implementation of common solutions to these challenges.
Affiliation(s)
- Alex K Vaskov
- Robotics Institute, University of Michigan, 2505 Hayward St, Ann Arbor, MI 48109, USA
- Cynthia A Chestek
- Robotics Institute, University of Michigan, 2505 Hayward St, Ann Arbor, MI 48109, USA; Department of Biomedical Engineering, University of Michigan, 2200 Bonisteel Blvd, Ann Arbor, MI 48109, USA; Department of Electrical Engineering and Computer Science, University of Michigan, 1301 Beal Ave, Ann Arbor, MI 48109, USA; Neuroscience Graduate Program, University of Michigan, 204 Washtenaw Ave, Ann Arbor, MI 48109, USA.
15
Abstract
The overarching goal was to resolve a major barrier to real-life prosthesis usability: the rapid degradation of prosthesis control systems, which require frequent recalibrations. Specifically, we sought to develop and test a motor decoder that provides (1) highly accurate, real-time movement response, and (2) unprecedented adaptability to dynamic changes in the amputee's biological state, thereby supporting long-term integrity of control performance with few recalibrations. To achieve that, an adaptive motor decoder was designed to auto-switch between algorithms in real time. The decoder detects the initial aggregate motoneuron spiking activity from the motor pool, then engages the optimal parameter settings for decoding the motoneuron spiking activity in that particular state. "Clear-box" testing of decoder performance under varied physiological conditions and post-amputation complications was conducted by comparing the movement output of a simulated prosthetic hand as driven by the decoded signal vs. as driven by the actual signal. Pearson's correlation coefficient and Normalized Root Mean Square Error were used to quantify the accuracy of the decoder's output. Our results show that the decoder algorithm extracted the features of the intended movement and drove the simulated prosthetic hand accurately with real-time performance (<10 ms) (Pearson's correlation coefficient >0.98 to >0.99 and Normalized Root Mean Square Error <13% to <5%). Further, the decoder robustly decoded the spiking activity of multi-speed inputs, inputs generated from reversed motoneuron recruitment, and inputs reflecting substantial biological heterogeneity of motoneuron properties, also in real time. As the amputee's neuromodulatory state changes throughout the day, and as the electrical properties and the ratio of slower vs. faster motoneurons shift over time post-amputation, the motor decoder presented here adapts to such changes in real time and is thus expected to greatly enhance and extend the usability of prostheses.
Affiliation(s)
- Andrew E Montgomery
- Department of Biomedical, Industrial and Human Factors Engineering, College of Engineering and Computer Science, Wright State University, Dayton, OH, United States
- John M Allen
- Department of Neuroscience, Cell Biology and Physiology, Boonshoft School of Medicine and College of Science and Mathematics, Wright State University, Dayton, OH, United States
- Sherif M Elbasiouny
- Department of Biomedical, Industrial and Human Factors Engineering, College of Engineering and Computer Science, Wright State University, Dayton, OH, United States
- Department of Neuroscience, Cell Biology and Physiology, Boonshoft School of Medicine and College of Science and Mathematics, Wright State University, Dayton, OH, United States
16
Paskett MD, Brinton MR, Hansen TC, George JA, Davis TS, Duncan CC, Clark GA. Activities of daily living with bionic arm improved by combination training and latching filter in prosthesis control comparison. J Neuroeng Rehabil 2021; 18:45. [PMID: 33632237 DOI: 10.1186/s12984-021-00839-x]
Abstract
BACKGROUND Advanced prostheses can restore function and improve quality of life for individuals with amputations. Unfortunately, most commercial control strategies do not fully utilize the rich control information from residual nerves and musculature. Continuous decoders can provide more intuitive prosthesis control using multi-channel neural or electromyographic recordings. Three components influence continuous decoder performance: the data used to train the algorithm, the algorithm itself, and the smoothing filters applied to the algorithm's output. Individual groups often focus on a single decoder, so very few studies compare different decoders under otherwise similar experimental conditions. METHODS We completed a two-phase, head-to-head comparison of 12 continuous decoders using activities of daily living. In phase one, we compared two training types and a smoothing filter with three algorithms (a modified Kalman filter, a multi-layer perceptron, and a convolutional neural network) in a clothespin relocation task. We compared training types that included only individual digit and wrist movements vs. combination movements (e.g., simultaneous grasp and wrist flexion). We also compared raw vs. nonlinearly smoothed algorithm outputs. In phase two, we compared the three algorithms in fragile egg, zipping, pouring, and folding tasks using the combination training and smoothing found beneficial in phase one. In both phases, we collected objective, performance-based measures (e.g., success rate) and subjective, user-focused measures (e.g., preference). RESULTS Phase one showed that combination training improved prosthesis control accuracy and speed, and that the nonlinear smoothing improved accuracy but generally reduced speed. Importantly, phase one also showed that simultaneous movements were used in the task, and that the modified Kalman filter and multi-layer perceptron predicted more simultaneous movements than the convolutional neural network. In phase two, user-focused metrics favored the convolutional neural network and the modified Kalman filter (mKF), whereas performance-based metrics were generally similar among all algorithms. CONCLUSIONS These results confirm that state-of-the-art algorithms, whether linear or nonlinear in nature, benefit functionally from training on more complex data and from output smoothing. These studies will be used to select a decoder for a long-term take-home trial with implanted neuromyoelectric devices. Overall, clinical considerations may favor the mKF, as it is similar in performance, faster to train, and computationally less expensive than the neural networks.
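The nonlinear output smoothing compared in this study can be sketched as a "latching"-style filter: small fluctuations in the decoder output are heavily damped, while large, decisive changes pass through quickly. The exact filter used in the study is not reproduced here; the gain and its dependence on the step size are illustrative assumptions.

```python
import numpy as np

def latching_smooth(estimates, gain=4.0):
    """Latching-style nonlinear smoother (illustrative sketch):
    the smoothing coefficient grows toward 1 as the change in the
    decoder output grows, so jitter is damped but intended, large
    movements are passed through with little delay."""
    out = np.empty_like(estimates)
    out[0] = estimates[0]
    for t in range(1, len(estimates)):
        delta = estimates[t] - out[t - 1]
        alpha = min(1.0, gain * abs(delta))  # small delta -> strong damping
        out[t] = out[t - 1] + alpha * delta
    return out

# Noisy decoder output around a step change in intended grasp level.
raw = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.sin(np.arange(100))
smoothed = latching_smooth(raw)
print(np.std(smoothed[:40]) < np.std(raw[:40]))  # jitter before the step is damped
```

This captures the trade-off reported above: accuracy improves because jitter is suppressed, while speed can suffer slightly because small genuine movements are also damped.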
17
Abstract
SIGNIFICANCE A number of movement intent decoders exist in the literature, typically differing in the algorithms used and the nature of the outputs generated. Each approach comes with its own advantages and disadvantages, and combining the estimates of multiple algorithms may perform better than any individual method. OBJECTIVE This paper presents and evaluates a shared controller framework for prosthetic limbs based on multiple decoders of volitional movement intent. METHODS An algorithm to combine multiple estimates to control the prosthesis is developed. The capabilities of the approach are validated using a system that combines a Kalman filter-based decoder with a multilayer perceptron classifier-based decoder. The shared controller's performance is validated in online experiments where a virtual limb is controlled in real time by amputee and intact-arm subjects. During the testing phase, subjects controlled a virtual hand in real time to move digits to instructed positions using either the Kalman filter decoder, the multilayer perceptron decoder, or a linear combination of the two. RESULTS The shared controller results in statistically significant improvements over the component decoders. Specifically, certain degrees of shared control increase the time-in-target metric and decrease unintended movements. CONCLUSION The shared controller combines the good qualities of the component decoders tested here: combining a Kalman filter decoder with a classifier-based decoder inherits the flexibility of the Kalman filter decoder and the limited unwanted movements of the classifier-based decoder, resulting in a system that may perform the tasks of everyday life more naturally and reliably.
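The linear combination at the heart of this shared controller can be sketched directly. The decoder outputs and the weight below are hypothetical stand-ins for the Kalman-filter and classifier estimates; the decoders' internals are not reproduced.

```python
import numpy as np

def shared_control(kf_estimate, clf_estimate, weight=0.5):
    """Linearly combine two movement-intent decoders: `weight` sets the
    share given to the Kalman-filter output, the remainder to the
    classifier-based output (illustrative sketch only)."""
    return weight * np.asarray(kf_estimate) + (1.0 - weight) * np.asarray(clf_estimate)

# Hypothetical per-digit position commands from each decoder (normalized 0-1).
kf_out  = np.array([0.82, 0.10, 0.05])  # flexible but jittery continuous estimate
clf_out = np.array([1.00, 0.00, 0.00])  # discrete, movement-suppressing estimate

combined = shared_control(kf_out, clf_out, weight=0.4)
print(combined)  # 0.4 x Kalman-filter output + 0.6 x classifier output, per digit
```

The `weight` parameter corresponds to the "degree of shared control" the abstract mentions: at 1.0 the Kalman filter acts alone, at 0.0 the classifier acts alone, and intermediate values trade the former's flexibility against the latter's suppression of unintended movement.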
18
George JA, Tully TN, Colgan PC, Clark GA. Bilaterally Mirrored Movements Improve the Accuracy and Precision of Training Data for Supervised Learning of Neural or Myoelectric Prosthetic Control. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:3297-3301. [PMID: 33018709 DOI: 10.1109/embc44109.2020.9175388]
Abstract
Intuitive control of prostheses relies on training algorithms to correlate biological recordings to motor intent. The quality of the training dataset is critical to run-time performance, but it is difficult to label hand kinematics accurately after the hand has been amputated. We quantified the accuracy and precision of labeling hand kinematics for two different training approaches: 1) assuming a participant is perfectly mimicking predetermined motions of a prosthesis (mimicked training), and 2) assuming a participant is perfectly mirroring their contralateral hand during identical bilateral movements (mirrored training). We compared these approaches in non-amputee individuals, using an infrared camera to track eight different joint angles of the hands in real-time. Aggregate data revealed that mimicked training does not account for biomechanical coupling or temporal changes in hand posture. Mirrored training was significantly more accurate and precise at labeling hand kinematics. However, when training a modified Kalman filter to estimate motor intent, the mimicked and mirrored training approaches were not significantly different. The results suggest that the mirrored training approach creates a more faithful but more complex dataset. Advanced algorithms, more capable of learning the complex mirrored training dataset, may yield better run-time prosthetic control.
19
Rim B, Sung NJ, Min S, Hong M. Deep Learning in Physiological Signal Data: A Survey. Sensors (Basel) 2020; 20:E969. [PMID: 32054042 PMCID: PMC7071412 DOI: 10.3390/s20040969]
Abstract
Deep Learning (DL), a promising approach that has proved successful for discriminative and generative tasks, has recently demonstrated high potential in 2D medical imaging analysis; however, physiological data in the form of 1D signals have yet to benefit fully from this approach for the desired medical tasks. Therefore, in this paper we survey the latest scientific research on deep learning in physiological signal data such as the electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), and electrooculogram (EOG). We found 147 papers published between January 2018 and October 2019 inclusive from various journals and publishers. The objective of this paper is to conduct a detailed study to comprehend, categorize, and compare the key parameters of the deep-learning approaches that have been used in physiological signal analysis for various medical applications. The key parameters we review are the input data type, deep-learning task, deep-learning model, training architecture, and dataset sources; these are the main parameters that affect system performance. We taxonomize the research works that use deep-learning methods in physiological signal analysis from (1) a physiological signal data perspective, such as data modality and medical application, and (2) a deep-learning concept perspective, such as training architecture and dataset sources.
Affiliation(s)
- Beanbonyka Rim
- Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Nak-Jun Sung
- Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Sedong Min
- Department of Medical IT Engineering, Soonchunhyang University, Asan 31538, Korea
- Min Hong
- Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea
20
Wolf EJ, Cruz TH, Emondi AA, Langhals NB, Naufel S, Peng GCY, Schulz BW, Wolfson M. Advanced technologies for intuitive control and sensation of prosthetics. Biomed Eng Lett 2020; 10:119-128. [PMID: 32175133 PMCID: PMC7046895 DOI: 10.1007/s13534-019-00127-7]
Abstract
The Department of Defense, Department of Veterans Affairs, and National Institutes of Health have invested significantly in advancing prosthetic technologies over the past 25 years, with the overall intent to improve the function, participation, and quality of life of Service Members, Veterans, and all United States citizens living with limb loss. These investments have contributed to substantial advancements in the control and sensory perception of prosthetic devices over the past decade. While control of motorized prosthetic devices through the use of electromyography has been widely available since the 1980s, this technology is not intuitive. Additionally, these systems do not provide stimulation for sensory perception. Recent research has made significant advancements not only in the intuitive use of electromyography for control but also in the ability to provide relevant, meaningful perceptions through various stimulation approaches. While much of this previous work has traditionally focused on those with upper extremity amputation, new developments include advanced bidirectional neuroprostheses that are applicable to both upper and lower limb amputations. The goal of this review is to examine the state of the science in the areas of intuitive control and sensation of prosthetic devices and to discuss areas of exploration for the future. Current research and development efforts in external systems, implanted systems, surgical approaches, and regenerative approaches will be explored.
Affiliation(s)
- Erik J. Wolf
- Clinical and Rehabilitative Medicine Research Program, US Army Medical Research and Development Command, Fort Detrick, MD 21702 USA
- Theresa H. Cruz
- National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD 20817 USA
- Alfred A. Emondi
- Defense Advanced Research Projects Agency, Arlington, VA 22203 USA
- Nicholas B. Langhals
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892 USA
- Grace C. Y. Peng
- National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD 20817 USA
- Brian W. Schulz
- VA Office of Research and Development, Washington, DC 20002 USA
- Michael Wolfson
- National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD 20892 USA
21
Zhang D, Yao L, Chen K, Wang S, Haghighi PD, Sullivan C. A Graph-Based Hierarchical Attention Model for Movement Intention Detection from EEG Signals. IEEE Trans Neural Syst Rehabil Eng 2019; 27:2247-2253. [PMID: 31562095 DOI: 10.1109/tnsre.2019.2943362]
Abstract
An EEG-based Brain-Computer Interface (BCI) is a system that enables a user to communicate with and intuitively control external devices solely using the user's intentions. Current EEG-based BCI research usually involves a subject-specific adaptation step before a BCI system is ready to be employed by a new user. However, the subject-independent scenario, in which a well-trained model can be directly applied to new users without pre-calibration, is particularly desirable yet rarely explored. Considering this critical gap, our focus in this paper is the subject-independent scenario of EEG-based human intention recognition. We present a Graph-based Hierarchical Attention Model (G-HAM) that utilizes the graph structure to represent the spatial information of EEG sensors and the hierarchical attention mechanism to focus on both the most discriminative temporal periods and EEG nodes. Extensive experiments on a large EEG dataset containing 105 subjects indicate that our model is capable of exploiting the underlying invariant EEG patterns across different subjects and generalizing the patterns to new subjects with better performance than a series of state-of-the-art and baseline approaches.