1. Zhang J, Mao H, Chang D, Yu H, Wu W, Shen D. Adaptive and Iterative Learning With Multi-Perspective Regularizations for Metal Artifact Reduction. IEEE Trans Med Imaging 2024; 43:3354-3365. [PMID: 38687653] [DOI: 10.1109/tmi.2024.3395348]
Abstract
Metal artifact reduction (MAR) is important for clinical diagnosis with CT images. Existing state-of-the-art deep learning methods usually suppress metal artifacts in the sinogram domain, the image domain, or both. However, their performance is limited by the inherent characteristics of the two domains: errors introduced by local manipulations in the sinogram domain propagate throughout the whole image during backprojection and lead to serious secondary artifacts, while in the image domain it is difficult to distinguish artifacts from actual image features. To alleviate these limitations, this study analyzes the desirable properties of the wavelet transform in depth and proposes to perform MAR in the wavelet domain. First, the wavelet transform yields components that retain spatial correspondence with the image, which prevents local errors from spreading and thus avoids secondary artifacts. Second, the wavelet transform facilitates separating artifacts from image content, since metal artifacts are mainly high-frequency signals. Leveraging these advantages, this paper decomposes an image into multiple wavelet components and introduces multi-perspective regularizations into the proposed MAR model. To improve the transparency and validity of the model, all modules in the proposed MAR model are designed to reflect their mathematical meanings. In addition, an adaptive wavelet module is used to enhance the flexibility of the model. To optimize the model, an iterative algorithm is developed. Evaluation on both synthetic and real clinical datasets consistently confirms the superior performance of the proposed method over competing methods.
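As a minimal illustration of the wavelet-domain idea (not the authors' model), the sketch below decomposes a CT slice into an approximation band and three detail bands with PyWavelets; streak-like metal artifacts are mostly high-frequency, so they concentrate in the detail bands, which also retain spatial correspondence with the image. The soft-threshold step is only a placeholder for the paper's learned multi-perspective regularizers, and the "db4" wavelet is an arbitrary choice.

```python
import numpy as np
import pywt

ct_slice = np.random.rand(512, 512)               # stand-in for a CT image with artifacts
cA, (cH, cV, cD) = pywt.dwt2(ct_slice, "db4")     # approximation + horizontal/vertical/diagonal detail

def soft_threshold(x, t):
    """Crude per-subband suppressor, standing in for a learned regularizer."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Apply the placeholder regularizer to the high-frequency bands only, then
# reconstruct; local edits in a subband stay local in the image.
details = tuple(soft_threshold(c, 0.1) for c in (cH, cV, cD))
restored = pywt.idwt2((cA, details), "db4")
```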
2. Pratap S, Narayan J, Hatta Y, Ito K, Hazarika SM. Glove-Net: Enhancing Grasp Classification with Multisensory Data and Deep Learning Approach. Sensors (Basel) 2024; 24:4378. [PMID: 39001157] [PMCID: PMC11244365] [DOI: 10.3390/s24134378]
Abstract
Grasp classification is pivotal for understanding human interactions with objects, with wide-ranging applications in robotics, prosthetics, and rehabilitation. This study introduces a novel methodology utilizing a multisensory data glove to capture intricate grasp dynamics, including finger bending angles and fingertip forces. Our dataset comprises data collected from 10 participants performing grasp trials with 24 objects from the YCB object set. We evaluate classification performance under three scenarios: grasp posture alone, grasp force alone, and both modalities combined. We propose Glove-Net, a hybrid CNN-BiLSTM architecture for classifying grasp patterns within our dataset, aiming to harness the complementary advantages of CNNs and BiLSTM networks. This model integrates the spatial feature extraction capabilities of CNNs with the temporal sequence learning strengths of BiLSTM networks, effectively addressing the intricate dependencies present in our grasping data. Our study includes findings from an extensive ablation study aimed at optimizing model configurations and hyperparameters. We quantify and compare classification accuracy across the three scenarios: a CNN achieved testing accuracies of 88.09%, 69.38%, and 93.51% for posture-only, force-only, and combined data, respectively; an LSTM exhibited accuracies of 86.02%, 70.52%, and 92.19% for the same scenarios. Notably, the proposed hybrid CNN-BiLSTM model demonstrated superior performance, with accuracies of 90.83%, 73.12%, and 98.75% across the respective scenarios. Through rigorous numerical experimentation, our results underscore the significance of multimodal grasp classification and highlight the efficacy of the proposed hybrid Glove-Net architecture in leveraging multisensory data for precise grasp recognition. These insights advance understanding of human-machine interaction and hold promise for diverse real-world applications.
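The layer sizes, channel count, and window length below are assumptions rather than the authors' configuration; the sketch only shows the generic hybrid structure described in the abstract, with 1-D convolutions extracting features across the glove channels and a bidirectional LSTM modeling the temporal dependencies before a final classification layer.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, in_channels=20, n_classes=24, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial feature extraction per time step
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)     # one logit per grasp class

    def forward(self, x):                              # x: (batch, channels, time)
        f = self.cnn(x).transpose(1, 2)                # -> (batch, time/2, 64) for the LSTM
        out, _ = self.lstm(f)
        return self.fc(out[:, -1, :])                  # classify from the last time step

logits = CNNBiLSTM()(torch.randn(8, 20, 100))          # 8 trials, 20 glove channels, 100 samples
```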
Affiliation(s)
- Subhash Pratap
- Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati 781039, India
- Department of Mechanical Engineering, Gifu University, Gifu 501-1193, Japan
- Jyotindra Narayan
- Department of Computing, Imperial College London, London SW7 2RH, UK
- Chair of Digital Health, Universität Bayreuth, 95445 Bayreuth, Germany
- Yoshiyuki Hatta
- Department of Mechanical Engineering, Gifu University, Gifu 501-1193, Japan
- Kazuaki Ito
- Department of Mechanical Engineering, Gifu University, Gifu 501-1193, Japan
- Shyamanta M Hazarika
- Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati 781039, India
3. Jung S, Zhou M, Ma J, Yang R, Cramer SC, Dobkin BH, Yang LF, Rosen J. Wearable Body Sensors Integrated into a Virtual Reality Environment - A Modality for Automating the Rehabilitation of the Motor Control System. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40039352] [DOI: 10.1109/embc53108.2024.10782207]
Abstract
Amid the rising incidence of stroke and spinal cord injury, this study introduces a Virtual Reality (VR) system integrated with wearable sensor-based motion capture technology to enhance rehabilitation and assessment of upper limb impairments. The proposed motion capture system uses Inertial Measurement Units (IMUs) engineered in a modular and portable fashion to fit different rehabilitation needs of the motor control system. The wearable sensor system consists of 15 modules capable of capturing whole-body motion, along with a pair of hand gloves containing 11 miniature sensors that measure hand motion. A sensor accuracy test demonstrates a root mean square error below 1.78 degrees relative to measurements collected by a commercial robotic arm. Incorporating the detailed joint-motion data collected by the array of wearable sensors, the proposed VR system provides additional tools for assessing motor control function, including range of motion, reachable workspace, and motor learning, along with rehabilitative intervention games. The developed system provides a foundation for an AI-driven autonomous rehabilitation system, integrating automated clinical assessments with quantitative tools and adaptive-difficulty algorithms for personalized interventions, applicable in both clinic-based and home-based settings.
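A minimal sketch of the reported accuracy check, assuming time-aligned angle traces in degrees (variable names are illustrative): root mean square error between a joint angle estimated by a wearable IMU module and the reference angle from the commercial robotic arm.

```python
import numpy as np

def rmse_deg(imu_angles, reference_angles):
    """RMSE between two time-aligned joint-angle traces, both in degrees."""
    err = np.asarray(imu_angles, dtype=float) - np.asarray(reference_angles, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

# Per the paper, this figure stayed below 1.78 degrees for the wearable modules.
```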
4. Bai J, Li G, Lu X, Wen X. Automatic rehabilitation assessment method of upper limb motor function based on posture and distribution force. Front Neurosci 2024; 18:1362495. [PMID: 38440394] [PMCID: PMC10909926] [DOI: 10.3389/fnins.2024.1362495]
Abstract
Clinical rehabilitation assessment of hemiplegic upper limb motor function is often subjective, time-consuming, and non-uniform. This study proposes an automatic rehabilitation assessment method for upper limb motor function based on posture and distributed force measurements. An Azure Kinect combined with MediaPipe was used to detect upper limb and hand movements, and an array-type distributed flexible thin-film pressure sensor was employed to measure the distributed force of the hand. This allowed automated measurement of 30 items of the Fugl-Meyer scale. Feature information was extracted separately from the affected and healthy sides, and the feature ratios or deviations were fed into a single/multiple fuzzy logic assessment model to determine the score of each item. Finally, the total score of the hemiplegic upper limb motor function assessment was derived. Experiments were performed to evaluate the motor function of the subjects' upper extremities. Bland-Altman plots of physician and system scores showed good agreement, and the results of the automated assessment system were highly correlated with the clinical Fugl-Meyer total score (r = 0.99, p < 0.001). The experimental results demonstrate that this system can automatically assess the motor function of the affected upper limb by measuring posture and force distribution.
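A minimal sketch of single-input fuzzy scoring for one Fugl-Meyer item, assuming the input is the affected-to-healthy feature ratio and the output is a 0/1/2 item score; the membership functions and cut-offs are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0))

def fuzzy_item_score(ratio):
    """Map an affected/healthy feature ratio to a 0, 1, or 2 item score."""
    memberships = {
        0: tri(ratio, -0.2, 0.0, 0.4),   # "cannot perform"
        1: tri(ratio, 0.2, 0.5, 0.8),    # "partial performance"
        2: tri(ratio, 0.6, 1.0, 1.4),    # "near-normal performance"
    }
    return max(memberships, key=memberships.get)

total = sum(fuzzy_item_score(r) for r in [0.95, 0.55, 0.30])   # toy ratios -> summed item scores
```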
Affiliation(s)
- Jing Bai
- Industrial Technology Research Institute of Intelligent Equipment, Nanjing Institute of Technology, Nanjing, China
- Jiangsu Provincial Engineering Laboratory of Intelligent Manufacturing Equipment, Nanjing, China
- Guocheng Li
- Automation Department, Nanjing Institute of Technology, Nanjing, China
- Xuanming Lu
- Industrial Technology Research Institute of Intelligent Equipment, Nanjing Institute of Technology, Nanjing, China
- Jiangsu Provincial Engineering Laboratory of Intelligent Manufacturing Equipment, Nanjing, China
- Xiulan Wen
- Automation Department, Nanjing Institute of Technology, Nanjing, China
5. Chen J, Wang C, Chen J, Yin B. Manipulator Control System Based on Flexible Sensor Technology. Micromachines (Basel) 2023; 14:1697. [PMID: 37763860] [PMCID: PMC10535772] [DOI: 10.3390/mi14091697]
Abstract
Research on the remote control of manipulators based on flexible sensor technology is expanding. To achieve stable, accurate, and efficient control of a manipulator, the sensor structure must be designed to combine good tensile strength with flexibility, and acquiring hand-motion information with high-performance sensors is the basis of manipulator control. This paper starts from the materials used to fabricate flexible sensors for manipulators, introducing substrate, sensing, and flexible electrode materials and summarizing the performance of different flexible sensors. From a manufacturing perspective, it describes their basic principles and compares their advantages and disadvantages. According to how the sensors are worn, two control methods, data glove control and surface EMG control, are then introduced; their principles, control processes, and detection accuracy are summarized, and open problems in the field, such as material microstructure, cost reduction, and circuit design optimization, are emphasized. Finally, commercial applications in this field are described, and future research directions are proposed from two aspects: how to ensure real-time control and how to better receive feedback signals from the manipulator.
Affiliation(s)
- Binfeng Yin
- School of Mechanical Engineering, Yangzhou University, Huayangxi Road No. 196, Yangzhou 225127, China; (J.C.); (C.W.); (J.C.)
6. Bhardwaj S, Ghosh D, Dutta D, Cheduluri G, Hansigida V, Nali AR, Acharyya A. Low Complex CORDIC-based Hand Movement Recognition Design Methodology for Rehabilitation and Prosthetic Applications. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083540] [DOI: 10.1109/embc40787.2023.10340238]
Abstract
Hand movement recognition using electromyography (EMG) signals has gained much significance lately and is extensively used in rehabilitation and prosthetic applications, including stroke-driven disability and other neuromuscular disorders, where quantitative analysis of EMG signals is crucial. However, such applications are constrained by power consumption limits imposed by battery operation, necessitating low-complexity system design, and by on-chip area requirements. Existing hand movement recognition methodologies using single-channel EMG signals involve computationally intensive stages, including Ensemble Empirical Mode Decomposition (EEMD), Fast Independent Component Analysis (FastICA), feature extraction, and Linear Discriminant Analysis (LDA) classification, which cannot be mapped directly from the algorithmic level onto a low-complexity architecture. The high computational complexity of LDA classification in particular makes it difficult to use in low-complexity applications. In this paper, we introduce a low-complexity CORDIC-based hand movement recognition design methodology targeting resource-constrained rehabilitation applications. This work explores replacing LDA classification with K-Means clustering because of its reduced complexity, and CORDIC-based K-Means clustering is used to further reduce the overall computational complexity of the system. The proposed low-complexity, K-Means clustering-based hand movement recognition, classifying seven hand movements from single-channel EMG data, is found to be 99.77% less complex and 1.28% more accurate than conventional LDA-based classification.
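A minimal Python model of the shift-and-add primitive suggested by the approach (the paper's actual hardware architecture is not reproduced): a vectoring-mode CORDIC approximates a 2-D vector magnitude, and in fixed-point hardware the multiplications by 2^-i reduce to bit shifts, which is where the savings over multiplier-based distance computation come from. A CORDIC-based K-Means could chain such rotations when measuring distances from EMG feature vectors to cluster centroids.

```python
def cordic_magnitude(x, y, iterations=16):
    """Approximate sqrt(x**2 + y**2) with vectoring-mode CORDIC micro-rotations."""
    gain = 1.0
    for i in range(iterations):                      # accumulated CORDIC gain, divided out at the end
        gain *= (1.0 + 2.0 ** (-2 * i)) ** 0.5
    x = abs(x)
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0                   # rotate toward the x-axis
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
    return x / gain

print(cordic_magnitude(3.0, 4.0))                    # ~5.0, the Euclidean norm of (3, 4)
```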
7. Padilla-Magaña JF, Peña-Pitarch E. Classification Models of Action Research Arm Test Activities in Post-Stroke Patients Based on Human Hand Motion. Sensors (Basel) 2022; 22:9078. [PMID: 36501779] [PMCID: PMC9737603] [DOI: 10.3390/s22239078]
Abstract
The Action Research Arm Test (ARAT) presents a ceiling effect that prevents the detection of improvements produced by rehabilitation treatments in stroke patients with mild finger joint impairments. The aim of this study was to develop classification models to predict whether activities with similar ARAT scores were performed by a healthy subject or by a subject post-stroke, using the extension and flexion angles of 11 finger joints as features. For this purpose, we used three algorithms: Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbors (KNN). The dataset presented class imbalance, and the classification models initially showed low recall, especially for the stroke class; we therefore balanced the classes using Borderline-SMOTE. After data balancing, the classification models showed significantly higher accuracy, recall, F1-score, and AUC. In particular, the SVM classifier achieved the highest performance, with a precision of 98%, a recall of 97.5%, and an AUC of 0.996. The results showed that classification models based on human hand motion features, combined with the Borderline-SMOTE oversampling algorithm, achieve higher performance. Furthermore, our study suggests that there are differences between ARAT activities performed by healthy and post-stroke individuals that are not detected by the ARAT scoring process.
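A minimal sketch of the described pipeline, with synthetic data standing in for the 11 joint-angle features and default hyperparameters rather than the authors' tuned settings: Borderline-SMOTE oversamples the minority (stroke) class inside the training pipeline, and an SVM is then fitted and evaluated.

```python
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 11 features per trial, imbalanced classes (0 = healthy, 1 = stroke).
X, y = make_classification(n_samples=400, n_features=11, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("smote", BorderlineSMOTE(random_state=0)),   # oversampling is applied to training data only
    ("svm", SVC(kernel="rbf")),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```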
8. Quantitative Assessment of Hand Function in Healthy Subjects and Post-Stroke Patients with the Action Research Arm Test. Sensors (Basel) 2022; 22:3604. [PMID: 35632013] [PMCID: PMC9147783] [DOI: 10.3390/s22103604]
Abstract
The Action Research Arm Test (ARAT) can yield subjective results because of the difficulty of assessing abnormal movement patterns in stroke patients. The aim of this study was to identify joint impairments and compensatory grasping strategies in stroke patients with left (LH) and right (RH) hemiparesis. An experimental study was carried out with 12 patients six months after stroke (three women and nine men, mean age: 65.2 ± 9.3 years) and 25 healthy subjects (14 women and 11 men, mean age: 40.2 ± 18.1 years). The subjects were evaluated during performance of the ARAT using a data glove. Stroke patients with LH and RH showed significantly lower flexion angles in the metacarpophalangeal (MCP) joints of the index and middle fingers than the control group. However, RH patients showed larger flexion angles in the proximal interphalangeal (PIP) joints of the index, middle, ring, and little fingers, whereas LH patients showed larger flexion angles in the PIP joints of the middle and little fingers. The results therefore showed that RH and LH patients used compensatory strategies in which increased flexion at the PIP joints offset decreased flexion at the MCP joints. Integrating a data glove into the ARAT allows the detection of finger joint impairments in stroke patients that are not visible from ARAT scores, so the results presented are of clinical relevance.
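A minimal sketch of the kind of per-joint group comparison behind such findings, using synthetic angles and a non-parametric test (the paper's actual statistical procedure is not specified in the abstract):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
control_mcp = rng.normal(70, 8, size=25)      # index-finger MCP flexion (deg), healthy group
stroke_mcp = rng.normal(55, 10, size=12)      # index-finger MCP flexion (deg), hemiparetic group

stat, p = mannwhitneyu(control_mcp, stroke_mcp, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")          # a small p would indicate reduced MCP flexion post-stroke
```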