1. Sensor Fault Reconstruction Using Robustly Adaptive Unknown-Input Observers. Sensors (Basel, Switzerland) 2024; 24:3224. PMID: 38794077; PMCID: PMC11125881; DOI: 10.3390/s24103224.

Abstract
Sensors are a key component in industrial automation systems, and a sensor fault or malfunction may degrade control system performance. Engineering system models are usually disturbed by input uncertainties, which poses a challenge for monitoring, diagnosis, and control. In this study, a novel estimation technique, called the adaptive unknown-input observer, is proposed to simultaneously reconstruct sensor faults and system states. Specifically, the unknown-input observer decouples part of the disturbances, the un-decoupled disturbances are attenuated by optimization over linear matrix inequalities, and an adaptive technique tracks the sensor faults. As a result, a robust reconstruction of both the sensor faults and the system states is achieved. Furthermore, the proposed robustly adaptive fault reconstruction technique is extended to Lipschitz nonlinear systems subject to sensor faults and unknown input uncertainties. Finally, the effectiveness of the algorithms is demonstrated through comparison studies on an aircraft system model and a robotic arm.
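The adaptive reconstruction idea can be sketched in a toy scalar example. This is a generic adaptive observer, not the paper's LMI-optimized design; the plant, gains, and fault value below are all invented for illustration:

```python
# Minimal sketch of an adaptive sensor-fault observer (hypothetical scalar
# example): plant x' = -x, measurement y = x + f with a constant sensor
# fault f. Both the state estimate x_hat and the adaptive fault estimate
# f_hat are driven by the innovation r = y - x_hat - f_hat.
def simulate(f_true=0.5, L=5.0, gamma=10.0, dt=1e-3, T=10.0):
    x, x_hat, f_hat = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        y = x + f_true                    # faulty sensor reading
        r = y - x_hat - f_hat             # innovation
        x += dt * (-x)                    # true plant dynamics
        x_hat += dt * (-x_hat + L * r)    # state-observer update
        f_hat += dt * (gamma * r)         # adaptive fault reconstruction
    return x_hat, f_hat

x_hat, f_hat = simulate()
print(round(f_hat, 2))  # converges toward the injected fault 0.5
```

The error dynamics of this pair are linear and stable for positive gains, so the fault estimate settles on the injected offset; the paper's contribution lies in making this robust to unknown input disturbances, which this sketch omits.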
2. Research on Multi-Hole Localization Tracking Based on a Combination of Machine Vision and Deep Learning. Sensors (Basel, Switzerland) 2024; 24:984. PMID: 38339701; PMCID: PMC10857067; DOI: 10.3390/s24030984.

Abstract
In industrial production, manual assembly of workpieces suffers from low efficiency and high labour intensity, and parts of the assembly process pose a degree of danger to the human body. At the same time, traditional machine-learning algorithms struggle to adapt to the complexity of today's industrial environments, where changes in the environment can greatly affect the accuracy of a robot's work. This paper therefore proposes a method combining machine vision with the YOLOv5 deep-learning model to obtain localization information for the holes of a disk workpiece; after coordinate mapping, a robotic arm is controlled through ROS communication, improving robustness to environmental interference and work efficiency while reducing danger to human operators. The system uses a camera to collect real-time images of targets in complex environments and then trains and processes them for recognition, yielding coordinate localization information. This information is converted into coordinates in the robot coordinate system through hand-eye calibration, and the robot is then controlled to complete multi-hole localization and tracking via communication between the upper and lower computers. The results show high accuracy in training and testing on the target object, and the control accuracy of the robotic arm is also high. The method is strongly resistant to interference from complex industrial environments and demonstrates feasibility and effectiveness. It lays a foundation for the automated installation of docking-disk workpieces in industrial production and offers a favourable option for production and installation processes that require screw positioning.
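The coordinate-mapping step described above, converting a camera-frame detection into robot-base coordinates after hand-eye calibration, reduces to one homogeneous-transform multiplication. A minimal sketch with an invented calibration matrix (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical hand-eye calibration result: a rigid transform T_base_cam
# mapping points from the camera frame into the robot-base frame.
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.30],   # rotation part: 90 deg about z
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.05],   # camera mounted 5 cm above the base plane
    [0.0,  0.0, 0.0, 1.0],
])

def camera_to_base(p_cam):
    """Map a 3-D point detected in the camera frame (e.g. a hole centre
    localized by YOLOv5 plus depth) into robot-base coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_base_cam @ p)[:3]

print(camera_to_base([0.20, 0.00, 0.40]))  # -> [0.3  0.3  0.45]
```

In practice the rotation and translation in `T_base_cam` come from a calibration routine (e.g. observing a checkerboard held by the gripper); the mapping itself is exactly this matrix product.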
3. Cost-utility analysis of robotic arm-assisted medial compartment knee arthroplasty. Bone Jt Open 2023; 4:889-899. PMID: 37992738; PMCID: PMC10665097; DOI: 10.1302/2633-1462.411.bjo-2023-0090.r1.

Abstract
Aims To perform an incremental cost-utility analysis and assess the impact of differential costs and case volume on the cost-effectiveness of robotic arm-assisted unicompartmental knee arthroplasty (rUKA) compared to manual (mUKA).

Methods This was a five-year follow-up study of patients who were randomized to rUKA (n = 64) or mUKA (n = 65). Patients completed the EuroQol five-dimension questionnaire (EQ-5D) preoperatively, and at three months and one, two, and five years postoperatively, which was used to calculate quality-adjusted life years (QALYs) gained. Costs for the primary and additional surgery and healthcare costs were calculated.

Results rUKA was associated with a 0.012 QALY gain at five years, corresponding to an incremental cost per QALY of £13,078 for a unit undertaking 400 cases per year. A cost per QALY of less than £20,000 was achieved when ≥ 300 cases were performed per year. However, on removal of the cost of a revision for presumed infection (mUKA group, n = 1), the cost per QALY exceeded £38,000, due in part to the increased intraoperative consumable costs associated with rUKA (£626 per patient). When the absolute cost difference (operative and revision costs) was less than £240, a cost per QALY of less than £20,000 was achieved. On removing the cost of the revision for infection, rUKA was cost-neutral only when more than 900 cases per year were undertaken and the consumable costs were zero.

Conclusion rUKA was a cost-effective intervention with an incremental cost per QALY of £13,078 at five years; however, when the revision for presumed infection, arguably a random event, was removed, this was no longer the case. The absolute cost difference had to be less than £240 for rUKA to be cost-effective, which could be achieved by reducing the perioperative costs of rUKA or if longer follow-up revealed increased revision costs associated with mUKA.
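A quick sanity check of the arithmetic behind these headline figures, using only numbers from the abstract: an incremental cost-effectiveness ratio (ICER) of £13,078 per QALY at a 0.012 QALY gain implies an incremental cost of roughly £157 per patient.

```python
# ICER = (cost_rUKA - cost_mUKA) / (QALY_rUKA - QALY_mUKA), so the implied
# per-patient incremental cost is ICER * QALY gain. Figures are from the
# abstract; the threshold below is a conventional NICE-style benchmark.
qaly_gain = 0.012
icer = 13078                     # GBP per QALY
incremental_cost = icer * qaly_gain
print(round(incremental_cost))   # ~GBP 157 per patient

willing_to_pay = 20000           # GBP per QALY threshold
print(icer < willing_to_pay)     # True: cost-effective at this threshold
```

This also makes intuitive the paper's sensitivity result: with such a small QALY gain, a swing of only a few hundred pounds in the cost difference moves the ICER across the £20,000 threshold.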
4. Sensor Fusion-Based Anthropomorphic Control of a Robotic Arm. Bioengineering (Basel) 2023; 10:1243. PMID: 38002367; PMCID: PMC10669049; DOI: 10.3390/bioengineering10111243.

Abstract
The main goal of this research is to develop an advanced anthropomorphic control system that uses multiple sensor technologies to achieve precise control of a robotic arm. Combining Kinect and IMU sensors with a data glove, we create a multimodal sensor system for capturing rich information about human upper-body movements. Specifically, the four angles of the upper limb joints are collected using the Kinect and IMU sensors. To improve the accuracy and stability of motion tracking, we use a Kalman filter to fuse the Kinect and IMU data. In addition, we introduce a data glove to collect angle information for the wrist and fingers in seven different directions. The integration and fusion of multiple sensors gives us full control over the robotic arm, providing flexibility with 11 degrees of freedom. We successfully achieved a variety of anthropomorphic movements, including shoulder flexion, abduction, and rotation, elbow flexion, and fine movements of the wrist and fingers. Most importantly, our experimental results demonstrate that the developed anthropomorphic control system is accurate, real-time, and operable. In summary, the contribution of this study lies in the creation of a multimodal sensor system capable of capturing human upper-limb movements and precisely controlling a robotic arm, which provides a solid foundation for the future development of anthropomorphic control technologies. This technology has a wide range of prospective applications, including rehabilitation in the medical field, robot collaboration in industrial automation, and immersive experiences in virtual-reality environments.
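The Kinect/IMU fusion step can be illustrated in one dimension: fusing two noisy estimates of the same joint angle by inverse-variance weighting, which is the steady-state form of a single Kalman update. The variances below are invented, not the paper's:

```python
# Fuse a Kinect and an IMU estimate of one joint angle (degrees).
# The sensor with the smaller variance gets the larger weight.
def fuse(angle_kinect, var_kinect, angle_imu, var_imu):
    k = var_kinect / (var_kinect + var_imu)           # Kalman gain
    fused = angle_kinect + k * (angle_imu - angle_kinect)
    fused_var = (1 - k) * var_kinect                  # always <= either input
    return fused, fused_var

angle, var = fuse(30.0, 4.0, 32.0, 1.0)   # IMU trusted 4x more here
print(angle)  # 31.6: pulled toward the lower-variance IMU reading
```

The full system runs this recursively over time with a motion model, but the key property is visible already: the fused variance is smaller than either sensor's alone.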
5. Length of stay and discharge dispositions following robotic arm-assisted total knee arthroplasty and unicompartmental knee arthroplasty versus conventional technique and predictors of delayed discharge. Bone Jt Open 2023; 4:791-800. PMID: 37852620; PMCID: PMC10614696; DOI: 10.1302/2633-1462.410.bjo-2023-0126.r1.

Abstract
Aims In-hospital length of stay (LOS) and discharge dispositions following arthroplasty can act as surrogate measures for improvement in patient pathways and have major cost-saving implications for healthcare providers. With the ever-growing adoption of robotic technology in arthroplasty, it is imperative to evaluate its impact on LOS. The objectives of this study were to compare LOS and discharge dispositions following robotic arm-assisted total knee arthroplasty (RO TKA) and unicompartmental knee arthroplasty (RO UKA) versus the conventional technique (CO TKA and UKA).

Methods This large-scale, single-institution study included patients of any age undergoing primary TKA (n = 1,375) or UKA (n = 337) for any cause between May 2019 and January 2023. Data extracted included patient demographics, LOS, need for post-anaesthesia care unit (PACU) admission, anaesthesia type, readmission within 30 days, and discharge dispositions. Univariate and multivariate logistic regression models were employed to identify factors and patient characteristics related to delayed discharge.

Results The median LOS in the RO TKA group was 76 hours (interquartile range (IQR) 54 to 104) versus 82.5 hours (IQR 58 to 127) in the CO TKA group (p < 0.001), and 54 hours (IQR 34 to 77) in the RO UKA group versus 58 hours (IQR 35 to 81) in the CO UKA group (p = 0.031). Discharge dispositions were comparable between the groups. A higher percentage of patients undergoing CO TKA required PACU admission (8% vs 5.2%; p = 0.040).

Conclusion Our study showed that robotic arm assistance was associated with a shorter LOS in patients undergoing primary UKA and TKA, with no difference in discharge destinations. Our results suggest that robotic arm assistance could help address the upsurge in knee arthroplasty procedures and the concomitant healthcare burden; however, this needs to be corroborated by long-term cost-effectiveness analyses and data from randomized controlled studies.
6. Robotic Grasp Detection Network Based on Improved Deformable Convolution and Spatial Feature Center Mechanism. Biomimetics (Basel) 2023; 8:403. PMID: 37754154; PMCID: PMC10527218; DOI: 10.3390/biomimetics8050403.

Abstract
In this article, we propose an effective grasp detection network based on an improved deformable convolution and a spatial feature center mechanism (DCSFC-Grasp) to precisely grasp unidentified objects. DCSFC-Grasp comprises the following key procedures. First, an improved deformable convolution is introduced to adaptively adjust receptive fields for multiscale feature extraction. Second, an efficient spatial feature center (SFC) layer is designed to capture global long-range dependencies through a lightweight multilayer perceptron (MLP) architecture. Third, a learnable feature center (LFC) mechanism is introduced to gather local regional features and preserve local corner regions. Finally, a lightweight CARAFE operator is developed to upsample the features. Experimental results show that DCSFC-Grasp achieves high accuracy (99.3% and 96.1% on the Cornell and Jacquard grasp datasets, respectively), outperforming existing state-of-the-art grasp detection models. Real-world experiments on the six-DoF Realman RM65 robotic arm further demonstrate that DCSFC-Grasp is effective and robust when grasping unknown targets.
7. CBMC: A Biomimetic Approach for Control of a 7-Degree of Freedom Robotic Arm. Biomimetics (Basel) 2023; 8:389. PMID: 37754140; PMCID: PMC10526988; DOI: 10.3390/biomimetics8050389.

Abstract
Many approaches inspired by brain science have been proposed for robotic control, specifically targeting situations where knowledge of the dynamic model is unavailable. This is crucial because dynamic-model inaccuracies and variations can occur during a robot's operation. In this paper, inspired by the central nervous system (CNS), we present a CNS-based Biomimetic Motor Control (CBMC) approach consisting of four modules. The first module is a cerebellum-like spiking neural network that employs spike-timing-dependent plasticity to learn the dynamics and adjust the synapses connecting the spiking neurons. The second module, constructed using an artificial neural network and mimicking the cerebral cortex's regulation of the cerebellum in the CNS, learns by reinforcement learning to supervise the cerebellum module with instructive input. The third and fourth modules are the cerebral sensory module and the spinal cord module, which handle sensory input and modulate torque commands, respectively. To validate our method, CBMC was applied to trajectory-tracking control of a 7-DoF robotic arm in simulation. Finally, experiments were conducted on the robotic arm with various payloads, and the results clearly demonstrate the effectiveness of the proposed methodology.
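The spike-timing-dependent plasticity rule used by such cerebellum-like modules has a standard exponential form: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it. A sketch with illustrative constants (not taken from the paper):

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # learning rates for potentiation / depression
TAU = 20.0                      # STDP time constant, ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:     # pre fired first -> causal pair -> potentiation
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:   # post fired first -> anti-causal pair -> depression
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0)   # True: causal pair strengthens the synapse
print(stdp_dw(15.0, 10.0) < 0)   # True: anti-causal pair weakens it
```

Summed over many spike pairs, this local rule lets the network absorb the arm's dynamics without an explicit model, which is the property the CBMC approach exploits.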
8. Applying an Artificial Neuromolecular System to the Application of Robotic Arm Motion Control in Assisting the Rehabilitation of Stroke Patients-An Artificial World Approach. Biomimetics (Basel) 2023; 8:385. PMID: 37754136; PMCID: PMC10526234; DOI: 10.3390/biomimetics8050385.

Abstract
Stroke patients cannot use their hands as freely as before, and recovery after a stroke is a long road for many. If artificial intelligence can assist arm movement, the likelihood of stroke patients regaining normal hand function can be significantly increased. In this study, the artificial neuromolecular system (ANM system) developed in our laboratory is used as the core motion-control system: it learns to control a robotic arm, produce human-like rehabilitation actions, and assist patients in transitioning between different activities. The strength of the ANM system lies in its ability to capture and process spatiotemporal information by exploiting the dynamic information processing inside neurons. Five experiments were conducted: continuous learning, dimensionality reduction, moving problem domains, transfer learning, and fault tolerance. The results show that, through continuous learning, the ANM system can find the arm-movement trajectory for different rehabilitation actions and, through dimensionality reduction, can reduce the number of muscle groups a stroke patient must activate. Finally, transfer learning allows the ANM system to reduce the learning time and effort required to switch between different actions.
9. Vision-Based Jigsaw Puzzle Solving with a Robotic Arm. Sensors (Basel, Switzerland) 2023; 23:6913. PMID: 37571693; PMCID: PMC10422444; DOI: 10.3390/s23156913.

Abstract
This study proposed two algorithms for reconstructing jigsaw puzzles by using a color compatibility feature. Two realistic application cases were examined: one involved using the original image, while the other did not. We also calculated the transformation matrix to obtain the real positions of each puzzle piece and transmitted the positional information to the robotic arm, which then put each puzzle piece in its correct position. The algorithms were tested on 35-piece and 70-piece puzzles, achieving an average success rate of 87.1%. Compared with the human visual system, the proposed methods demonstrated enhanced accuracy when handling more complex textural images.
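A colour-compatibility feature of the kind described here can be reduced to scoring how well two pieces fit by the summed squared RGB difference along the edge where they would meet (lower = more compatible). The 3-pixel edges below are invented for illustration; the paper's actual feature may differ:

```python
import numpy as np

def edge_dissimilarity(right_edge_of_a, left_edge_of_b):
    """Sum of squared RGB differences along two abutting puzzle-piece edges."""
    a = np.asarray(right_edge_of_a, dtype=float)
    b = np.asarray(left_edge_of_b, dtype=float)
    return float(np.sum((a - b) ** 2))

edge_a     = [[200, 10, 10], [190, 20, 15], [180, 30, 20]]   # reddish edge
edge_match = [[198, 12, 11], [188, 22, 14], [181, 28, 22]]   # similar colours
edge_clash = [[10, 10, 200], [15, 20, 190], [20, 30, 180]]   # bluish edge

# The matching edge scores far lower than the clashing one:
print(edge_dissimilarity(edge_a, edge_match) < edge_dissimilarity(edge_a, edge_clash))  # True
```

A solver then greedily (or globally) pairs edges with the lowest dissimilarity, which is why such methods do better on richly textured images, where edge colours are discriminative, than plain ones.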
10. Target Detection-Based Control Method for Archive Management Robot. Sensors (Basel, Switzerland) 2023; 23:5343. PMID: 37300070; PMCID: PMC10256058; DOI: 10.3390/s23115343.

Abstract
With increasing demand for efficient archive management, robots have been employed in paper-based archive management for large, unmanned archives. However, the reliability requirements of such systems are high due to their unmanned nature. To address this, this study proposes a servo-controlled robotic arm system with adaptive recognition for handling complex archive-box access scenarios. The vision component employs the YOLOv5 algorithm to identify feature regions, sort and filter data, and estimate the target center position, while the servo control component uses closed-loop control to adjust posture. The proposed feature-region-based sorting and matching algorithm enhances accuracy and reduces the probability of shaking by 1.27% in restricted viewing scenarios. The experimental results demonstrate the effectiveness of the proposed adaptive box-access system for unmanned archival storage: it exhibits a higher storage success rate than existing commercial archival-management robotic systems and offers a reliable, cost-effective solution for paper-archive access in complex scenarios. The integration of the proposed system with a lifting device enables the effective storage and retrieval of archive boxes of varying heights. Further research is necessary to evaluate the system's performance, scalability, and generalizability.
11.

Abstract
Aims Computer-assisted 3D preoperative planning software has the potential to improve postoperative stability in total hip arthroplasty (THA). Preoperative protocols commonly simulate two functional positions (standing and relaxed sitting) but do not consider other common positions that may increase postoperative impingement and possible dislocation. This study investigates the feasibility of simulating commonly encountered positions, and positions with an increased risk of impingement, to lower postoperative impingement risk in a CT-based 3D model.

Methods A robotic arm-assisted arthroplasty planning platform was used to investigate 11 patient positions. Data from 43 primary THAs were used for simulation. Sacral slope was retrieved from patients' preoperative imaging, while the angles of hip flexion/extension, external/internal rotation, and abduction/adduction for the tested positions were derived from the literature or estimated with a biomechanical model. The hip was placed in each described position, and if impingement was detected by the software, the impingement type was inspected.

Results In flexion, an overall impingement rate of 2.3% was detected for the flexed-seated, squatting, forward-bending, and criss-cross-sitting positions, and 4.7% for the ankle-over-knee position. In extension, most hips (60.5%) impinged at or before 50° of external rotation (pivoting); many of these impingement events were due to a prominent ischium. The mean maximum external rotation prior to impingement was 45.9° (15° to 80°), and 57.9° (20° to 90°) prior to prosthetic impingement. No impingement was found in standing, sitting, crossing ankles, seiza, and downward dog.

Conclusion This study demonstrated that positions of daily living tested in a CT-based 3D model show high rates of impingement. Simulating additional positions through 3D modelling is a low-cost method of potentially improving outcomes without compromising patient safety. By incorporating CT-based 3D modelling of positions of daily living into routine preoperative protocols for THA, there is potential to lower the risk of postoperative impingement events.
12. Robotic-assisted percutaneous pelvis fixation: A case report. Clin Case Rep 2023; 11:e7527. PMID: 37323256; PMCID: PMC10264929; DOI: 10.1002/ccr3.7527.

Abstract
Key Clinical Message It may be possible to extend the use of the robotic arm to pelvic and acetabular surgery, leading to safe, repeatable screw placement and less radiation exposure for patients, surgeons, and OR staff.

In this case, a novel robotic-assisted technique was used to place a sacroiliac screw in a patient with unstable injuries of the pelvic ring. Intraoperative and postoperative fluoroscopic, radiographic, and CT imaging demonstrated a safely positioned 6.5 mm cannulated screw without unplanned cortical violation or impingement on neurovascular structures. To our knowledge, this is the first such reported case using a robot widely available in the Americas or Europe.
13. Viewpoint-Controllable Telepresence: A Robotic-Arm-Based Mixed-Reality Telecollaboration System. Sensors (Basel, Switzerland) 2023; 23:4113. PMID: 37112455; PMCID: PMC10145150; DOI: 10.3390/s23084113.

Abstract
In mixed-reality (MR) telecollaboration, the local environment is presented to a remote user wearing a virtual-reality (VR) head-mounted display (HMD) via a video capture device. However, remote users frequently face challenges in naturally and actively manipulating their viewpoint. In this paper, we propose a telepresence system with viewpoint control that places a robotic arm equipped with a stereo camera in the local environment. This system enables remote users to actively and flexibly observe the local environment by moving their heads to manipulate the robotic arm. Additionally, to address the limited field of view of the stereo camera and the limited movement range of the robotic arm, we propose a 3D reconstruction method combined with a stereo-video field-of-view enhancement technique to guide remote users to stay within the arm's movement range and to provide them with a wider perception of the local environment. Finally, a mixed-reality telecollaboration prototype was built, and two user studies were conducted to evaluate the overall system. User study A evaluated the interaction efficiency, system usability, workload, copresence, and user satisfaction of our system from the remote user's perspective; the results showed that our system effectively improves interaction efficiency while achieving a better user experience than two traditional view-sharing techniques, one based on 360° video and the other on the local user's first-person view. User study B evaluated the prototype as a whole from both the remote-user and local-user sides, providing directions and suggestions for the subsequent design and improvement of our mixed-reality telecollaboration system.
14. Research on force and position control performance of the tendon sheath system with time-varying parameters and flexible robotic arms. Int J Med Robot 2023:e2517. PMID: 37042101; DOI: 10.1002/rcs.2517.

Abstract
BACKGROUND The tendon-sheath system (TSS) is an excellent medium for remote power transmission and is widely used in laparoscopic surgical robots. Since the operation process requires the robot to move continuously, this time-varying characteristic further aggravates the force and position transmission losses caused by the nonlinear friction of the TSS, which affects the control accuracy of the surgical robot.

METHODS A time-varying tendon-sheath transmission model (RT model) is proposed, and a feedforward control system is designed to improve transmission accuracy. Furthermore, a tendon-sheath transmission model with velocity characteristics (RV model) is established.

RESULTS Force, position, and velocity experiments were carried out on a TSS platform with a robotic arm. The results show that the R-squared values of force and position compensation are at least 96.57% and 99.16%, respectively.

CONCLUSION The proposed RT and RV models effectively compensate for TSS transmission losses during the operation of the surgical robot.
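The baseline loss that such models extend is the textbook capstan effect: Coulomb friction between tendon and sheath attenuates the transmitted force exponentially in the sheath's total bending angle. This sketch shows only that classical baseline, not the paper's time-varying RT/RV models; the friction coefficient and forces are illustrative:

```python
import math

def tendon_output_force(t_in, mu, theta_rad):
    """Capstan equation: force reaching the distal end of a tendon-sheath
    path with friction coefficient mu and total bend angle theta_rad."""
    return t_in * math.exp(-mu * theta_rad)

# 10 N applied at the proximal end, mu = 0.1, sheath bent through 180 deg:
t_out = tendon_output_force(10.0, 0.1, math.pi)
print(round(t_out, 2))  # about 7.3 N reaches the distal end
```

Because the attenuation depends on the instantaneous sheath configuration, a moving surgical robot sees a time-varying loss, which is exactly the gap the RT and RV models are built to compensate.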
15. Multi-triangles cylindrical origami and inspired metamaterials with tunable stiffness and stretchable robotic arm. PNAS Nexus 2023; 2:pgad098. PMID: 37065617; PMCID: PMC10096905; DOI: 10.1093/pnasnexus/pgad098.

Abstract
Kresling-pattern origami-inspired structural design has been widely investigated for its bistable property and its single coupled degree of freedom (DOF). Obtaining new properties or new origami-inspired structures requires innovating on the crease lines in the flat sheet of the Kresling pattern. Here, we present a derivative of Kresling-pattern origami, the multi-triangles cylindrical origami (MTCO), which is tristable. The truss model is modified based on the active crease lines that switch during the folding motion of the MTCO. Using the energy landscape obtained from the modified truss model, the tristable property is validated and extended to Kresling-pattern origami. The high stiffness of the third stable state and of some special stable states is also discussed. In addition, we create MTCO-inspired metamaterials with deployability and tunable stiffness, and MTCO-inspired robotic arms with wide movement ranges and rich motion forms. This work advances research on Kresling-pattern origami, and the design ideas behind the metamaterials and robotic arms can help improve the stiffness of deployable structures and inform the design of mobile robots.
16. Robotic arm use for upper limb rehabilitation after stroke: A systematic review and meta-analysis. Kaohsiung J Med Sci 2023; 39:435-445. PMID: 36999894; DOI: 10.1002/kjm2.12679.

Abstract
Several studies have reported the effects of robotic arms on improving upper limb function in patients with stroke. However, previous studies have reported inconsistent findings, which may lead to incorrect applications of robotic arms. Six databases were searched for relevant randomized controlled trials. Meta-analyses were performed for upper limb performance measures, including subgroup analyses of pooled upper limb rehabilitation data such as stroke stage and intervention delivery dose. Furthermore, the Cochrane risk-of-bias tool for randomized trials version 2 (RoB 2) and sensitivity analysis were used to assess methodology and identify publication bias. The final analysis included 18 studies. Robotic arms improved upper limb and hand function in patients with stroke. Subgroup analysis revealed that robotic arm interventions lasting 30 to 60 minutes per session significantly improved upper limb function, whereas no significant improvement was observed in shoulder and elbow or wrist and hand movements. This review may inform the development of applicable rehabilitation robots and collaboration with clinicians.
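The pooling behind such a meta-analysis is typically inverse-variance weighting: each trial's effect estimate (e.g. a standardized mean difference for upper-limb function) is weighted by the reciprocal of its variance. A fixed-effect sketch with invented trial data (the review itself likely also reports random-effects results, which this omits):

```python
def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)        # precision adds across trials
    return pooled, pooled_var

effects = [0.40, 0.25, 0.55]     # hypothetical per-trial SMDs
variances = [0.04, 0.02, 0.08]   # hypothetical sampling variances
pooled, var = pool_fixed_effect(effects, variances)
print(round(pooled, 3), round(var ** 0.5, 3))  # pooled SMD and its SE
```

Subgroup analyses, such as the 30-to-60-minute dose subgroup above, simply repeat this pooling over the trials in each subgroup.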
17. Novel Robotic Arm Working-Area AI Protection System. Sensors (Basel, Switzerland) 2023; 23:2765. PMID: 36904969; PMCID: PMC10007316; DOI: 10.3390/s23052765.

Abstract
Manufacturing has progressed from traditional handcraft to machine processing and even human-robot collaboration, and each step carries risks: traditional manual lathes and milling machines, sophisticated robotic arms, and computer numerical control (CNC) operations are all quite dangerous. To ensure the safety of workers in automated factories, a novel and efficient warning-range algorithm is proposed to determine whether a person is within the warning range, and the YOLOv4-tiny object detection algorithm is introduced to improve object recognition accuracy. The results are displayed on a stack light and served through an M-JPEG streaming server so that the detected image can be viewed in a browser. Experimental results from this system installed on a robotic arm workstation show that recognition accuracy reaches 97%. When a person enters the danger range of the working robotic arm, the arm can be stopped within about 50 ms, which effectively improves the safety of its use.
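The core of a warning-range check of this kind can be sketched as a zone classification on the detected person's floor position relative to the robot base. The radii, coordinates, and zone names below are invented for illustration, not the paper's parameters:

```python
import math

WARN_RADIUS, DANGER_RADIUS = 2.0, 1.0   # metres (illustrative)

def classify(person_xy, robot_xy=(0.0, 0.0)):
    """Classify a detected person's position (e.g. the YOLO bounding-box
    centre projected to floor coordinates) into safety zones."""
    d = math.dist(person_xy, robot_xy)
    if d <= DANGER_RADIUS:
        return "stop"      # halt the arm (the abstract reports ~50 ms)
    if d <= WARN_RADIUS:
        return "warn"      # light the stack light
    return "safe"

print(classify((0.5, 0.5)))   # stop
print(classify((1.2, 1.2)))   # warn
print(classify((3.0, 0.0)))   # safe
```

In the deployed system this check runs per frame on the detector's output, so the end-to-end stop latency is detection time plus this (negligible) geometric test plus the arm's halt command.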
18. An enhanced deep deterministic policy gradient algorithm for intelligent control of robotic arms. Front Neuroinform 2023; 17:1096053. PMID: 36756212; PMCID: PMC9899791; DOI: 10.3389/fninf.2023.1096053.

Abstract
To address the poor robustness and adaptability of traditional control methods in different situations, the deep deterministic policy gradient (DDPG) algorithm is improved by designing a hybrid reward function that superimposes several distinct rewards. In addition, the experience replay mechanism of DDPG is improved by combining priority sampling with uniform sampling to accelerate convergence. Finally, it is verified in a simulation environment that the improved DDPG algorithm achieves accurate control of robot arm motion. The experimental results show that the improved DDPG algorithm converges in a shorter time, and the average success rate in the robotic arm end-reaching task is as high as 91.27%. Compared with the original DDPG algorithm, it shows more robust environmental adaptability.
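The modified experience replay can be sketched as a sampler that draws part of each batch proportionally to priority and the rest uniformly. The 50/50 split and the toy priorities are assumptions for illustration; the paper does not specify its exact split:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_sample(priorities, batch_size, ratio=0.5):
    """Draw a replay batch that mixes priority sampling with uniform
    sampling. `ratio` is the fraction of the batch drawn proportionally
    to priority (an assumed value)."""
    n = len(priorities)
    k_pri = int(batch_size * ratio)
    p = np.asarray(priorities, dtype=float)
    p = p / p.sum()
    pri_idx = rng.choice(n, size=k_pri, replace=True, p=p)          # priority part
    uni_idx = rng.choice(n, size=batch_size - k_pri, replace=True)  # uniform part
    return np.concatenate([pri_idx, uni_idx])

# Transitions with large TD error (index 2) are replayed more often,
# while the uniform half keeps the whole buffer covered.
idx = mixed_sample(priorities=[0.1, 0.1, 5.0, 0.1], batch_size=64)
print(len(idx), float((idx == 2).mean()))
```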
|
19
|
Robot-Aided Magnetic Navigation System for Wireless Capsule Manipulation. MICROMACHINES 2023; 14:269. [PMID: 36837968 PMCID: PMC9964025 DOI: 10.3390/mi14020269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/30/2022] [Revised: 01/12/2023] [Accepted: 01/18/2023] [Indexed: 06/18/2023]
Abstract
Magnetic navigation systems (MNSs) have been developed for use in the diagnosis of gastrointestinal problems. However, most conventional MNSs are expensive and structurally constrained by their large weight and volume. Therefore, this paper proposes C-Mag, a novel compact MNS composed of two electromagnets and a robotic arm. The two electromagnets generate a planar magnetic field, and the robotic arm rotates and translates the electromagnets to manipulate a magnetic capsule in a large 3-dimensional (3-D) space. The C-Mag design considers the payload of the robotic arm and the capacity of the power supply unit. Under these constraints, the C-Mag was optimized to generate the maximum magnetic field, taking several major factors into account. Finally, the C-Mag was constructed; the maximum field generated in a single direction was 18.65 mT (downward), and the maximum rotating magnetic field, used to manipulate the capsule, was 13.21 mT. The performance was verified by measuring the generated magnetic field, which matched the simulated results well. Additionally, a path-following experiment showed that the proposed C-Mag can effectively manipulate the magnetic capsule in 3-D space using the robotic arm. This study is expected to contribute to the further development of magnetic navigation systems for treating gastrointestinal problems.
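Driving two orthogonal electromagnets with sinusoidal currents in quadrature yields a field vector of constant magnitude that rotates in the plane, which is the kind of rotating field used to manipulate the capsule. A minimal sketch, using the reported 13.21 mT maximum rotating field and an assumed rotation frequency:

```python
import math

B_ROT = 13.21e-3   # T, maximum rotating field reported for the C-Mag
FREQ = 1.0         # Hz, rotation frequency (an assumed value)

def rotating_field(t):
    """Planar field vector (Bx, By) at time t: quadrature sinusoids on
    the two electromagnets give a constant-magnitude vector rotating
    at FREQ revolutions per second."""
    w = 2.0 * math.pi * FREQ
    return B_ROT * math.cos(w * t), B_ROT * math.sin(w * t)

bx, by = rotating_field(0.125)
print(round(math.hypot(bx, by) * 1e3, 2), "mT")   # magnitude stays constant
```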
|
20
|
Deep Instance Segmentation and Visual Servoing to Play Jenga with a Cost-Effective Robotic System. SENSORS (BASEL, SWITZERLAND) 2023; 23:752. [PMID: 36679543 PMCID: PMC9866192 DOI: 10.3390/s23020752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/14/2022] [Revised: 01/04/2023] [Accepted: 01/06/2023] [Indexed: 06/17/2023]
Abstract
The game of Jenga is a benchmark used for developing innovative manipulation solutions for complex tasks. Indeed, it encourages the study of novel robotic methods for successfully extracting blocks from a tower. A Jenga game involves many traits of complex industrial and surgical manipulation tasks, requiring a multi-step strategy, the combination of visual and tactile data, and highly precise motion of a robotic arm to perform a single block extraction. In this work, we propose a novel, cost-effective architecture for playing Jenga with e.DO, a 6-DOF anthropomorphic manipulator manufactured by Comau, a standard depth camera, and an inexpensive monodirectional force sensor. Our solution focuses on a visual-based control strategy to accurately align the end-effector with the desired block, enabling block extraction by pushing. To this aim, we trained an instance segmentation deep learning model on a synthetic custom dataset to segment each piece of the Jenga tower, allowing visual tracking of the desired block's pose during the motion of the manipulator. We integrated the visual-based strategy with a 1D force sensor to detect whether the block could be safely removed by identifying a force threshold value. Our experiments show that this low-cost solution allows e.DO to precisely reach removable blocks and perform up to 14 consecutive extractions.
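The force-gated extraction logic can be sketched as a stepwise push that aborts once the 1D sensor exceeds a threshold marking a load-bearing block. The threshold value, step count, and callback interface are illustrative assumptions, not values from the paper:

```python
# Assumed parameters for illustration only.
FORCE_THRESHOLD = 2.0   # newtons
MAX_STEPS = 50

def try_extract(read_force, advance):
    """Push the end-effector forward in small steps.

    read_force(): returns the current 1-D force reading in N.
    advance():    moves the end-effector one step along the push axis.
    Returns True if the full stroke completes (block removable),
    False if the force threshold is exceeded (abort safely).
    """
    for _ in range(MAX_STEPS):
        if read_force() > FORCE_THRESHOLD:
            return False          # block is load-bearing: abort
        advance()
    return True                   # full stroke completed: block removable

# Simulated loose block: force stays low during the whole push.
forces = iter([0.3] * MAX_STEPS)
print(try_extract(lambda: next(forces), lambda: None))
```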
|
21
|
Enhanced Authenticated Key Agreement for Surgical Applications in a Tactile Internet Environment. SENSORS (BASEL, SWITZERLAND) 2022; 22:7941. [PMID: 36298289 PMCID: PMC9612115 DOI: 10.3390/s22207941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/19/2022] [Revised: 10/08/2022] [Accepted: 10/13/2022] [Indexed: 06/16/2023]
Abstract
The Tactile Internet enables physical touch to be transmitted over the Internet. In the context of electronic medicine, an authenticated key agreement for the Tactile Internet allows surgeons to perform operations via robotic systems and receive tactile feedback from remote patients. The fifth generation of networks has completely changed the network space and increased the efficiency of the Tactile Internet with its ultra-low latency, high data rates, and reliable connectivity. However, an inappropriate and insecure authenticated key agreement for the Tactile Internet may cause misjudgment and improper operation by medical staff, endangering patients' lives. In 2021, Kamil et al. developed a novel and lightweight authenticated key agreement scheme suitable for remote surgery applications in the Tactile Internet environment. However, their scheme directly encrypts communication messages with constant secret keys and stores secret keys directly in the verifier table, making it vulnerable to several attacks. Therefore, in this investigation, we discuss the limitations of the scheme proposed by Kamil et al. and present an enhanced scheme. The enhanced scheme uses a one-time key to protect communication messages and protects the verifier table with a secret gateway key to mitigate the identified limitations. The enhanced scheme is proven secure against possible attacks, provides more security functionalities than similar schemes, and retains a lightweight computational cost.
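The central idea of the enhancement, replacing encryption under a constant secret with fresh per-session one-time keys, can be sketched generically. This is not the paper's exact construction; the HMAC-based derivation and nonce layout are assumptions:

```python
import hmac
import hashlib
import os

def derive_one_time_key(long_term_key: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """One-time session key = HMAC-SHA256(long-term key, nonce_A || nonce_B).

    Because both parties contribute a fresh random nonce per session,
    every session encrypts under a different key, so compromising one
    session key does not expose traffic from other sessions.
    """
    return hmac.new(long_term_key, nonce_a + nonce_b, hashlib.sha256).digest()

ltk = os.urandom(32)                                   # shared long-term secret
k1 = derive_one_time_key(ltk, os.urandom(16), os.urandom(16))
k2 = derive_one_time_key(ltk, os.urandom(16), os.urandom(16))
print(k1 != k2)   # fresh nonces yield a different key every session
```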
|
22
|
IoT-Based Fish Farm Water Quality Monitoring System. SENSORS (BASEL, SWITZERLAND) 2022; 22:6700. [PMID: 36081159 PMCID: PMC9460614 DOI: 10.3390/s22176700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/17/2022] [Revised: 08/13/2022] [Accepted: 08/15/2022] [Indexed: 06/15/2023]
Abstract
Typhoons in summer and cold snaps in winter often cause huge aquaculture losses in Taiwan, and the industry also faces a shortage of human resources. Therefore, we used wireless transmission technology with various sensors to transmit the temperature, pH value, dissolved oxygen level, water level, and remaining sensor life from the fish farm to a server. The integrated data are transmitted to mobile devices through the Internet of Things, enabling administrators to monitor the water quality in a fish farm from their mobile devices. Because current pH sensors cannot remain submerged for long periods, staff would otherwise have to carry the instrument to each fish farm for testing at fixed times. Therefore, a robotic arm was developed to perform the measurement and maintenance actions automatically. We designed this arm with a programmable logic controller, a single-chip microcontroller combined with a wireless transmission module, and an embedded system, divided into control, measurement, server, and mobile subsystems. The intelligent measurement equipment designed in this study can operate 24 h per day, which effectively reduces losses caused by personnel, material resources, and data errors.
|
23
|
Know Your Movements: Poorer Proprioceptive Accuracy is Associated With Overprotective Avoidance Behavior. THE JOURNAL OF PAIN 2022; 23:1400-1409. [PMID: 35341984 DOI: 10.1016/j.jpain.2022.03.233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/08/2021] [Revised: 02/23/2022] [Accepted: 03/05/2022] [Indexed: 06/14/2023]
Abstract
Pain-related avoidance of movements that are actually safe (ie, overprotective behavior) plays a key role in chronic pain disability. Avoidance is reinforced through operant learning: after learning that a certain movement elicits pain, movements that prevent pain are more likely to be performed. Proprioceptive accuracy importantly contributes to motor learning and memory. Interestingly, reduced accuracy has been documented in various chronic pain conditions, prompting the question whether this relates to avoidance becoming excessive. Using robotic arm-reaching movements, we tested the hypothesis that poor proprioceptive accuracy is associated with excessive pain-related avoidance in pain-free participants. Participants first performed a task to assess proprioceptive accuracy, followed by an operant avoidance training during which a pain stimulus was presented when they performed one movement trajectory, but not when they performed another trajectory. During a test phase, movements were no longer restricted to 2 trajectories, but participants were instructed to avoid pain. Unbeknownst to the participants, the pain stimulus was never presented during this phase. Results supported our hypothesis. Furthermore, exploratory analyses indicated a reduction in proprioceptive accuracy after avoidance learning, which was associated with excessive avoidance and higher trait fear of pain. PERSPECTIVE: This study is the first to show that poorer proprioceptive accuracy is associated with excessive pain-related avoidance. This finding is especially relevant for chronic pain conditions, as reduced accuracy has been documented in these populations, and points toward the need for research on training accuracy to tackle excessive avoidance.
|
24
|
Are there functional biomechanical differences in robotic arm-assisted bi-unicompartmental knee arthroplasty compared with conventional total knee arthroplasty? A prospective, randomized controlled trial. Bone Joint J 2022; 104-B:433-443. [PMID: 35360949 DOI: 10.1302/0301-620x.104b4.bjj-2021-0837.r1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Indexed: 11/05/2022]
Abstract
AIMS The aim of this study was to compare any differences in the primary outcome (biphasic flexion knee moment during gait) of robotic arm-assisted bi-unicompartmental knee arthroplasty (bi-UKA) with conventional mechanically aligned total knee arthroplasty (TKA) at one year post-surgery. METHODS A total of 76 patients (34 bi-UKA and 42 TKA patients) were analyzed in a prospective, single-centre, randomized controlled trial. Flat ground shod gait analysis was performed preoperatively and one year postoperatively. Knee flexion moment was calculated from motion capture markers and force plates. The same setup determined proprioception outcomes during a joint position sense test and one-leg standing. Surgery allocation, surgeon, and secondary outcomes were analyzed for prediction of the primary outcome from a binary regression model. RESULTS Both interventions were shown to be effective treatment options, with no significant differences between interventions for the primary outcome of this study (18/35 (51.4%) biphasic TKA patients vs 20/31 (64.5%) biphasic bi-UKA patients; p = 0.558). All outcomes were compared to an age-matched, healthy cohort that outperformed both groups, indicating that residual deficits exist following surgery. Logistic regression analysis of the primary outcome with secondary outcomes indicated that the most significant predictors of postoperative biphasic knee moments were the preoperative knee moment profile and trochlear degradation (Outerbridge) (R2 = 0.381; p = 0.002, p = 0.046). A separate regression of alignment against the primary outcome indicated significant bi-UKA femoral and tibial axial alignment (R2 = 0.352; p = 0.029), and TKA femoral sagittal alignment (R2 = 0.252; p = 0.016). The bi-UKA group showed a significantly increased ability in the proprioceptive joint position test, but no difference was found in more dynamic testing of proprioception.
CONCLUSION Robotic arm-assisted bi-UKA demonstrated equivalence to TKA in achieving a biphasic gait pattern after surgery for osteoarthritis of the knee. Both treatments are successful at improving gait, but both leave patients with a functional limitation that is not present in healthy age-matched controls. Cite this article: Bone Joint J 2022;104-B(4):433-443.
|
25
|
Using Artificial Neuro-Molecular System in Robotic Arm Motion Control-Taking Simulation of Rehabilitation as an Example. SENSORS (BASEL, SWITZERLAND) 2022; 22:2584. [PMID: 35408198 PMCID: PMC9003313 DOI: 10.3390/s22072584] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 02/25/2022] [Revised: 03/23/2022] [Accepted: 03/26/2022] [Indexed: 01/18/2023]
Abstract
Under the delicate control of the brain, people can perform graceful movements through the coordination of muscles, bones, ligaments, and joints. If artificial intelligence can be used to establish a control system that simulates the movement of human arms, the scope of robotic arm applications in assisting people's daily lives could be greatly increased. The purpose of this study is to build a general system that uses intelligent techniques to assist in the construction of a personalized rehabilitation system. More importantly, this research aims to establish an intelligent system that can be adjusted according to the needs of the problem domain; that is, the system can move toward problem-solving through autonomous learning. The artificial neuro-molecular system (ANM system), developed earlier in our laboratory, which captures the close structure/function relationship of biological systems, was used. The system was operated on V-REP (Virtual Robot Experimentation Platform). The results show that the ANM system can use self-learning methods to adjust the start-up time, rotation angle, and operating sequence of the different motors in order to complete the designated task.
|
26
|
Automated Manipulation of Miniature Objects Underwater Using Air Capillary Bridges: Pick-and-Place, Surface Cleaning, and Underwater Origami. ACS APPLIED MATERIALS & INTERFACES 2022; 14:9855-9863. [PMID: 35080367 PMCID: PMC8874901 DOI: 10.1021/acsami.1c23845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/10/2021] [Accepted: 01/14/2022] [Indexed: 06/14/2023]
Abstract
Various insects can entrap and stabilize air plastrons and bubbles underwater. When these bubbles interact with surfaces underwater, they create air capillary bridges that de-wet surfaces and even allow underwater reversible adhesion. In this study, a robotic arm with interchangeable three-dimensional (3D)-printed bubble-stabilizing units is used to create air capillary bridges underwater for manipulation of small objects. Particles of various sizes and shapes, thin sheets and substrates of diverse surface tensions, from hydrophilic to superhydrophobic, can be lifted, transported, placed, and oriented using one- or two-dimensional arrays of bubbles. Underwater adhesion, derived from the air capillary bridges, is quantified depending on the number, arrangement, and size of bubbles and the contact angle of the counter surface. This includes a variety of commercially available materials and chemically modified surfaces. Overall, it is possible to manipulate millimeter- to sub-millimeter-scale objects underwater. This includes cleaning submerged surfaces from colloids and arbitrary contaminations, folding thin sheets to create three-dimensional structures, and precisely placing and aligning objects of various geometries. The robotic underwater manipulator can be used for automation and control in cell culture experiments, lab-on-chip devices, and manipulation of objects underwater. It offers the ability to control the transport and release of small objects without the need for chemical adhesives, suction-based adhesion, anchoring devices, or grabbers.
|
27
|
Design and System Considerations for Construction-Scale Concrete Additive Manufacturing in Remote Environments via Robotic Arm Deposition. 3D PRINTING AND ADDITIVE MANUFACTURING 2022; 9:35-45. [PMID: 36660139 PMCID: PMC9831536 DOI: 10.1089/3dp.2020.0335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/17/2023]
Abstract
This work explores additive manufacturing (AM) of concrete by using a six-axis robotic arm and its use in large-scale, autonomous concrete construction. Concrete AM uses an extrusion method to deposit concrete beads in layers to create a three-dimensional (3D) shape. This method has been found to have many uses and advantages in construction applications. The lack of formwork and the autonomous nature of this manufacturing method allow new geometries and materials to be printed in unsafe or challenging environments. Autonomous construction has been suggested as a method of creating habitats in rapid-response scenarios. This article discusses research toward one such system that could be used to rapidly construct necessary habitats in response to low-resource and emergency situations. This required addressing certain limitations of a six-axis robotic arm platform along with overcoming system challenges to achieve deliverables for NASA's "3D Printed Habitat Challenge." This included system design to increase the build volume, integrate embedding, print non-coplanar sections, and minimize travel moves to address the challenges associated with continuous extrusion of cementitious material. The system was demonstrated by printing a one-third scale habitat, which represents the first 3D-printed fully enclosed structure at an architectural scale without the use of support.
|
28
|
Data-driven artificial and spiking neural networks for inverse kinematics in neurorobotics. PATTERNS (NEW YORK, N.Y.) 2022; 3:100391. [PMID: 35079712 PMCID: PMC8767299 DOI: 10.1016/j.patter.2021.100391] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 08/10/2021] [Revised: 09/06/2021] [Accepted: 10/21/2021] [Indexed: 11/26/2022]
Abstract
Inverse kinematics is fundamental for computational motion planning. It is used to derive an appropriate state in a robot's configuration space, given a target position in task space. In this work, we investigate the performance of fully connected and residual artificial neural networks as well as recurrent, learning-based, and deep spiking neural networks for conventional and geometrically constrained inverse kinematics. We show that while highly parameterized data-driven neural networks with tens to hundreds of thousands of parameters exhibit sub-ms inference time and sub-mm accuracy, learning-based spiking architectures can provide reasonably good results with merely a few thousand neurons. Moreover, we show that spiking neural networks can perform well in geometrically constrained task space, even when configured to an energy-conserved spiking rate, demonstrating their robustness. Neural networks were evaluated on NVIDIA's Xavier and Intel's neuromorphic Loihi chip.
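As a point of reference for what such networks must learn, the task-space-to-configuration-space mapping has a closed form for a planar 2-link arm, which makes a useful baseline when judging the sub-mm accuracy of learned inverse kinematics. A minimal analytic sketch (link lengths are arbitrary example values, not from the paper):

```python
import math

L1, L2 = 1.0, 0.8   # example link lengths

def ik_2link(x, y):
    """Analytic inverse kinematics (elbow-down) for a 2-link planar arm."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2):
    """Forward kinematics, used to verify the IK solution."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

q1, q2 = ik_2link(1.2, 0.5)
print(fk_2link(q1, q2))   # recovers (1.2, 0.5) up to rounding
```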
|
29
|
Comparison of Robotic and Conventional Unicompartmental Knee Arthroplasty Outcomes in Patients with Osteoarthritis: A Retrospective Cohort Study. J Clin Med 2021; 11:220. [PMID: 35011960 PMCID: PMC8745819 DOI: 10.3390/jcm11010220] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/23/2021] [Revised: 12/20/2021] [Accepted: 12/29/2021] [Indexed: 11/17/2022]
Abstract
Robotic-arm-assisted unicompartmental knee arthroplasty (RUKA) was developed to increase the accuracy of bone alignment and implant positioning. This retrospective study explored whether RUKA has more favorable overall outcomes than conventional unicompartmental knee arthroplasty (CUKA). A total of 158 patients with medial compartment osteoarthritis were recruited, of whom 85 had undergone RUKA with the Mako system and 73 had undergone CUKA. The accuracy of component positioning and bone anatomical alignment was compared using preoperative and postoperative radiographs. Clinical outcomes were evaluated using questionnaires, which the patients completed preoperatively and then postoperatively at six months, one year, and two years. In total, 52 patients from the RUKA group and 61 from the CUKA group were eligible for analysis. The preoperative health scores and Kellgren-Lawrence scores were higher in the RUKA group. RUKA exhibited higher implant positioning accuracy, providing a superior femoral implant angle, properly aligned implant placement, and a low rate of overhang. RUKA also achieved higher accuracy in bone anatomical alignment (tibial axis angle and anatomical axis angle) than CUKA, but surgical time was longer and blood loss was greater. No significant differences were observed in the clinical outcomes of the two procedures.
|
30
|
Tibiofemoral dynamic stressed gap laxities correlate with compartment load measurements in robotic arm-assisted total knee arthroplasty. Bone Jt Open 2021; 2:974-980. [PMID: 34818899 PMCID: PMC8636298 DOI: 10.1302/2633-1462.211.bjo-2021-0066.r1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/05/2022]
Abstract
Aims It is unknown whether gap laxities measured in robotic arm-assisted total knee arthroplasty (TKA) correlate to load sensor measurements. The aim of this study was to determine whether symmetry of the maximum medial and lateral gaps in extension and flexion was predictive of knee balance in extension and flexion respectively using different maximum thresholds of intercompartmental load difference (ICLD) to define balance. Methods A prospective cohort study of 165 patients undergoing functionally-aligned TKA was performed (176 TKAs). With trial components in situ, medial and lateral extension and flexion gaps were measured using robotic navigation while applying valgus and varus forces. The ICLD between medial and lateral compartments was measured in extension and flexion with the load sensor. The null hypothesis was that stressed gap symmetry would not correlate directly with sensor-defined soft tissue balance. Results In TKAs with a stressed medial-lateral gap difference of ≤1 mm, 147 (89%) had an ICLD of ≤15 lb in extension, and 112 (84%) had an ICLD of ≤ 15 lb in flexion; 157 (95%) had an ICLD ≤ 30 lb in extension, and 126 (94%) had an ICLD ≤ 30 lb in flexion; and 165 (100%) had an ICLD ≤ 60 lb in extension, and 133 (99%) had an ICLD ≤ 60 lb in flexion. With a 0 mm difference between the medial and lateral stressed gaps, 103 (91%) of TKA had an ICLD ≤ 15 lb in extension, decreasing to 155 (88%) when the difference between the medial and lateral stressed extension gaps increased to ± 3 mm. In flexion, 47 (77%) had an ICLD ≤ 15 lb with a medial-lateral gap difference of 0 mm, increasing to 147 (84%) at ± 3 mm. Conclusion This study found a strong relationship between intercompartmental loads and gap symmetry in extension and flexion measured with prostheses in situ. The results suggest that ICLD and medial-lateral gap difference provide similar assessment of soft-tissue balance in robotic arm-assisted TKA. Cite this article: Bone Jt Open 2021;2(11):974–980.
|
31
|
Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain-computer interface. J Neural Eng 2021; 18. [PMID: 34654000 DOI: 10.1088/1741-2552/ac3044] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 08/12/2021] [Accepted: 10/15/2021] [Indexed: 11/12/2022]
Abstract
Objective. Brain-controlled robotic arms have shown broad application prospects with the development of robotics and information decoding. However, disadvantages such as poor flexibility restrict their wide application. Approach. To alleviate these drawbacks, this study proposed a robotic arm asynchronous control system based on steady-state visual evoked potentials (SSVEPs) in an augmented reality (AR) environment. In the AR environment, the participants were able to see the robotic arm and the visual stimulation interface concurrently through the AR device, so there was no need to switch attention frequently between the two. This study proposed a multi-template algorithm based on canonical correlation analysis and task-related component analysis to identify 12 targets. An optimization strategy based on a dynamic window was adopted to adaptively adjust the duration of visual stimulation. Main results. The high-frequency SSVEP-based brain-computer interface (BCI) realized the switching of system states, which controlled the robotic arm asynchronously. The average accuracy of the offline experiment was 94.97%, whereas the average information transfer rate was 67.37 ± 14.27 bits·min-1. The online results from ten healthy subjects showed that the average selection time for a single online command was 2.04 s, which effectively reduced the visual fatigue of the subjects. Each subject could quickly complete a puzzle task. Significance. The experimental results demonstrated the feasibility and potential of this human-computer interaction strategy and provide new ideas for BCI-controlled robots.
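The canonical-correlation stage of such SSVEP decoders can be sketched with plain linear algebra: the candidate frequency whose sine/cosine reference set correlates most strongly with the multi-channel EEG is selected. This sketch omits the paper's task-related-component templates and dynamic window; the toy signal and parameters are assumptions:

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between X (samples x p) and
    Y (samples x q), computed via QR decompositions."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n, harmonics=2):
    """Sine/cosine reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n) / fs
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, harmonics + 1)
                            for f in (np.sin, np.cos)])

# Toy 4-channel 'EEG': a 10 Hz flicker response plus noise (assumed values).
fs, n = 250, 500
rng = np.random.default_rng(1)
t = np.arange(n) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)
                       for _ in range(4)])

# Score each candidate frequency; the decoder picks the best-correlated one.
scores = {f: cca_max_corr(eeg, ssvep_reference(f, fs, n)) for f in (9, 10, 11)}
print(max(scores, key=scores.get))   # selects the true 10 Hz stimulus
```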
|
32
|
Mechanical Valves for On-Board Flow Control of Inflatable Robots. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2021; 8:e2101941. [PMID: 34494725 PMCID: PMC8564437 DOI: 10.1002/advs.202101941] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 05/09/2021] [Revised: 07/03/2021] [Indexed: 06/13/2023]
Abstract
Inflatable robots are becoming increasingly popular, especially in applications where safe interactions are a priority. However, designing multifunctional robots that can operate with a single pressure input is challenging. A potential solution is to couple inflatables with passive valves that can harness the flow characteristics to create functionality. In this study, simple, easy to fabricate, lightweight, and inexpensive mechanical valves are presented that harness viscous flow and snapping arch principles. The mechanical valves can be fully integrated on-board, enabling the control of the incoming airflow to realize multifunctional robots that operate with a single pressure input, with no need for electronic components, cables, or wires. By means of three robotic demos and guided by a numerical model, the capabilities of the valves are demonstrated and optimal input profiles are identified to achieve prescribed functionalities. The study enriches the array of available mechanical valves for inflatable robots and enables new strategies to realize multifunctional robots with on-board flow control.
|
33
|
[Robotic arm control system based on augmented reality brain-computer interface and computer vision]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2021; 38:483-491. [PMID: 34180193 DOI: 10.7507/1001-5515.202011039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/03/2022]
Abstract
Brain-computer interfaces (BCIs) have great potential to replace lost upper limb function. Thus, there has been great interest in the development of BCI-controlled robotic arms. However, few studies have attempted to use noninvasive electroencephalography (EEG)-based BCIs to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based BCI paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated on by the robotic arm, while computer vision provided the location, color, and shape information of the objects. According to the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick an object and place it at a specific location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI, and computer vision to control a robotic arm and are expected to provide new ideas for innovative robotic arm control approaches.
|
34
|
A brain-actuated robotic arm system using non-invasive hybrid brain-computer interface and shared control strategy. J Neural Eng 2021; 18. [PMID: 33862607 DOI: 10.1088/1741-2552/abf8cb] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 11/26/2020] [Accepted: 04/16/2021] [Indexed: 01/20/2023]
Abstract
Objective.The electroencephalography (EEG)-based brain-computer interfaces (BCIs) have been used in the control of robotic arms. The performance of non-invasive BCIs may not be satisfactory due to the poor quality of EEG signals, so the shared control strategies were tried as an alternative solution. However, most of the existing shared control methods set the arbitration rules manually, which highly depended on the specific tasks and developer's experience. In this study, we proposed a novel shared control model that automatically optimized the control commands in a dynamical way based on the context in real-time control. Besides, we employed the hybrid BCI to better allocate commands with multiple functions. The system allowed non-invasive BCI users to manipulate a robotic arm moving in a three-dimensional (3D) space and complete a pick-place task of multiple objects.Approach.Taking the scene information obtained by computer vision as a knowledge base, a machine agent was designed to infer the user's intention and generate automatic commands. Based on the inference confidence and user's characteristic, the proposed shared control model fused the machine autonomy and human intention dynamically for robotic arm motion optimization during the online control. In addition, we introduced a hybrid BCI scheme that applied steady-state visual evoked potentials and motor imagery to the divided primary and secondary BCI interfaces to better allocate the BCI resources (e.g. decoding computing power, screen occupation) and realize the multi-dimensional control of the robotic arm.Main results.Eleven subjects participated in the online experiments of picking and placing five objects that scattered at different positions in a 3D workspace. 
The results showed that most subjects could control the robotic arm to complete the picking task accurately and robustly, with an average success rate of approximately 85% under the shared control strategy, while the average success rate of the placing task under pure BCI control was approximately 50%. Significance. In this paper, we proposed a novel shared controller for automatic motion optimization, together with a hybrid BCI control scheme that allocated paradigms according to the importance of commands to realize multi-dimensional and effective control of a robotic arm. Our study indicated that a shared control strategy with a hybrid BCI can greatly improve the performance of a brain-actuated robotic arm system.
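The abstract describes confidence-weighted fusion of machine autonomy and human intention but gives no formula. A minimal sketch of one plausible arbitration rule, where the blend weight, the `user_skill` term, and all names are assumptions, not the paper's method:

```python
def fuse_commands(u_human, u_auto, confidence, user_skill=0.5):
    """Blend a human BCI command vector with a machine-inferred one.
    The weight on the machine command grows with the agent's inference
    confidence and shrinks with the user's demonstrated skill, so a
    skilled user with an uncertain agent keeps more direct control.
    Both confidence and user_skill are assumed to lie in [0, 1]."""
    w = confidence * (1.0 - user_skill)
    return [w * a + (1.0 - w) * h for h, a in zip(u_human, u_auto)]
```

With full confidence and a novice user the machine command dominates; with zero confidence the human command passes through unchanged.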
Collapse
|
35
|
Intelligent Trajectory Tracking Behavior of a Multi-Joint Robotic Arm via Genetic-Swarm Optimization for the Inverse Kinematic Solution. SENSORS 2021; 21:s21093171. [PMID: 34063574 PMCID: PMC8124729 DOI: 10.3390/s21093171] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 04/25/2021] [Accepted: 04/29/2021] [Indexed: 11/26/2022]
Abstract
It is necessary to control the movement of a complex multi-joint structure such as a robotic arm so that it reaches a target position accurately in various applications. In this paper, a hybrid optimal Genetic–Swarm solution to the Inverse Kinematics (IK) problem of a robotic arm is presented. Each joint is controlled by a Proportional–Integral–Derivative (PID) controller optimized with a combination of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), called Genetic–Swarm Optimization (GSO). GSO solves the IK of each joint, while the dynamic model is derived from the Lagrangian. PID tuning is posed as an optimization problem and solved by PSO for the simulated model in a virtual environment. A graphical user interface has been developed as a front-end application. Based on the combination of hybrid optimal GSO and PID control, the system is shown to work efficiently. Finally, we compare hybrid optimal GSO with conventional optimization methods using statistical analysis.
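The abstract says PID tuning is posed as an optimization problem solved by PSO, without implementation details. A stdlib-only sketch of that step, where the toy joint dynamics, the gain bounds, and the PSO hyperparameters are all assumed for illustration:

```python
import random

def simulate_joint(kp, ki, kd, target=1.0, dt=0.01, steps=300):
    """Integrate a toy damped joint model under PID control and
    return the integral of absolute error (IAE) as the tuning cost."""
    pos, vel, integ, prev_err, iae = 0.0, 0.0, 0.0, target, 0.0
    for _ in range(steps):
        err = target - pos
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        vel += (u - 2.0 * vel) * dt   # actuator with viscous damping
        pos += vel * dt
        prev_err = err
        iae += abs(err) * dt
    return iae

def pso_tune_pid(n_particles=12, iters=30, seed=0):
    """Standard global-best PSO over (kp, ki, kd), clamped to bounds."""
    rng = random.Random(seed)
    bounds = [(0.0, 50.0), (0.0, 10.0), (0.0, 5.0)]
    parts = [[rng.uniform(lo, hi) for lo, hi in bounds]
             for _ in range(n_particles)]
    vels = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    pcost = [simulate_joint(*p) for p in parts]
    gbest = pbest[min(range(n_particles), key=lambda i: pcost[i])][:]
    gcost = min(pcost)
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.5 * r1 * (pbest[i][d] - p[d])
                              + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vels[i][d], bounds[d][0]),
                           bounds[d][1])
            c = simulate_joint(*p)
            if c < pcost[i]:
                pcost[i], pbest[i] = c, p[:]
                if c < gcost:
                    gcost, gbest = c, p[:]
    return gbest, gcost
```

The paper's GSO additionally hybridizes a GA with this swarm update; only the PSO half is sketched here.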
Collapse
|
36
|
Neuromorphic NEF-Based Inverse Kinematics and PID Control. Front Neurorobot 2021; 15:631159. [PMID: 33613225 PMCID: PMC7887770 DOI: 10.3389/fnbot.2021.631159] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 01/05/2021] [Indexed: 11/13/2022] Open
Abstract
Neuromorphic implementation of robotic control has been shown to outperform conventional control paradigms in terms of robustness to perturbations and adaptation to varying conditions. Two main ingredients of robotics are inverse kinematics and Proportional-Integral-Derivative (PID) control. Inverse kinematics is used to compute an appropriate state in a robot's configuration space, given a target position in task space. PID control applies responsive correction signals to a robot's actuators, allowing it to reach its target accurately. The Neural Engineering Framework (NEF) offers a theoretical framework for neuromorphic encoding of mathematical constructs with spiking neurons, enabling the implementation of functional large-scale neural networks. In this work, we developed NEF-based neuromorphic algorithms for inverse kinematics and PID control, which we used to manipulate a 6-degree-of-freedom robotic arm. We used online learning for inverse kinematics and signal integration and differentiation for PID control, offering high-performing and energy-efficient neuromorphic control. The algorithms were evaluated in simulation as well as on Intel's Loihi neuromorphic hardware.
Collapse
|
37
|
Spatial-temporal aspects of continuous EEG-based neurorobotic control. J Neural Eng 2020; 17. [PMID: 33049729 DOI: 10.1088/1741-2552/abc0b4] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 10/13/2020] [Indexed: 12/16/2022]
Abstract
OBJECTIVE The goal of this work is to identify the spatio-temporal facets of state-of-the-art electroencephalography (EEG)-based continuous neurorobotics that need to be addressed, prior to deployment in practical applications at home and in the clinic. APPROACH Nine healthy human subjects participated in five sessions of one-dimensional (1D) horizontal (LR), 1D vertical (UD) and two-dimensional (2D) neural tracking from EEG. Users controlled a robotic arm and virtual cursor to continuously track a Gaussian random motion target using EEG sensorimotor rhythm modulation via motor imagery (MI) commands. Continuous control quality was analyzed in the temporal and spatial domains separately. MAIN RESULTS Axis-specific errors during 2D tasks were significantly larger than during 1D counterparts. Fatigue rates were larger for control tasks with higher cognitive demand (LR, left- and right-hand MI) compared to those with lower cognitive demand (UD, both hands MI and rest). Additionally, robotic arm and virtual cursor control exhibited equal tracking error during all tasks. However, further spatial error analysis of 2D control revealed a significant reduction in tracking quality that was dependent on the visual interference of the physical device. In fact, robotic arm performance was significantly greater than that of virtual cursor control when the users' sightlines were not obstructed. SIGNIFICANCE This work emphasizes the need for practical interfaces to be designed around real-world tasks of increased complexity. Here, the dependence of control quality on cognitive task demand emphasizes the need for decoders that facilitate the translation of 1D task mastery to 2D control. When device footprint was accounted for, the introduction of a physical robotic arm improved control quality, likely due to increased user engagement. 
In general, this work demonstrates the need to consider the physical footprint of devices, the complexity of training tasks, and the synergy of control strategies during the development of neurorobotic control.
Collapse
|
38
|
Abstract
This study presents the design of a pneumatic artificial muscle with integrated soft optical sensing for estimating muscle contraction length and contraction force. Each optical sensor uses a light-emitting diode (LED)-photodiode pair to measure the light reflected by a silicone diaphragm embedded in the muscle. One diaphragm is designed to respond primarily to changes in muscle pressure, whereas the other is designed to respond to changes in muscle length. The muscle sensors were calibrated by measuring muscle contraction force versus length for a range of fixed muscle pressures and then mapping the optical sensor data to the corresponding length and force data. To evaluate sensorized muscle performance in a robotic system, two antagonistic muscle pairs were used to actuate a planar two-degree-of-freedom arm. In various static and dynamic tests, arm positions and forces were estimated from the optical sensor measurements. Optical estimates of static and dynamic end-effector position yielded average errors of 1.3 and 1.1 cm, respectively. Optical estimates of static and dynamic end-effector force yielded average total force errors of 0.16 and 0.12 N for maximum end-effector forces of 2.0 and 2.4 N, respectively.
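The calibration step above maps two optical readings to length or force, but the abstract does not say how. A minimal least-squares sketch under the assumption of an affine model over hypothetical two-channel readings (the real mapping may well be nonlinear):

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c]
                              for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(samples):
    """samples: list of ((s1, s2), y) pairs. Returns (a, b, c)
    minimizing sum((a*s1 + b*s2 + c - y)^2) via normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for (s1, s2), y in samples:
        phi = (s1, s2, 1.0)
        for i in range(3):
            b[i] += phi[i] * y
            for j in range(3):
                A[i][j] += phi[i] * phi[j]
    return solve3(A, b)
```

Fitting one such model per output (length, force) against the bench-test data would reproduce the calibration workflow the abstract describes.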
Collapse
|
39
|
[Kinematics parameter identification and accuracy evaluation method for neurosurgical robot]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2019; 36:994-1002. [PMID: 31875374 DOI: 10.7507/1001-5515.201810054] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Deviation of the kinematic model parameters is the main factor affecting the positioning accuracy of neurosurgical robots. To obtain more realistic kinematic model parameters, this paper proposes an automatic parameter identification and accuracy evaluation method. First, an identification equation containing all robot kinematic parameters was established. Second, a multiple-pivot strategy was proposed to find the relationship between the end-effector and the tracking marker. Then, the relative distance error and the inverse kinematics consistency error were designed to evaluate identification accuracy. Finally, an automatic robot parameter identification and accuracy evaluation system was developed. We tested our method on both a laboratory prototype and a real neurosurgical robot. The results show that this method can identify and evaluate the kinematic model parameters of a neurosurgical robot stably and quickly. Using the identified parameters to control the robot reduced the relative distance error by 33.96% and the inverse kinematics consistency error by 67.30%.
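The abstract names the relative distance error but does not define it. A sketch of one plausible reading, where the pairwise-distance formulation and its frame-invariance rationale are assumptions rather than the paper's definition:

```python
import itertools
import math

def relative_distance_error(model_pts, measured_pts):
    """Mean absolute difference between inter-point distances predicted
    by the kinematic model and those measured by an external tracker.
    Comparing distances rather than raw positions cancels the unknown
    rigid transform between the robot base and the tracker frame."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    pairs = list(itertools.combinations(range(len(model_pts)), 2))
    return sum(abs(dist(model_pts[i], model_pts[j])
                   - dist(measured_pts[i], measured_pts[j]))
               for i, j in pairs) / len(pairs)
```

A rigidly translated copy of the model points yields zero error, while any genuine parameter deviation shows up as a nonzero residual.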
Collapse
|
40
|
An EEG-/EOG-Based Hybrid Brain-Computer Interface: Application on Controlling an Integrated Wheelchair Robotic Arm System. Front Neurosci 2019; 13:1243. [PMID: 31824245 PMCID: PMC6882933 DOI: 10.3389/fnins.2019.01243] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2019] [Accepted: 11/04/2019] [Indexed: 11/13/2022] Open
Abstract
Most existing brain-computer interfaces (BCIs) are designed to control a single assistive device, such as a wheelchair, a robotic arm or a prosthetic limb. However, many daily tasks require combined functions that can only be realized by integrating multiple robotic devices. Such integration raises the requirement for control accuracy and makes reliable control more challenging than in the single-device case. In this study, we propose a novel high-accuracy hybrid BCI based on electroencephalogram (EEG) and electrooculogram (EOG) signals to control an integrated wheelchair robotic arm system. The user turns the wheelchair left/right by performing left/right hand motor imagery (MI), and generates other commands for the wheelchair and the robotic arm by performing eye blinks and eyebrow-raising movements. Twenty-two subjects participated in an MI training session and five of them completed a mobile self-drinking experiment, which was designed purposely with high accuracy requirements. The results demonstrated that the proposed hybrid BCI could provide satisfactory control accuracy for a system consisting of multiple robotic devices, and showed the potential of BCI-controlled systems in complex daily tasks.
Collapse
|
41
|
Reachy, a 3D-Printed Human-Like Robotic Arm as a Testbed for Human-Robot Control Strategies. Front Neurorobot 2019; 13:65. [PMID: 31474846 PMCID: PMC6703080 DOI: 10.3389/fnbot.2019.00065] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Accepted: 07/29/2019] [Indexed: 11/13/2022] Open
Abstract
To this day, despite the increasing motor capability of robotic devices, elaborating efficient control strategies is still a key challenge in the field of humanoid robotic arms. In particular, providing a human “pilot” with efficient ways to drive such a robotic arm requires thorough testing prior to integration into a finished system. Additionally, when anatomical consistency between pilot and robot must be preserved, such testing requires devices with human-like features. To fulfill this need for a biomimetic test platform, we present Reachy, a human-like, life-scale robotic arm with seven joints from shoulder to wrist. Although Reachy does not include a poly-articulated hand and is therefore more suitable for studying reaching than manipulation, a robotic hand prototype from available third-party projects could be integrated into it. Its 3D-printed structure and off-the-shelf actuators make it inexpensive relative to an industrial-grade robot. Its open-source architecture makes it broadly connectable and customizable, so it can be integrated into many applications. To illustrate how Reachy can connect to external devices, this paper presents several proofs of concept in which it is operated with various control strategies, such as tele-operation or gaze-driven control. In this way, Reachy can help researchers explore, develop and test innovative control strategies and interfaces on a human-like robot.
Collapse
|
42
|
Revision Analysis of Robotic Arm-Assisted and Manual Unicompartmental Knee Arthroplasty. J Arthroplasty 2019; 34:926-931. [PMID: 31010509 DOI: 10.1016/j.arth.2019.01.018] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/22/2018] [Accepted: 01/09/2019] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND The purpose of this study was to evaluate hospital admissions for revision surgeries associated with robotic arm-assisted unicompartmental knee arthroplasty (rUKA) vs manually instrumented UKA (mUKA) procedures. METHODS Patients ≥18 years of age who received either a mUKA or a rUKA procedure were candidates for inclusion and were identified by the presence of appropriate billing codes. Procedures performed between March 1, 2013 and July 31, 2015 were used to calculate the rate of surgical revisions occurring within 24 months of the index procedure. Following propensity matching, 246 rUKA and 492 mUKA patients were included. Revision rates and the associated costs were compared between the two cohorts. The Mann-Whitney U test was used to compare continuous variables, and Fisher's exact test was used to analyze discrete categorical variables. RESULTS At 24 months after the primary UKA procedure, patients who underwent rUKA had fewer revision procedures (0.81% [2/246] vs 5.28% [26/492]; P = .002), shorter mean length of stay (2.00 vs 2.33 days; P > .05), and incurred lower mean costs for the index stay plus revisions ($26,001 vs $27,915; P > .05) than mUKA patients. Index length of stay and index costs were also lower for rUKA patients (1.77 vs 2.02 days; P = .0047; $25,786 vs $26,307; P > .05). CONCLUSIONS The study results demonstrate that patients who underwent rUKA had fewer revision procedures, shorter length of stay, and incurred lower mean costs (although not statistically different) during the index admission and at 24 months postoperatively. These results could be important for payers as the prevalence of end-stage knee osteoarthritis increases alongside the demand for cost-efficient treatments.
Collapse
|
43
|
Frameless robot-assisted pallidal deep brain stimulation surgery in pediatric patients with movement disorders: precision and short-term clinical results. J Neurosurg Pediatr 2018; 22:416-425. [PMID: 30028274 DOI: 10.3171/2018.5.peds1814] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVE The purpose of this study was to verify the safety and accuracy of the Neuromate stereotactic robot for use in deep brain stimulation (DBS) electrode implantation for the treatment of hyperkinetic movement disorders in childhood, and to describe the authors' initial clinical results. METHODS A prospective evaluation of pediatric patients with dystonia and other hyperkinetic movement disorders was carried out during the 1st year after the start-up of a pediatric DBS unit in Barcelona. Electrodes were implanted bilaterally in the globus pallidus internus (GPi) using the Neuromate robot without the stereotactic frame. The authors calculated the distances between the electrodes and their respective planned trajectories by merging the postoperative CT with the preoperative plan using VoXim software. Clinical outcome was monitored using validated scales for dystonia and myoclonus preoperatively and at 1 month and 6 months postoperatively, and by means of a quality-of-life questionnaire for children administered before surgery and at 6 months' follow-up. We also recorded complications derived from the implantation technique, "hardware," and stimulation. RESULTS Six patients aged 7 to 16 years and diagnosed with isolated dystonia (DYT1 negative) (3 patients), choreo-dystonia related to a PDE2A mutation (1 patient), or myoclonus-dystonia syndrome with SGCE mutations (2 patients) were evaluated during a period of 6 to 19 months. The average accuracy in the placement of the electrodes was 1.24 mm at the target point. At the 6-month follow-up, patients showed an improvement in the motor (65%) and functional (48%) components of the Burke-Fahn-Marsden Dystonia Rating Scale. Patients with myoclonus and SGCE mutations also showed an improvement in action myoclonus (95%-100%) and in functional tests (50%-75%) according to the Unified Motor-Rating Scale.
The Neuro-QOL score revealed inconsistent results, with improvement in motor function and social relationships but worsening in anxiety, cognitive function, and pain. The only surgical complication was medial displacement of the first electrode in one case, which limited the intensity of stimulation in the lower contacts. CONCLUSIONS The Neuromate stereotactic robot is an accurate and safe tool for the placement of GPi electrodes in children with hyperkinetic movement disorders.
Collapse
|
44
|
Body ownership and agency altered by an electromyographically controlled robotic arm. ROYAL SOCIETY OPEN SCIENCE 2018; 5:172170. [PMID: 29892405 PMCID: PMC5990842 DOI: 10.1098/rsos.172170] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2017] [Accepted: 04/04/2018] [Indexed: 06/08/2023]
Abstract
Understanding how we consciously experience our bodies is a fundamental issue in cognitive neuroscience. Two fundamental components of this are the sense of body ownership (the experience of the body as one's own) and the sense of agency (the feeling of control over one's bodily actions). These constructs have been used to investigate the incorporation of prostheses. To date, however, no evidence has shown whether representations of ownership and agency in amputees are altered when operating a robotic prosthesis. Here we investigated a myoelectrically controlled robotic arm, for which the user varied the joint position continuously, in a rubber hand illusion task. Fifteen able-bodied participants and three trans-radial amputees were instructed to contract their wrist flexors/extensors alternately and to watch the robotic arm move. In both groups, the sense of ownership was extended to the robotic arm when the wrists of the real and robotic arms were flexed/extended synchronously, with a smaller effect when they moved in opposite directions. Both groups also experienced a sense of agency over the robotic arm. These results suggest that these experimental settings induced successful incorporation of the prosthesis, at least for the amputees who took part in the present study.
Collapse
|
45
|
'Unisurgeon' uniportal video-assisted thoracoscopic surgery lobectomy. J Vis Surg 2017; 3:163. [PMID: 29302439 DOI: 10.21037/jovs.2017.10.07] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2017] [Accepted: 10/10/2017] [Indexed: 11/06/2022]
Abstract
Video-assisted thoracoscopic surgery (VATS) for major pulmonary resections has evolved, in a period of only 7 years, from 3-4 incisions to a single-incision approach. However, the uniportal VATS approach differs from other forms of minimally invasive thoracic surgery, and the technique of lung exposure and stapler insertion through a single hole should be learned step by step. The main advances in uniportal VATS during the last years relate to improvements in surgical technique, evolving toward a concept of "advanced VATS instrumentation", and the implementation of new technology. One recent advance in uniportal VATS is the possibility of using a robotic or pneumatic articulated arm that holds the camera steady and requires no surgical assistant. This is called "unisurgeon uniportal VATS", in which the surgeon has more freedom of movement and the fatigue of an assistant holding the camera is eliminated. We are still at the beginning of the "unisurgeon era", which will probably become more popular in the coming years thanks to the implementation of wireless cameras and graspers controlled magnetically.
Collapse
|
46
|
Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals. Front Neurorobot 2017; 11:55. [PMID: 29118699 PMCID: PMC5660981 DOI: 10.3389/fnbot.2017.00055] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 09/27/2017] [Indexed: 11/16/2022] Open
Abstract
Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector, whose magnitude is linearly or quadratically related to the force input magnitude. With the robotic arm from the open-source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of its global usability in the context of the task. According to the performance metrics and questionnaire results, the velocity control modes performed better than the position control mode in terms of accuracy and quality of control as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even if these two modes were not clearly distinguishable from one another in terms of performance and usability assessment. These results highlight the need to take user experience into account as one of the key criteria for the design of control modes intended to operate limb prostheses.
Collapse
|
47
|
Technological aids in uniportal video-assisted thoracoscopic surgery. J Vis Surg 2017; 3:29. [PMID: 29078592 DOI: 10.21037/jovs.2017.01.05] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Accepted: 12/11/2016] [Indexed: 11/06/2022]
Abstract
With the evolution of uniportal video-assisted thoracoscopic surgery (VATS), technological aids have come to help skilled surgeons improve results in thoracic surgery and make complex surgery feasible. These aids fall into three important groups, which make surgical steps easier to perform while reducing operative time and surgical accidents in the hands of experienced surgeons: (I) conventional thoracoscopic instruments; (II) sealing devices used in uniportal VATS; (III) high-definition cameras, prototype robotic arms, and future robotic aids for uniportal VATS surgery. Uniportal VATS is an example of the continuing search for methods that aim to provide the patient with a surgical cure of the disease at the lowest morbidity. That is why companies are creating more and newer technologies, but the surgeon has to choose properly and to know how, when and where to use each new aid in order to avoid mistakes. The future of thoracic surgery is based on the evolution of surgical procedures and on innovations that try to reduce surgical and anesthetic trauma even further. This article summarizes the technological aids that help thoracoscopic surgeons perform uniportal VATS feasibly and safely.
Collapse
|
48
|
Neurosurgical robotic arm drilling navigation system. Int J Med Robot 2016; 13. [PMID: 27910205 DOI: 10.1002/rcs.1790] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2016] [Revised: 10/22/2016] [Accepted: 10/23/2016] [Indexed: 11/11/2022]
Abstract
BACKGROUND The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone-drilling process. METHODS The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation; 3D medical-imaging-based surgical planning that could identify the lesion location and plan the surgical path on 3D images; and automatic bone-drilling control that stopped drilling when the bone was about to be drilled through. Three kinds of experiments were designed. RESULTS The average positioning error of the robotic arm, deduced from 3D images, was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). CONCLUSIONS The system may be useful for neurosurgical robotic arm drilling navigation.
Collapse
|
49
|
[Brachytherapy needle steering using intra-tissue real-time ultrasound 3d visualization]. UROLOGIIA (MOSCOW, RUSSIA : 1999) 2016:95-99. [PMID: 28248051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
AIM To explore, in a model experiment, the capability of the developed software for 3D ultrasound imaging of tumors in a pelvic tissue phantom to steer a brachytherapy needle using a 6-axis robotic arm. MATERIALS AND METHODS The experiment employed a six-axis robotic arm with a device for moving the needle, a phantom with a tumor model, and an ultrasound scanner with a biplane transducer. Controlled by the developed software, the robotic arm automatically inserted the needle into the phantom. At all stages of insertion, the needle's position in the phantom was continuously tracked using data obtained by the ultrasonic transducer. RESULTS The software for intra-tissue ultrasound imaging was developed and tested to steer a brachytherapy needle using a US scanner coupled with the robotic system, providing 3D tumor modeling within the pelvic tissue phantom. In the course of the operation, the program corrects the existing model using current US images, taking into account any shifting and swelling of the prostate. CONCLUSION The model experiment proved the operational capability of the proposed method of 3D tumor modeling within a pelvic tissue phantom and of tracking needle movement in the phantom in real time using a US scanner coupled with a robotic system for brachytherapy. Further development of the software, providing ultrasound image processing and automatic correction of the brachytherapy needle trajectory, will complete preclinical studies of the robotic arm and warrant clinical trials.
Collapse
|
50
|
Innovation in robotic surgery: the Indian scenario. J Minim Access Surg 2015; 11:106-10. [PMID: 25598610 PMCID: PMC4290110 DOI: 10.4103/0972-9941.147724] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Accepted: 12/28/2014] [Indexed: 11/17/2022] Open
Abstract
Robotics is a science. In scientific terms, a “robot” is an electromechanical arm device with a computer interface: a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, for laparoscopy that controls the telescope-camera unit electromechanically and then, through a computer interface, by voice. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution to the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.
Collapse
|