1
Fang H, Xie X, Jing K, Liu S, Chen A, Wu D, Zhang L, Tian H. A Flexible Dual-Mode Photodetector for Human-Machine Collaborative IR Imaging. Nano-Micro Letters 2025; 17:229. [PMID: 40272611] [PMCID: PMC12021759] [DOI: 10.1007/s40820-025-01758-5]
Abstract
Photothermoelectric (PTE) photodetectors, with their self-powered and uncooled advantages, have attracted much interest due to wide application prospects in military and civilian fields. However, traditional PTE photodetectors lack mechanical flexibility and cannot operate independently of test instruments. Herein, we present a flexible PTE photodetector capable of dual-mode output, combining electrical and optical signal generation for enhanced functionality. Using solution processing, high-quality MXene thin films are assembled on asymmetric electrodes as the photosensitive layer. The geometrically asymmetric electrode design significantly enhances the responsivity, achieving 0.33 mA W-1 under infrared illumination, twice that of the symmetrical configuration. This improvement stems from optimized photothermal conversion and an expanded temperature gradient. The PTE device maintains stable performance after 300 bending cycles, demonstrating excellent flexibility. A new energy conversion pathway is established by coupling the photothermal conversion of MXene with thermochromic composite materials, enabling real-time visualization of invisible infrared radiation. Leveraging this functionality, we demonstrate the first human-machine collaborative infrared imaging system, wherein dual-mode photodetector arrays synchronously generate human-readable and machine-readable patterns. Our study not only provides a new solution for the functional integration of flexible photodetectors, but also sets a new benchmark for human-machine collaborative optoelectronics.
Affiliation(s)
- Huajing Fang
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Xinxing Xie
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Kai Jing
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Shaojie Liu
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Ainong Chen
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Daixuan Wu
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, People's Republic of China
- Liyan Zhang
- Center for Advancing Materials Performance From the Nanoscale (CAMP-Nano), State Key Laboratory for Mechanical Behavior of Materials, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- He Tian
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, People's Republic of China
2
Zare M, Kebria PM, Khosravi A, Nahavandi S. A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges. IEEE Transactions on Cybernetics 2024; 54:7173-7186. [PMID: 39024072] [DOI: 10.1109/tcyb.2024.3395626]
Abstract
In recent years, the development of robotics and artificial intelligence (AI) systems has been nothing short of remarkable. As these systems continue to evolve, they are being utilized in increasingly complex and unstructured environments, such as autonomous driving, aerial robotics, and natural language processing. As a consequence, programming their behaviors manually or defining their behavior through reward functions [as done in reinforcement learning (RL)] has become exceedingly difficult. This is because such environments require a high degree of flexibility and adaptability, making it challenging to specify an optimal set of rules or reward signals that can account for all possible situations. In such environments, learning from an expert's behavior through imitation is often more appealing. This is where imitation learning (IL) comes into play: a process where desired behavior is learned by imitating an expert's behavior, provided through demonstrations. This article aims to provide an introduction to IL and an overview of its underlying assumptions and approaches. It also offers a detailed description of recent advances and emerging areas of research in the field. Additionally, this article discusses how researchers have addressed common challenges associated with IL and provides potential directions for future research. Overall, the goal of this article is to provide a comprehensive guide to the growing field of IL in robotics and AI.
3
Faroni M, Umbrico A, Beschi M, Orlandini A, Cesta A, Pedrocchi N. Optimal Task and Motion Planning and Execution for Multiagent Systems in Dynamic Environments. IEEE Transactions on Cybernetics 2024; 54:3366-3377. [PMID: 37053056] [DOI: 10.1109/tcyb.2023.3263380]
Abstract
Combining symbolic and geometric reasoning in multiagent systems is a challenging task that involves planning, scheduling, and synchronization problems. Existing works overlooked the variability of task duration and geometric feasibility intrinsic to these systems because of the interaction between agents and the environment. We propose a combined task and motion planning approach to optimize the sequencing, assignment, and execution of tasks under temporal and spatial variability. The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task. At the task level, timeline-based planning deals with temporal constraints, duration variability, and synergic assignment of tasks. At the action level, online motion planning plans for the actual movements dealing with environmental changes. We demonstrate the approach's effectiveness in a collaborative manufacturing scenario, in which a robotic arm and a human worker shall assemble a mosaic in the shortest time possible. Compared with existing works, our approach applies to a broader range of applications and reduces the execution time of the process.
4
Varrecchia T, Chini G, Tarbouriech S, Navarro B, Cherubini A, Draicchio F, Ranavolo A. The assistance of BAZAR robot promotes improved upper limb motor coordination in workers performing an actual use-case manual material handling. Ergonomics 2023; 66:1950-1967. [PMID: 36688620] [DOI: 10.1080/00140139.2023.2172213]
Abstract
This study aims at evaluating upper limb muscle coordination and activation in workers performing an actual use-case manual material handling (MMH) task. The study compares the workers' muscular activity while they perform the task with and without the help of a dual-arm cobot (BAZAR). Eleven participants performed the task, and the flexor and extensor muscles of the shoulder, elbow, wrist, and trunk joints were recorded using bipolar electromyography. The results showed that, when the MMH task was carried out with BAZAR, both upper limb and trunk muscular co-activation and activation decreased. Therefore, technologies enabling human-robot collaboration (HRC), which share a workspace with employees, relieve employees of external loads and enhance the effectiveness and quality of task completion. Additionally, these technologies improve the worker's coordination, lessen the physical effort required to interact with the robot, and have a favourable impact on his or her physiological motor strategy. Practitioner summary: Upper limb and trunk muscle co-activation and activation are reduced when a specific manual material handling task is performed with a cobot rather than without it. By improving coordination, reducing physical effort, and changing motor strategy, cobots could be proposed as an ergonomic intervention to lower workers' biomechanical risk in industry.
Affiliation(s)
- Tiwana Varrecchia
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Rome, Italy
- Giorgia Chini
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Rome, Italy
- Francesco Draicchio
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Rome, Italy
- Alberto Ranavolo
- Department of Occupational and Environmental Medicine, Epidemiology and Hygiene, INAIL, Rome, Italy
5
Li G, Li Z, Su CY, Xu T. Active Human-Following Control of an Exoskeleton Robot With Body Weight Support. IEEE Transactions on Cybernetics 2023; 53:7367-7379. [PMID: 37030717] [DOI: 10.1109/tcyb.2023.3253181]
Abstract
This article presents an active human-following control of a lower limb exoskeleton for gait training. First, to improve safety while accounting for human balance, OpenPose-based visual feedback is used to estimate the individual's pose, and an active human-following algorithm is proposed for the exoskeleton robot to achieve body weight support and active human-following. Second, taking the human's intention and voluntary effort into account, we develop a long short-term memory (LSTM) network that extracts surface electromyography (sEMG) features to build a joint-angle estimation model; that is, multichannel sEMG signals are correlated with the flexion/extension (FE) joint angles of the human lower limb. Finally, to make the robot motion adapt to the locomotion of subjects under uncertain nonlinear dynamics, an adaptive control strategy is designed to drive the exoskeleton robot to track the desired locomotion trajectories stably. To verify the effectiveness of the proposed control framework, several recruited subjects participated in experiments. Experimental results show that the proposed LSTM-based joint-angle estimation model achieves higher estimation accuracy and predictive performance than existing deep neural networks, and good simultaneous locomotion tracking is achieved by the designed control strategy, indicating that the proposed control can assist subjects in performing gait training effectively.
6
Cherubini A, Navarro B, Passama R, Tarbouriech S, Elprama SA, Jacobs A, Niehaus S, Wischniewski S, Tönis FJ, Siahaya PL, Chini G, Varrecchia T, Ranavolo A. Interdisciplinary evaluation of a robot physically collaborating with workers. PLoS One 2023; 18:e0291410. [PMID: 37819889] [PMCID: PMC10566690] [DOI: 10.1371/journal.pone.0291410]
Abstract
Collaborative robots (CoBots) are emerging as a promising technological aid for workers. To date, most CoBots merely share their workspace with their human partners, or collaborate without contact. We claim that robots would be much more beneficial if they physically collaborated with the worker on high-payload tasks. To move high payloads while remaining safe, the robot should use two or more lightweight arms. In this work, we address the following question: to what extent can robots help workers in physical human-robot collaboration tasks? To find an answer, we gathered an interdisciplinary group, spanning from an industrial end user to cognitive ergonomists, and including biomechanicians and roboticists. We drew inspiration from an industrial process performed repetitively by workers of the SME HANKAMP (Netherlands). Eleven participants replicated the process, without and with the help of a robot. During the task, we monitored the participants' biomechanical activity. After the task, the participants completed a survey with usability and acceptability measures; seven workers of the SME completed the same survey. The results of our research are the following. First, by applying, for the first time in collaborative robotics, Potvin's method, we show that the robot substantially reduces the participants' muscular effort. Second, we design and present an unprecedented method for measuring the robot's reliability and reproducibility in collaborative scenarios. Third, by correlating the worker's effort with the power measured by the robot, we show that the two agents act in energetic synergy. Fourth, a participant's increasing level of experience with robots shifts his or her focus from the robot's overall functionality towards finer expectations. Last but not least, workers and participants are willing to work with the robot and think it is useful.
Affiliation(s)
- An Jacobs
- IMEC-SMIT-Vrije Universiteit Brussel, Brussels, Belgium
- Susanne Niehaus
- Federal Institute of Occupational Safety and Health, Dortmund, Germany
7
Rayyes R. Intrinsic motivation learning for real robot applications. Front Robot AI 2023; 10:1102438. [PMID: 36845331] [PMCID: PMC9950409] [DOI: 10.3389/frobt.2023.1102438]
Affiliation(s)
- Rania Rayyes
- Institut für Fördertechnik und Logistiksysteme, Karlsruher Institut für Technologie, Karlsruhe, Germany
8
Orsag L, Stipancic T, Koren L. Towards a Safe Human-Robot Collaboration Using Information on Human Worker Activity. Sensors (Basel) 2023; 23:1283. [PMID: 36772323] [PMCID: PMC9920522] [DOI: 10.3390/s23031283]
Abstract
Most industrial workplaces involving robots and other apparatus operate behind fences to avoid defects, hazards, or casualties. Recent advancements in machine learning can enable robots to co-operate with human co-workers while retaining safety, flexibility, and robustness. This article focuses on a computation model that provides a collaborative environment through intuitive and adaptive human-robot interaction (HRI). In essence, one layer of the model can be expressed as a set of useful information utilized by an intelligent agent, and within this construction, a vision-sensing modality can be broken down into multiple layers. The authors propose a human-skeleton-based trainable model for the recognition of spatiotemporal human worker activity using LSTM networks, which achieves a training accuracy of 91.365% on the InHARD dataset. Together with the training results, aspects of the simulation environment and future improvements of the system are discussed. By combining human worker upper-body positions with actions, the perceptual potential of the system is increased, and human-robot collaboration becomes context-aware. Based on the acquired information, the intelligent agent gains the ability to adapt its behavior to its dynamic and stochastic surroundings.
9
Yu J, Gao H, Zhou D, Liu J, Gao Q, Ju Z. Deep Temporal Model-Based Identity-Aware Hand Detection for Space Human-Robot Interaction. IEEE Transactions on Cybernetics 2022; 52:13738-13751. [PMID: 34673499] [DOI: 10.1109/tcyb.2021.3114031]
Abstract
Hand detection is a crucial technology for space human-robot interaction (SHRI), and awareness of hand identities is particularly critical. However, most advanced works have three limitations: 1) low detection accuracy for small-size objects; 2) insufficient temporal feature modeling between frames in videos; and 3) the inability to detect in real time. In this article, a temporal detector (called TA-RSSD) is proposed based on the SSD and spatiotemporal long short-term memory (ST-LSTM) for real-time detection in SHRI applications. Next, based on online tubelet analysis, a real-time identity-awareness module is designed for multiple hand object identification. Several notable properties are as follows: 1) the hybrid structure of the ResNet-101 and the SSD improves the detection accuracy for small objects; 2) a three-level feature pyramidal structure retains rich semantic information without losing detailed information; 3) a group of redesigned temporal attentional LSTMs (TA-LSTMs) is utilized for three-level feature map modeling, which effectively achieves background suppression and scale suppression; 4) low-level attention maps are used to eliminate in-class similarity between hand objects, which improves the accuracy of identity awareness; and 5) a novel association training scheme enhances the temporal coherence between frames. The proposed model is evaluated on the SHRI-VID dataset (collected according to the task requirements), the AU-AIR dataset, and the ImageNet-VID benchmark. Extensive ablation studies and comparisons of detection and identity-awareness capacities show the superiority of the proposed model. Finally, a set of actual tests is conducted on a space robot, and the results show that the proposed model achieves real-time speed and high accuracy.
10
Qiao H, Chen J, Huang X. A Survey of Brain-Inspired Intelligent Robots: Integration of Vision, Decision, Motion Control, and Musculoskeletal Systems. IEEE Transactions on Cybernetics 2022; 52:11267-11280. [PMID: 33909584] [DOI: 10.1109/tcyb.2021.3071312]
Abstract
Current robotic studies are focused on the performance of specific tasks. However, such tasks cannot be generalized, and some special tasks, such as compliant and precise manipulation, fast and flexible response, and deep collaboration between humans and robots, cannot be realized. Brain-inspired intelligent robots imitate humans and animals, from inner mechanisms to external structures, through an integration of visual cognition, decision making, motion control, and musculoskeletal systems. This kind of robot is more likely to realize the functions that current robots cannot and to become a friend of humans. With a focus on the development of brain-inspired intelligent robots, this article reviews cutting-edge research in the areas of brain-inspired visual cognition, decision making, musculoskeletal robots, motion control, and their integration. It aims to provide greater insight into brain-inspired intelligent robots and to attract more attention to this field from the global research community.
11
Popov D, Klimchik A, Pashkevich A. Robustness of Interaction Parameters Identification Technique for Collaborative Robots. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3188886]
12
Zhao J, Yu Y, Wang X, Ma S, Sheng X, Zhu X. A musculoskeletal model driven by muscle synergy-derived excitations for hand and wrist movements. J Neural Eng 2022; 19. [PMID: 34986472] [DOI: 10.1088/1741-2552/ac4851]
Abstract
OBJECTIVE: Musculoskeletal models (MMs) driven by electromyography (EMG) signals have been identified as a promising approach to predicting human motions in the control of prostheses and robots. However, muscle excitations in MMs are generally derived from the EMG signals of the sensor covering the targeted muscle, inconsistent with the fact that a sensor's signals come from multiple muscles owing to crosstalk in actual situations. To identify more accurate muscle excitations for MMs in the presence of crosstalk, we propose a novel excitation-extracting method inspired by muscle synergy for simultaneously estimating hand and wrist movements. APPROACH: Muscle excitations were first extracted using a two-step muscle synergy-derived method. Specifically, we calculated a subject-specific muscle weighting matrix and corresponding profiles according to the contributions of different muscles to movements, derived from the synergistic motion relation. The improved excitations were then used to simultaneously estimate hand and wrist movements through musculoskeletal modeling. Moreover, an offline comparison among the proposed method, a traditional MM, and regression methods, as well as an online test of the proposed method, were conducted. MAIN RESULTS: The offline experiments demonstrated that the proposed approach outperformed the EMG envelope-driven MM and three regression models, with higher R and lower NRMSE. Furthermore, the comparison of the excitations of the two MMs validated the effectiveness of the proposed approach in extracting muscle excitations in the presence of crosstalk. The online test further indicated the superior performance of the proposed method over the MM driven by EMG envelopes. SIGNIFICANCE: The proposed excitation-extracting method identified more accurate neural commands for MMs, providing a promising approach in rehabilitation and robot control to model the transformation from surface EMG to joint kinematics.
Affiliation(s)
- Jiamin Zhao
- School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
- Yang Yu
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai 200240, China
- Xu Wang
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
- Shihan Ma
- School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
- Xinjun Sheng
- School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
- Xiangyang Zhu
- School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
13
Pose control of constrained redundant arm using recurrent neural networks and one-iteration computing algorithm. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.108007]
14
Connan M, Sierotowicz M, Henze B, Porges O, Albu-Schaeffer A, Roa M, Castellini C. Learning to teleoperate an upper-limb assistive humanoid robot for bimanual daily-living tasks. Biomed Phys Eng Express 2021; 8. [PMID: 34757953] [DOI: 10.1088/2057-1976/ac3881]
Abstract
Objective. Bimanual humanoid platforms for home assistance are nowadays available, both as academic prototypes and commercially. Although they are usually thought of as daily helpers for non-disabled users, their ability to move around, together with their dexterity, makes them ideal assistive devices for upper-limb disabled persons, too. Indeed, teleoperating a bimanual robotic platform via muscle activation could revolutionize the way stroke survivors, amputees and patients with spinal injuries solve their daily home chores. Moreover, with respect to direct prosthetic control, teleoperation has the advantage of freeing the user from the burden of the prosthesis itself, overcoming several limitations regarding size, weight, or integration, and thus enabling a much higher level of functionality. Approach. In this study, nine participants, two of whom suffer from severe upper-limb disabilities, teleoperated a humanoid assistive platform, performing complex bimanual tasks requiring high precision and bilateral arm/hand coordination, simulating home/office chores. A wearable body posture tracker was used for position control of the robotic torso and arms, while interactive machine learning applied to electromyography of the forearms helped the robot build an increasingly accurate model of the participant's intent over time. Main results. All participants, irrespective of their disability, were uniformly able to perform the demanded tasks. Completion times, subjective evaluation scores, as well as energy- and time-efficiency show improvement over time in the short and long term. Significance. This is the first time a hybrid setup, involving myoelectric and inertial measurements, has been used by disabled people to teleoperate a bimanual humanoid robot. The proposed setup, taking advantage of interactive machine learning, is simple, non-invasive, and offers a new assistive solution for disabled people in their home environment. Additionally, it has the potential to be used in several other applications in which fine humanoid robot control is required.
Affiliation(s)
- Mathilde Connan
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Marek Sierotowicz
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Bernd Henze
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Oliver Porges
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Alin Albu-Schaeffer
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Maximo Roa
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
- Claudio Castellini
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Robotik und Mechatronik, Münchener Strasse 20, 82234 Oberpfaffenhofen-Wessling, Germany
15
Angleraud A, Mehman Sefat A, Netzev M, Pieters R. Coordinating Shared Tasks in Human-Robot Collaboration by Commands. Front Robot AI 2021; 8:734548. [PMID: 34738018] [PMCID: PMC8560701] [DOI: 10.3389/frobt.2021.734548]
Abstract
Human-robot collaboration is gaining more and more interest in industrial settings, as collaborative robots are considered safe and robot actions can be programmed easily by, for example, physical interaction. Despite this, robot programming mostly focuses on automated robot motions, and interactive tasks or coordination between human and robot still require additional development. For example, the selection of which tasks or actions a robot should do next might not be known beforehand or might change at the last moment. Within a human-robot collaborative setting, the coordination of complex shared tasks is therefore better suited to a human, with the robot acting upon requested commands. In this work we explore the use of commands to coordinate a shared task between a human and a robot in a shared workspace. Based on a known set of higher-level actions (e.g., pick-and-place, hand-over, kitting) and the commands that trigger them, both a speech-based and a graphical command-based interface are developed to investigate their use. While speech-based interaction might be more intuitive for coordination, in industrial settings background sounds and noise might hinder its capabilities. The graphical command-based interface circumvents this, while still demonstrating the capabilities of coordination. The developed architecture follows a knowledge-based approach, where the actions available to the robot are checked at runtime as to whether they suit the task and the current state of the world. Experimental results on industrially relevant assembly, kitting, and hand-over tasks in a laboratory setting demonstrate that graphical command-based and speech-based coordination with high-level commands is effective for collaboration between a human and a robot.
Affiliation(s)
- Alexandre Angleraud
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Amir Mehman Sefat
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Metodi Netzev
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Roel Pieters
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
16
Trends of Human-Robot Collaboration in Industry Contexts: Handover, Learning, and Metrics. Sensors 2021; 21:4113. [PMID: 34203766] [PMCID: PMC8232712] [DOI: 10.3390/s21124113]
Abstract
Repetitive industrial tasks can be easily performed by traditional robotic systems. However, many other tasks require cognitive knowledge that only humans can provide. Human-Robot Collaboration (HRC) emerges as an ideal concept of co-working between a human operator and a robot, representing one of the most significant subjects for human-life improvement. The ultimate goal is to achieve physical interaction, where handing over an object plays a crucial role in effective task accomplishment. Considerable research work has been developed in this particular field in recent years, and several solutions have already been proposed. Nonetheless, some particular issues regarding Human-Robot Collaboration still hold an open path to truly important research improvements. This paper provides a literature overview, defining the HRC concept, enumerating the distinct human-robot communication channels, and discussing the physical interaction that this collaboration entails. Moreover, future challenges for a natural and intuitive collaboration are exposed: the machine must behave like a human, especially in the pre-grasping/grasping phases, and the handover procedure should be fluent and bidirectional for articulated function development. These are the focus of near-future investigation aiming to shed light on the complex combination of predictive and reactive control mechanisms promoting coordination and understanding. Following recent progress in artificial intelligence, learning exploration stands as the key element allowing the generation of coordinated actions and their shaping by experience.
|
17
|
Hamaya M, Tanaka K, Shibata Y, von Drigalski F, Nakashima C, Ijiri Y. Robotic Learning From Advisory and Adversarial Interactions Using a Soft Wrist. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3067232] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
18
|
Hua J, Zeng L, Li G, Ju Z. Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning. SENSORS 2021; 21:s21041278. [PMID: 33670109 PMCID: PMC7916895 DOI: 10.3390/s21041278] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 02/01/2021] [Accepted: 02/05/2021] [Indexed: 11/16/2022]
Abstract
Dexterous manipulation is an important part of realizing robot intelligence, but current manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots with the capability of autonomous decision-making and learning. The paper first reviews the main achievements and research on robots, which were mainly based on breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for highly complex tasks to be performed by robots. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
Affiliation(s)
- Jiang Hua
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; (J.H.); (L.Z.); (G.L.)
- Liangcai Zeng
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; (J.H.); (L.Z.); (G.L.)
- Gongfa Li
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; (J.H.); (L.Z.); (G.L.)
- Zhaojie Ju
- School of Computing, University of Portsmouth, Portsmouth 03801, UK
- Correspondence:
|