1. Cao J, Chen N. The Influence of Robots' Fairness on Humans' Reward-Punishment Behaviors and Trust in Human-Robot Cooperative Teams. Hum Factors 2024; 66:1103-1117. [PMID: 36218282] [DOI: 10.1177/00187208221133272]
Abstract
OBJECTIVE Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interactions. BACKGROUND In human-robot teamwork, robots show fair behaviors, dedication (altruistic unfair behaviors), and selfishness (self-interested unfair behaviors), but few studies have discussed the effects of these behaviors on teamwork. METHOD This study adopts a 3 (robot's fairness: self-interested unfair behaviors, fair behaviors, and altruistic unfair behaviors) × 3 (robot's social status: superior, peer, and subordinate) experimental design. Each participant and a robot completed the experimental task together through a computer. RESULTS Across social statuses, the more altruistic the robot's behavior, the more reward behaviors, the fewer punishment behaviors, and the greater the trust that humans exhibited. A robot's higher social status weakens the influence of its fairness on humans' punishment behaviors. Human-robot trust increases humans' reward behaviors and decreases humans' punishment behaviors. Humans' reward-punishment behaviors, in turn, increase repaired human-robot trust. CONCLUSION Robots' fairness has a significant impact on humans' reward-punishment behaviors and trust. Robots' social status moderates the effect of their fair behavior on humans' punishment behavior. There is an interaction between humans' reward-punishment behaviors and trust. APPLICATION The study can help to better understand the interaction mechanisms of human-robot teams and can support their management and cooperation through appropriate adjustment of robots' fairness and social status.
Affiliation(s)
- Jiajia Cao
- Beijing University of Chemical Technology, Beijing, China
- Na Chen
- Beijing University of Chemical Technology, Beijing, China
2. Fath A, Liu Y, Xia T, Huston D. MARSBot: A Bristle-Bot Microrobot with Augmented Reality Steering Control for Wireless Structural Health Monitoring. Micromachines (Basel) 2024; 15:202. [PMID: 38398932] [PMCID: PMC10891813] [DOI: 10.3390/mi15020202]
Abstract
Microrobots are effective for monitoring infrastructure in narrow spaces. However, they have limited computing power, and most of them are not wireless and stable enough for accessing infrastructure in difficult-to-reach areas. In this paper, we describe the fabrication of a microrobot with bristle-bot locomotion using a novel centrifugal yaw-steering control scheme. The microrobot operates in a network consisting of an augmented reality headset and an access point to monitor infrastructures using augmented reality (AR) haptic controllers for human-robot collaboration. For the development of the microrobot, the dynamics of bristle-bots in several conditions were studied, and multiple additive manufacturing processes were investigated to develop the most suitable prototype for structural health monitoring. Using the proposed network, visual data are sent in real time to a hub connected to an AR headset upon request, which can be utilized by the operator to monitor and make decisions in the field. This allows the operators wearing an AR headset to inspect the exterior of a structure with their eyes, while controlling the surveying robot to monitor the interior side of the structure.
Affiliation(s)
- Alireza Fath
- Department of Mechanical Engineering, University of Vermont, Burlington, VT 05405, USA
- Yi Liu
- Department of Mechanical Engineering, University of Vermont, Burlington, VT 05405, USA
- Tian Xia
- Department of Electrical and Biomedical Engineering, University of Vermont, Burlington, VT 05405, USA
- Dryver Huston
- Department of Mechanical Engineering, University of Vermont, Burlington, VT 05405, USA
3. Shi Y, Zhu P, Wang T, Mai H, Yeh X, Yang L, Wang J. Dynamic Virtual Fixture Generation Based on Intra-Operative 3D Image Feedback in Robot-Assisted Minimally Invasive Thoracic Surgery. Sensors (Basel) 2024; 24:492. [PMID: 38257585] [PMCID: PMC10820968] [DOI: 10.3390/s24020492]
Abstract
This paper proposes a method for generating dynamic virtual fixtures with real-time 3D image feedback to facilitate human-robot collaboration in medical robotics. Seamless shared control in a dynamic environment, like that of a surgical field, remains challenging despite extensive research on collaborative control and planning. To address this problem, our method dynamically creates virtual fixtures to guide the manipulation of a trocar-placing robot arm using the force field generated by point cloud data from an RGB-D camera. Additionally, the "view scope" concept selectively determines the region for computational points, thereby reducing computational load. In a phantom experiment for robot-assisted port incision in minimally invasive thoracic surgery, our method demonstrates substantially improved accuracy for port placement, reducing error and completion time by 50% (p = 1.06 × 10^-2) and 35% (p = 3.23 × 10^-2), respectively. These results suggest that our proposed approach is promising in improving surgical human-robot collaboration.
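To make the force-field mechanism concrete, the following is a minimal sketch of a repulsive virtual fixture computed from point-cloud data; the function name, the gain, and the 5 cm "view scope" radius are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def virtual_fixture_force(tool_tip, cloud, view_radius=0.05, gain=2.0):
    """Repulsive force on the tool tip from nearby point-cloud points.

    tool_tip:    (3,) position of the trocar-placing tool tip [m]
    cloud:       (N, 3) RGB-D point cloud of the surgical field [m]
    view_radius: only points inside this "view scope" contribute,
                 which bounds the computational load
    gain:        force-field stiffness (illustrative value)
    """
    offsets = tool_tip - cloud                 # vectors from points to tip
    dists = np.linalg.norm(offsets, axis=1)
    near = dists < view_radius                 # "view scope" selection
    if not near.any():
        return np.zeros(3)
    d = dists[near][:, None]
    # force grows as the tip approaches a point, pushing it away
    return gain * np.sum((view_radius - d) * offsets[near] / d, axis=0)
```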
Affiliation(s)
- Yunze Shi
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Peizhang Zhu
- Flexiv Ltd., Santa Clara, CA 95054, USA
- Tengyue Wang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haonan Mai
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Xiyang Yeh
- Flexiv Ltd., Santa Clara, CA 95054, USA
- Liangjing Yang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Jingfan Wang
- Flexiv Ltd., Santa Clara, CA 95054, USA
4. Zhang Q, Liu Q, Duan J, Qin J. Research on Teleoperated Virtual Reality Human-Robot Five-Dimensional Collaboration System. Biomimetics (Basel) 2023; 8:605. [PMID: 38132544] [PMCID: PMC10741399] [DOI: 10.3390/biomimetics8080605]
Abstract
In the realm of industrial robotics, simplifying human-robot collaboration (HRC) is a growing challenge, particularly in complex settings, and the demand for more intuitive teleoperation systems is rising. However, optimizing robot control interfaces and streamlining teleoperation remain formidable tasks because operators must possess specialized knowledge and traditional methods are limited in operational space and time. This study addresses these issues by introducing a virtual reality (VR) HRC system with five-dimensional capabilities. Key advantages of our approach include: (1) real-time observation of robot work, whereby operators can seamlessly monitor the robot's real-time work environment and motion during teleoperation; (2) leveraging VR device capabilities, whereby the strengths of VR devices are harnessed to simplify robot motion control, significantly reducing the learning time for operators; and (3) adaptability across platforms and environments, whereby the system effortlessly adapts to various platforms and working conditions, ensuring versatility across different terminals and scenarios. This system represents a significant advancement in addressing the challenges of HRC, offering improved teleoperation, simplified control, and enhanced accessibility, particularly for operators with limited prior exposure to robot operation, and it elevates the overall HRC experience in complex scenarios.
Affiliation(s)
- Qinglei Zhang
- China Institute of FTZ Supply Chain, Shanghai Maritime University, Shanghai 201306, China
- Qinghao Liu
- Logistics Engineering College, Shanghai Maritime University, Shanghai 201306, China
- Jianguo Duan
- China Institute of FTZ Supply Chain, Shanghai Maritime University, Shanghai 201306, China
- Jiyun Qin
- China Institute of FTZ Supply Chain, Shanghai Maritime University, Shanghai 201306, China
5. Kana S, Gurnani J, Ramanathan V, Ariffin MZ, Turlapati SH, Campolo D. Learning Compliant Box-in-Box Insertion through Haptic-Based Robotic Teleoperation. Sensors (Basel) 2023; 23:8721. [PMID: 37960421] [PMCID: PMC10648443] [DOI: 10.3390/s23218721]
Abstract
In modern logistics, the box-in-box insertion task is representative of a wide range of packaging applications, and automating compliant object insertion is difficult due to challenges in modelling the object deformation during insertion. Learning from Demonstration (LfD) paradigms, which are frequently used in robotics to facilitate skill transfer from humans to robots, can be one solution for complex tasks that are difficult to model mathematically. This study makes use of LfD techniques to automate the box-in-box insertion task for packaging applications. The proposed framework has three phases. Firstly, a master-slave teleoperated robot system is used to haptically demonstrate the insertion task. The learning phase then identifies trends in the demonstrated trajectories using probabilistic methods, in this case Gaussian Mixture Regression. In the third phase, the insertion task is generalised, and the robot adjusts to any object position using barycentric interpolation. This method is novel because it tackles tight insertion by taking advantage of the boxes' natural compliance, making it possible to complete the task even with a position-controlled robot. Experimental validation was carried out to determine whether the strategy is generalisable and repeatable.
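As a compact sketch of the learning phase, the following implements Gaussian Mixture Regression over time-indexed demonstration data; the data layout, component count, and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmr(t_query, data, n_components=5):
    """Gaussian Mixture Regression over demonstrated trajectories.

    data: (N, 1 + D) array of [time, position...] samples collected
          from the haptic teleoperation demonstrations.
    Returns the regressed position at each time in t_query.
    """
    gmm = GaussianMixture(n_components=n_components).fit(data)
    mu_t = gmm.means_[:, 0]                  # temporal means
    mu_x = gmm.means_[:, 1:]                 # spatial means
    s_tt = gmm.covariances_[:, 0, 0]         # time variance per component
    s_tx = gmm.covariances_[:, 0, 1:]        # time-space covariance

    out = []
    for t in np.atleast_1d(t_query):
        # responsibility of each Gaussian for this time step
        h = gmm.weights_ * norm.pdf(t, mu_t, np.sqrt(s_tt))
        h /= h.sum()
        # conditional mean E[x | t] per component, blended by h
        cond = mu_x + (t - mu_t)[:, None] * s_tx / s_tt[:, None]
        out.append(h @ cond)
    return np.array(out)
```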
Affiliation(s)
- Domenico Campolo
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
6. Yan Y, Su H, Jia Y. Modeling and Analysis of Human Comfort in Human-Robot Collaboration. Biomimetics (Basel) 2023; 8:464. [PMID: 37887595] [PMCID: PMC10604725] [DOI: 10.3390/biomimetics8060464]
Abstract
The emergence and recent development of collaborative robots have introduced a safer and more efficient human-robot collaboration (HRC) manufacturing environment. Since the introduction of COBOTs, a great deal of research effort has focused on improving robot working efficiency, user safety, human intention detection, etc., while one significant factor, human comfort, has frequently been ignored. The comfort factor is critical to COBOT users due to its great impact on user acceptance. Previous studies lack a mathematical-model-based approach to quantitatively describe and predict human comfort in HRC scenarios, and few have discussed cases in which multiple comfort factors take effect simultaneously. In this study, a multi-linear-regression-based general human comfort prediction model is proposed for human-robot collaboration scenarios, able to accurately predict human comfort levels in multi-factor situations; the proposed method tackles both of these gaps at the same time. The model uses subjective comfort-rating feedback from human subjects as training and testing data. Experiments were implemented, and the final results proved the effectiveness of the proposed approach in identifying human comfort levels in HRC, with an overall average prediction accuracy of 81.33% across all participants (maximum 88.94%, minimum 72.53%).
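As a rough illustration of how a multi-linear-regression comfort model of this kind can be set up (the comfort factors and numbers below are invented for the example and are not the study's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative factor matrix: each row is one HRC trial, each column a
# comfort factor (e.g., robot speed [m/s], separation distance [m],
# payload [kg]); y holds subjective comfort ratings on a 1-10 scale.
X = np.array([[0.2, 1.0, 1.0],
              [0.5, 0.8, 2.0],
              [1.0, 0.5, 2.0],
              [1.2, 0.3, 5.0]])
y = np.array([9.0, 7.5, 5.0, 3.0])

model = LinearRegression().fit(X, y)      # comfort ≈ w·x + b
print("weights:", model.coef_, "intercept:", model.intercept_)
print("predicted comfort:", model.predict([[0.8, 0.6, 3.0]]))
```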
Affiliation(s)
- Haotian Su
- Department of Automotive Engineering, International Center for Automotive Research, Clemson University, Greenville, SC 29607, USA
- Yunyi Jia
- Department of Automotive Engineering, International Center for Automotive Research, Clemson University, Greenville, SC 29607, USA
7. Muratore L, Laurenzi A, De Luca A, Bertoni L, Torielli D, Baccelliere L, Del Bianco E, Tsagarakis NG. A Unified Multimodal Interface for the RELAX High-Payload Collaborative Robot. Sensors (Basel) 2023; 23:7735. [PMID: 37765791] [PMCID: PMC10534361] [DOI: 10.3390/s23187735]
Abstract
This manuscript introduces a mobile cobot equipped with a custom-designed high-payload arm called RELAX, combined with a novel unified multimodal interface that facilitates Human-Robot Collaboration (HRC) tasks requiring high interaction forces on a real-world scale. The proposed multimodal framework combines physical interaction, Ultra Wide-Band (UWB) radio sensing, a Graphical User Interface (GUI), verbal control, and gesture interfaces, merging the benefits of all these modalities and allowing humans to accurately and efficiently command the RELAX mobile cobot and collaborate with it. The effectiveness of the multimodal interface is evaluated in scenarios where the operator guides RELAX to designated locations in the environment while avoiding obstacles and performing high-payload transportation tasks, again in a collaborative fashion. The results demonstrate that a human co-worker can productively complete complex missions and command the RELAX mobile cobot using the proposed multimodal interaction framework.
Affiliation(s)
- Luca Muratore
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Arturo Laurenzi
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Alessio De Luca
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università di Genova, 16145 Genova, Italy
- Liana Bertoni
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Department of Information Engineering (DII), Università di Pisa, 56122 Pisa, Italy
- Davide Torielli
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università di Genova, 16145 Genova, Italy
- Lorenzo Baccelliere
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- Edoardo Del Bianco
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
- DISI, Università di Trento, 38123 Trento, Italy
- Nikos G Tsagarakis
- Humanoids and Human-Centered Mechatronics Research Line, Istituto Italiano di Tecnologia (IIT), Via Morego 30, 16163 Genova, Italy
8. Briken K, Moore J, Scholarios D, Rose E, Sherlock A. Industry 5 and the Human in Human-Centric Manufacturing. Sensors (Basel) 2023; 23:6416. [PMID: 37514710] [PMCID: PMC10386219] [DOI: 10.3390/s23146416]
Abstract
Industry 4 (I4) was a revolutionary new stage for technological progress in manufacturing which promised a new level of interconnectedness between a diverse range of technologies. Sensors, as a point technology, play an important role in these developments, facilitating human-machine interaction and enabling data collection for system-level technologies. Concerns for human labour working in I4 environments (e.g., health and safety, data generation and extraction) are acknowledged by Industry 5 (I5), an update of I4 which promises greater attention to human-machine relations through a values-driven approach to collaboration and co-design. This article explores how engineering experts integrate values promoted by policy-makers into both their thinking about the human in their work and their writing. This paper demonstrates a novel interdisciplinary approach in which an awareness of different disciplinary epistemic values associated with humans and work guides a systematic literature review and interpretive coding of practice-focussed engineering papers. Findings demonstrate evidence of an I5 human-centric approach: a high value for employees as "end-users" of innovative systems in manufacturing, and an increase in output addressing human activity in modelling and the technologies available to address this concern. However, epistemic publishing practices show that efforts to increase the effectiveness of manufacturing systems often neglect worker voice.
Affiliation(s)
- Kendra Briken
- Department of Work, Employment and Organisation, Strathclyde Business School, University of Strathclyde, Glasgow G4 0QU, UK
- Jed Moore
- Department of Work, Employment and Organisation, Strathclyde Business School, University of Strathclyde, Glasgow G4 0QU, UK
- Dora Scholarios
- Department of Work, Employment and Organisation, Strathclyde Business School, University of Strathclyde, Glasgow G4 0QU, UK
- Emily Rose
- Law School, University of Strathclyde, Glasgow G4 0LT, UK
- Andrew Sherlock
- National Manufacturing Institute Scotland, Renfrew PA3 2EF, UK
9. Büsch L, Koch J, Schoepflin D, Schulze M, Schüppstuhl T. Towards Recognition of Human Actions in Collaborative Tasks with Robots: Extending Action Recognition with Tool Recognition Methods. Sensors (Basel) 2023; 23:5718. [PMID: 37420879] [PMCID: PMC10304406] [DOI: 10.3390/s23125718]
Abstract
This paper presents a novel method for online tool recognition in manual assembly processes. The goal was to develop and implement a method that can be integrated with existing Human Action Recognition (HAR) methods in collaborative tasks. We examined the state of the art for progress detection in manual assembly via HAR-based methods, as well as visual tool-recognition approaches. A novel online tool-recognition pipeline for handheld tools is introduced, utilizing a two-stage approach: first, a Region Of Interest (ROI) is extracted by determining the wrist position from skeletal data; this ROI is then cropped, and the tool located within it is classified. The pipeline can accommodate several object-recognition algorithms, demonstrating the generalizability of our approach. An extensive training dataset for tool recognition is presented and evaluated with two image-classification approaches. An offline pipeline evaluation was performed with twelve tool classes, and various online tests were conducted covering different aspects of this vision application, such as two assembly scenarios, unknown instances of known classes, and challenging backgrounds. The introduced pipeline was competitive with other approaches regarding prediction accuracy, robustness, diversity, extendability/flexibility, and online capability.
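A minimal sketch of the two-stage idea, assuming a skeleton tracker supplies the wrist pixel and that any image classifier can be plugged in; the names and ROI size are illustrative.

```python
import numpy as np

def tool_recognition(frame, wrist_xy, classifier, roi_size=128):
    """Two-stage tool recognition: crop a wrist-centred ROI, then classify.

    frame:      (H, W, 3) RGB camera image
    wrist_xy:   wrist pixel coordinates from the skeleton tracker
    classifier: any image classifier mapping an ROI to class scores,
                e.g. a CNN; being interchangeable is what makes the
                pipeline generalizable
    """
    h, w = frame.shape[:2]
    x, y = (int(c) for c in wrist_xy)
    half = roi_size // 2
    # clamp the ROI so it stays inside the image
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    roi = frame[y0:y1, x0:x1]
    scores = classifier(roi)
    return int(np.argmax(scores)), roi

# dummy usage: a stub classifier returning random scores for 12 tools
rng = np.random.default_rng(0)
frame = rng.random((480, 640, 3))
label, roi = tool_recognition(frame, wrist_xy=(320, 240),
                              classifier=lambda roi: rng.random(12))
```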
Affiliation(s)
- Lukas Büsch
- Hamburg University of Technology, Institute of Aircraft Production Technology, Denickestraße 17, 21073 Hamburg, Germany
10. Erratum: Flexible sensor concept and an integrated collision sensing for efficient human-robot collaboration using 3D local global sensors. Front Robot AI 2023; 10:1228130. [PMID: 37388718] [PMCID: PMC10304295] [DOI: 10.3389/frobt.2023.1228130]
Abstract
[This corrects the article DOI: 10.3389/frobt.2023.1028411.].
11. Kopácsi L, Baffy B, Baranyi G, Skaf J, Sörös G, Szeier S, Lőrincz A, Sonntag D. Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction. Sensors (Basel) 2023; 23:5126. [PMID: 37299853] [DOI: 10.3390/s23115126]
Abstract
Allocentric semantic 3D maps are highly useful for a variety of human-machine interaction tasks, since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or be missing for the participants due to their different perspectives, particularly that of a small robot, which differs significantly from a human's viewpoint. In order to overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives: starting with a partial 3D semantic reconstruction from the human perspective, we transfer and adapt it to the small robot's perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to the original one. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real-time, so the approach enables interactive applications.
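One way to picture the label-transfer step is the sketch below, which reprojects human-view labels into the robot view and lets SLIC superpixels fill unlabelled pixels by majority vote; the inputs and parameters are assumed for illustration, and this is not the authors' pipeline.

```python
import numpy as np
from skimage.segmentation import slic

def transfer_labels(robot_rgb, projected_labels, n_segments=400):
    """Propagate human-view semantic labels across a robot-view image.

    robot_rgb:        (H, W, 3) image from the low robot viewpoint
    projected_labels: (H, W) integer labels reprojected from the partial
                      human-view 3D reconstruction; 0 marks pixels with
                      no label yet
    Each SLIC superpixel takes the majority label of its already-labelled
    pixels, filling the gaps left by the reprojection.
    """
    segments = slic(robot_rgb, n_segments=n_segments, start_label=1)
    out = projected_labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        known = projected_labels[mask]
        known = known[known > 0]
        if known.size:                      # assign the majority vote
            out[mask] = np.bincount(known).argmax()
    return out
```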
Affiliation(s)
- László Kopácsi
- Department of Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- Benjámin Baffy
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- Gábor Baranyi
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- Joul Skaf
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- Szilvia Szeier
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- András Lőrincz
- Department of Artificial Intelligence, Eötvös Loránd University, 1053 Budapest, Hungary
- Daniel Sonntag
- Department of Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany
- Department of Applied Artificial Intelligence, University of Oldenburg, 26129 Oldenburg, Germany
12. Podgorelec D, Uran S, Nerat A, Bratina B, Pečnik S, Dimec M, Žaberl F, Žalik B, Šafarič R. LiDAR-Based Maintenance of a Safe Distance between a Human and a Robot Arm. Sensors (Basel) 2023; 23:4305. [PMID: 37177509] [PMCID: PMC10181461] [DOI: 10.3390/s23094305]
Abstract
This paper demonstrates the capabilities of three-dimensional (3D) LiDAR scanners in supporting a safe-distance-maintenance functionality in human-robot collaborative applications. Such sensors are severely under-utilised in collaborative work with heavy-duty robots; however, even with a relatively modest proprietary 3D sensor prototype, a respectable level of safety has been achieved, which should encourage the development of such applications in the future. The sensor's technical characteristics are presented, as well as its associated intelligent control system (ICS), which periodically acquires the positions of the robot and the human, optionally predicts their positions in the near future, and adjusts the robot's speed to keep its distance from the human above the protective separation distance. The main novelty is the possibility of loading an instance of the robot programme into the ICS, which then precomputes the future position and pose of the robot, providing higher accuracy and safety than traditional predictions from known real-time and near-past positions and poses. The use of a 3D LiDAR scanner in a speed and separation monitoring application and, particularly, its specific placement are also innovative and advantageous. The system was validated by visual analysis of videos taken by the reference validation camera, which confirmed its safe operation within reasonably limited ranges of robot and human speeds.
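The speed-adjustment logic can be pictured with a simple sketch; the distance thresholds and the linear ramp below are illustrative assumptions, not the ICS's actual control law.

```python
def adjust_robot_speed(d_current, v_max, d_protective=0.5, d_slow=1.5):
    """Scale the robot's speed from the measured human-robot distance.

    d_current:    latest LiDAR estimate of the human-robot distance [m]
    v_max:        nominal robot speed [m/s]
    d_protective: protective separation distance below which the robot
                  must stop (illustrative value, not the paper's)
    d_slow:       distance at which speed reduction begins
    """
    if d_current <= d_protective:
        return 0.0                            # stop: too close
    if d_current >= d_slow:
        return v_max                          # full speed: clear
    # linear ramp between the protective and slow-down distances
    return v_max * (d_current - d_protective) / (d_slow - d_protective)

print(adjust_robot_speed(1.0, v_max=0.8))     # 0.4 m/s at 1.0 m
```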
Affiliation(s)
- David Podgorelec
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Suzana Uran
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Andrej Nerat
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Božidar Bratina
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Sašo Pečnik
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Marjan Dimec
- FOKUS TECH d.o.o., Ulica Zofke Kvedrove 9, SI-3000 Celje, Slovenia
- Franc Žaberl
- FANUC ADRIA d.o.o., Ipavčeva ulica 21, SI-3000 Celje, Slovenia
- Borut Žalik
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Riko Šafarič
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
13. Orsag L, Stipancic T, Koren L. Towards a Safe Human-Robot Collaboration Using Information on Human Worker Activity. Sensors (Basel) 2023; 23:1283. [PMID: 36772323] [PMCID: PMC9920522] [DOI: 10.3390/s23031283]
Abstract
In most industrial workplaces, robots and other apparatus operate behind fences to avoid defects, hazards, or casualties. Recent advancements in machine learning can enable robots to co-operate with human co-workers while retaining safety, flexibility, and robustness. This article focuses on a computation model that provides a collaborative environment through intuitive and adaptive human-robot interaction (HRI). In essence, one layer of the model can be expressed as a set of useful information utilized by an intelligent agent, and within this construction, a vision-sensing modality can be broken down into multiple layers. The authors propose a human-skeleton-based trainable model for the recognition of spatiotemporal human worker activity using LSTM networks, which achieves a training accuracy of 91.365% on the InHARD dataset. Alongside the training results, aspects of the simulation environment and future improvements of the system are discussed. By combining human workers' upper-body positions with actions, the perceptual potential of the system is increased, and human-robot collaboration becomes context-aware. Based on the acquired information, the intelligent agent gains the ability to adapt its behavior to its dynamic and stochastic surroundings.
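A minimal PyTorch sketch of a skeleton-sequence LSTM classifier of the kind described; the layer sizes, joint count, and class count are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """Minimal LSTM classifier for skeleton-based worker-activity recognition.

    Input: batches of pose sequences shaped (batch, frames, joints * 3),
    i.e., flattened 3D joint coordinates per frame.
    """
    def __init__(self, n_joints=17, hidden=128, n_classes=14):
        super().__init__()
        self.lstm = nn.LSTM(n_joints * 3, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # per-frame hidden states
        return self.head(out[:, -1])   # classify from the last time step

# one forward pass on dummy data: 8 clips, 60 frames, 17 joints
model = SkeletonLSTM()
logits = model(torch.randn(8, 60, 17 * 3))
print(logits.shape)  # torch.Size([8, 14])
```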
14. Tsolakis N, Gasteratos A. Sensor-Driven Human-Robot Synergy: A Systems Engineering Approach. Sensors (Basel) 2022; 23:21. [PMID: 36616620] [PMCID: PMC9823401] [DOI: 10.3390/s23010021]
Abstract
Knowledge-based synergistic automation is a potential intermediate option between the opposite extremes of manual and fully automated robotic labor in agriculture. Disruptive information and communication technologies (ICT) and sophisticated solutions for human-robot interaction (HRI) endow a skilled farmer with enhanced capabilities to perform agricultural tasks more efficiently and productively. This research applies systems engineering principles to assess the design of a conceptual human-robot synergistic platform enabled by a sensor-driven ICT sub-system. The paper first presents an overview of a use case: a human-robot synergistic platform comprising a drone, a mobile platform, and wearable equipment. The technology framework constitutes a paradigm of human-centric worker-robot logistics synergy for high-value crops, applicable in operational environments of outdoor in-field harvesting and handling operations. Besides the physical sub-system, the ICT sub-system of the robotic framework consists of an extended sensor network for data acquisition, used to extract context (e.g., worker's status, environment awareness) and to plan and schedule the robotic agents of the framework. The research then explicitly presents the underpinning Design Structure Matrix (DSM), which systematically captures the interrelations between the sensors in the platform and the data/information signals that enable synergistic operations. The employed systems engineering approach provides a comprehensible analysis of the baseline structure of the examined human-robot synergy platform. In particular, the applied DSM allows for understanding and synthesizing the sensor sub-system's architecture and enriching its efficacy by informing targeted interventions and reconfiguring the developed robotic solution modules depending on the required farming tasks at an orchard. Human-centric solutions for the agrarian sector demand careful study of the features of the particular agri-field; thus, the insight the DSM provides to system designers can prove useful in the investigation of other similar data-driven applications.
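For readers unfamiliar with the tool, a Design Structure Matrix is essentially a dependency matrix over system components. The toy sketch below (component names are invented and do not describe the paper's platform) shows how sensor-to-signal interrelations can be encoded and queried.

```python
import numpy as np

# Illustrative DSM for a sensor sub-system: entry [i, j] = 1 means
# component j feeds data/information to component i.
components = ["drone camera", "wearable IMU", "GPS", "task planner", "mobile platform"]
dsm = np.array([
    [0, 0, 0, 0, 0],   # drone camera: no inputs from the others
    [0, 0, 0, 0, 0],   # wearable IMU
    [0, 0, 0, 0, 0],   # GPS
    [1, 1, 1, 0, 0],   # planner fuses camera, IMU, and GPS context
    [0, 0, 1, 1, 0],   # platform consumes GPS and the planner's schedule
])

for i, row in enumerate(dsm):
    feeds = [components[j] for j in np.flatnonzero(row)]
    print(f"{components[i]:<16} <- {', '.join(feeds) if feeds else '(none)'}")
```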
Affiliation(s)
- Naoum Tsolakis
- Department of Supply Chain Management, International Hellenic University, 570 01 Thessaloniki, Greece
- Institute for Bio-Economy and Agri-Technology (IBO), Centre of Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd., 570 01 Thessaloniki, Greece
- Antonios Gasteratos
- Department of Production and Management Engineering, Democritus University of Thrace, Vas. Sophias 12, 671 32 Xanthi, Greece
15. Yan Y, Jia Y. A Review on Human Comfort Factors, Measurements, and Improvements in Human-Robot Collaboration. Sensors (Basel) 2022; 22:7431. [PMID: 36236530] [PMCID: PMC9572111] [DOI: 10.3390/s22197431]
Abstract
With the development of robotics technologies for collaborative robots (COBOTs), applications of human-robot collaboration (HRC) have grown over the past decade. Despite tremendous efforts from both academia and industry, the overall usage and acceptance of COBOTs are still lower than expected. One of the major factors is human comfort in HRC, which is usually given little emphasis in COBOT development yet is critical to user acceptance. Therefore, this paper reviews human comfort in HRC, including the factors that influence it, its measurement in both subjective and objective manners, and approaches for improving it in the context of HRC. Each topic is also discussed based on the review and analysis.
16. Su YF, Tsai TH, Kuo KL, Wu CH, Tsai CY, Lu YM, Hwang SL, Lin PC, Lieu AS, Lin CL, Chang CH. Potential Roles of Teamwork and Unmet Needs on Surgical Learning Curves of Spinal Robotic Screw Placement. J Multidiscip Healthc 2022; 15:1971-1978. [PMID: 36105672] [PMCID: PMC9464635] [DOI: 10.2147/jmdh.s380707]
Abstract
Background The aim of this study was to investigate the learning curve of robotic spine surgery quantitatively with the well-described power law of practice. Methods Kaohsiung Medical University Hospital's neurosurgery department set up a robotic spine surgery team in 2013, and the orthopedic department joined the well-established team in 2014. A total of 150 consecutive cases received robot-assisted spinal surgery. The 150 cases, with 841 transpedicular screws, were divided into 3 groups: the first 50 cases performed by neurosurgeons, the first 50 cases by orthopedic surgeons, and 50 cases by neurosurgeons after the orthopedic surgeons joined the team. The time per screw and accuracy for each group and individual surgeon were analyzed. Results The time per screw for each group was 9.56 ± 4.19, 7.29 ± 3.64, and 8.74 ± 5.77 minutes, respectively (p = 0.0017). The accuracy was 99.6% (253/254), 99.5% (361/363), and 99.1% (222/224), respectively (p = 0.77). Although the first group took significantly more time per screw placement, the difference was not significant on the nonlinear parallelism F-test. Analysis of the first 10 short-segment cases of 5 surgeons showed that the time per screw for each surgeon was 12.28 ± 5.21, 6.38 ± 1.54, 8.68 ± 3.10, 6.33 ± 1.90, and 6.73 ± 1.81 minutes. The first surgeon, who initiated the robotic spine surgery, took significantly more time per screw, and the nonlinear parallelism test also revealed that only the first surgeon had a steeper learning curve. Conclusion This is the first study to demonstrate differences in learning curves between individual surgeons and teams. The roles of teamwork and the unmet needs due to lack of active perception are discussed.
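The power law of practice that the study fits has the form T(n) = T1 · n^(-b), where T(n) is the time for the n-th case and b is the learning rate; a minimal sketch of fitting it in log-log space, with invented operative times:

```python
import numpy as np

# Power law of practice: T(n) = T1 * n**(-b). Fit by linear regression
# in log-log space. The per-case times below are invented for illustration.
times = np.array([12.0, 10.1, 9.2, 8.0, 7.6, 7.1, 6.8, 6.5, 6.4, 6.2])
n = np.arange(1, len(times) + 1)

slope, intercept = np.polyfit(np.log(n), np.log(times), 1)
T1, b = np.exp(intercept), -slope
print(f"T(n) ≈ {T1:.2f} * n^(-{b:.3f})")  # larger b = faster learning
```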
Affiliation(s)
- Yu-Feng Su
- Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Division of Neurosurgery, Department of Surgery, Kaohsiung Municipal Ta-Tung Hospital, Kaohsiung, Taiwan
- Tai-Hsin Tsai
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Division of Neurosurgery, Department of Surgery, Kaohsiung Municipal Ta-Tung Hospital, Kaohsiung, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Keng-Liang Kuo
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chieh-Hsin Wu
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Cheng-Yu Tsai
- Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Yen-Mou Lu
- Department of Orthopedics, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Shiuh-Lin Hwang
- Department of Spinal Surgery, Chi-Hsien Spine Hospital, Kaohsiung, Taiwan
- Pei-Chen Lin
- Department of Oral Hygiene, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Ann-Shung Lieu
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Chih-Lung Lin
- Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chih-Hui Chang
- Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Neurosurgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
17. Costa GDM, Petry MR, Moreira AP. Augmented Reality for Human-Robot Collaboration and Cooperation in Industrial Applications: A Systematic Literature Review. Sensors (Basel) 2022; 22:2725. [PMID: 35408339] [PMCID: PMC9003100] [DOI: 10.3390/s22072725]
Abstract
With the continuously growing usage of collaborative robots in industry, the need for achieving seamless human-robot interaction has also increased, considering that it is a key factor towards reaching a more flexible, effective, and efficient production line. As a prominent and prospective tool to support the human operator in understanding and interacting with robots, Augmented Reality (AR) has been employed in numerous human-robot collaborative and cooperative industrial applications. Therefore, this systematic literature review critically appraises 32 papers published between 2016 and 2021 to identify the main employed AR technologies, outline the current state of the art of augmented reality for human-robot collaboration and cooperation, and point out future developments for this research field. Results suggest that this is still an expanding research field, especially with the advent of recent advancements in head-mounted displays (HMDs). Moreover, projector-based and HMD-based approaches are showing promising positive influences on operator-related aspects such as performance, task awareness, and safety feeling, even though HMDs need further maturation in ergonomic aspects. Further research should focus on large-scale assessment of the proposed solutions in industrial environments, involving the solutions' target audience, and on establishing standards and guidelines for developing AR assistance systems.
Affiliation(s)
- Gabriel de Moura Costa
- Department of Electrical and Computer Engineering, Faculdade de Engenharia da Universidade do Porto (FEUP), 4200-465 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
- Marcelo Roberto Petry
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
- António Paulo Moreira
- Department of Electrical and Computer Engineering, Faculdade de Engenharia da Universidade do Porto (FEUP), 4200-465 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
18. Pascher M, Kronhardt K, Franzen T, Gruenefeld U, Schneegass S, Gerken J. My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. Sensors (Basel) 2022; 22:755. [PMID: 35161503] [DOI: 10.3390/s22030755]
Abstract
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surrounding have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants suffering from physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception and Line presents an easy-to-understand alternative.
19. Maruyama T, Ueshiba T, Tada M, Toda H, Endo Y, Domae Y, Nakabo Y, Mori T, Suita K. Digital Twin-Driven Human Robot Collaboration Using a Digital Human. Sensors (Basel) 2021; 21:8266. [PMID: 34960355] [DOI: 10.3390/s21248266]
Abstract
Advances are being made in applying digital twin (DT) and human–robot collaboration (HRC) to industrial fields for safe, effective, and flexible manufacturing. Using a DT for human modeling and simulation enables ergonomic assessment during working. In this study, a DT-driven HRC system was developed that measures the motions of a worker and simulates the working progress and physical load based on digital human (DH) technology. The proposed system contains virtual robot, DH, and production management modules that are integrated seamlessly via wireless communication. The virtual robot module contains the robot operating system and enables real-time control of the robot based on simulations in a virtual environment. The DH module measures and simulates the worker’s motion, behavior, and physical load. The production management module performs dynamic scheduling based on the predicted working progress under ergonomic constraints. The proposed system was applied to a parts-picking scenario, and its effectiveness was evaluated in terms of work monitoring, progress prediction, dynamic scheduling, and ergonomic assessment. This study demonstrates a proof-of-concept for introducing DH technology into DT-driven HRC for human-centered production systems.
20. Khawaja FI, Kanazawa A, Kinugawa J, Kosuge K. A Human-Following Motion Planning and Control Scheme for Collaborative Robots Based on Human Motion Prediction. Sensors (Basel) 2021; 21:8229. [PMID: 34960323] [PMCID: PMC8706253] [DOI: 10.3390/s21248229]
Abstract
Human–Robot Interaction (HRI) for collaborative robots has become an active research topic recently. Collaborative robots assist human workers in their tasks and improve their efficiency. However, the worker should also feel safe and comfortable while interacting with the robot. In this paper, we propose a human-following motion planning and control scheme for a collaborative robot which supplies the necessary parts and tools to a worker in an assembly process in a factory. In our proposed scheme, a 3-D sensing system is employed to measure the skeletal data of the worker. At each sampling time of the sensing system, an optimal delivery position is estimated using the real-time worker data. At the same time, the future positions of the worker are predicted as probabilistic distributions. A Model Predictive Control (MPC)-based trajectory planner is used to calculate a robot trajectory that supplies the required parts and tools to the worker and follows the predicted future positions of the worker. We have installed our proposed scheme in a collaborative robot system with a 2-DOF planar manipulator. Experimental results show that the proposed scheme enables the robot to provide anytime assistance to a worker who is moving around in the workspace while ensuring the safety and comfort of the worker.
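As a rough illustration of the receding-horizon idea (not the authors' MPC formulation), the sketch below evaluates candidate constant velocities against the predicted human positions over the horizon; the grid resolution, horizon length, and speed limit are assumptions.

```python
import numpy as np

def plan_step(robot_pos, human_pred, dt=0.1, v_max=0.5, n_cand=21):
    """One receding-horizon planning step for a human-following robot.

    robot_pos:  (2,) current end-effector position in the plane [m]
    human_pred: (H, 2) predicted human positions over the horizon, e.g.
                the means of a probabilistic motion prediction
    Evaluates candidate constant velocities on a grid and returns the one
    minimizing the summed squared tracking error, as a simple stand-in
    for the quadratic program a full MPC would solve.
    """
    grid = np.linspace(-v_max, v_max, n_cand)
    best_v, best_cost = np.zeros(2), np.inf
    steps = np.arange(1, len(human_pred) + 1)[:, None] * dt
    for vx in grid:
        for vy in grid:
            v = np.array([vx, vy])
            traj = robot_pos + steps * v        # rollout over the horizon
            cost = np.sum((traj - human_pred) ** 2)
            if cost < best_cost:
                best_cost, best_v = cost, v
    return best_v                               # apply one step, then replan

v_cmd = plan_step(np.zeros(2), np.tile([1.0, 0.5], (10, 1)))
```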
Affiliation(s)
- Fahad Iqbal Khawaja
- Center for Transformative AI and Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
- Robotics and Intelligent Systems Engineering (RISE) Laboratory, Department of Robotics and Artificial Intelligence, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Sector H-12, Islamabad 44000, Pakistan
- Akira Kanazawa
- Center for Transformative AI and Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
- Jun Kinugawa
- Center for Transformative AI and Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
- Kazuhiro Kosuge
- Center for Transformative AI and Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
21. von Salm-Hoogstraeten S, Müsseler J. Human Cognition in Interaction With Robots: Taking the Robot's Perspective Into Account. Hum Factors 2021; 63:1396-1407. [PMID: 32648797] [PMCID: PMC8593285] [DOI: 10.1177/0018720820933764]
Abstract
OBJECTIVE The present study investigated whether and how different human-robot interactions in a physically shared workspace influenced human stimulus-response (SR) relationships. BACKGROUND Human work is increasingly performed in interaction with advanced robots. Since human-robot interaction often takes place in physical proximity, it is crucial to investigate the effects of the robot on human cognition. METHOD In two experiments, we compared conditions in which humans interacted with a robot that they either remotely controlled or monitored under otherwise comparable conditions in the same shared workspace. The cognitive extent to which the participants took the robot's perspective served as a dependent variable and was evaluated with an SR compatibility task. RESULTS The results showed pronounced compatibility effects from the robot's perspective when participants had to take the perspective of the robot during the task, but significantly reduced compatibility effects when human and robot did not interact. In both experiments, compatibility effects from the robot's perspective resulted in statistically significant differences in response times and in error rates between compatible and incompatible conditions. CONCLUSION We concluded that SR relationships from the perspective of the robot need to be considered when designing shared workspaces that require users to take the perspective of the robot. APPLICATION The results indicate changed compatibility relationships when users share their workplace with an interacting robot and therefore have to take its perspective from time to time. The perspective-dependent processing times are expected to be accompanied by corresponding error rates, which might affect, for instance, safety and efficiency in a production process.
22. Himmelsbach UB, Wendt TM, Hangst N, Gawron P, Stiglmeier L. Human-Machine Differentiation in Speed and Separation Monitoring for Improved Efficiency in Human-Robot Collaboration. Sensors (Basel) 2021; 21:7144. [PMID: 34770450] [PMCID: PMC8587097] [DOI: 10.3390/s21217144]
Abstract
Human-robot collaborative applications have been receiving increasing attention in industry, yet their efficiency is often quite low compared to traditional robotic applications without human interaction. Especially for applications that use speed and separation monitoring, there is potential to increase efficiency with a cost-effective and easy-to-implement method. In this paper, we propose adding human-machine differentiation to speed and separation monitoring in human-robot collaborative applications. The formula for the protective separation distance is extended with a variable for the kind of object that approaches the robot, and different sensors for differentiating human and non-human objects are presented. Thermal cameras are used for measurements in a proof of concept. Through differentiation of human and non-human objects, it is possible to decrease the protective separation distance between the robot and the object and therefore increase the overall efficiency of the collaborative application.
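To illustrate, here is a hedged sketch of a protective-separation-distance computation extended with an object-type flag; the term structure loosely follows ISO/TS 15066-style reasoning, and the non-human margin scaling is an invented placeholder for the paper's extended formula.

```python
def protective_separation_distance(v_obj, v_robot, t_react, t_stop,
                                   d_brake, c_intrusion, is_human):
    """Simplified protective separation distance with object differentiation.

    v_obj:       approach speed of the detected object [m/s]
    v_robot:     robot speed toward the object [m/s]
    t_react:     system reaction time [s]; t_stop: robot stopping time [s]
    d_brake:     robot braking distance [m]
    c_intrusion: safety margin [m]
    """
    if not is_human:
        # non-human object (e.g., an AGV): no person can be harmed, so a
        # smaller margin suffices (factor is illustrative, not the paper's)
        c_intrusion *= 0.5
        v_obj = 0.0          # no human motion toward the robot to cover
    s_obj = v_obj * (t_react + t_stop)     # object travel while stopping
    s_rob = v_robot * t_react + d_brake    # robot travel until standstill
    return s_obj + s_rob + c_intrusion

print(protective_separation_distance(1.6, 0.5, 0.1, 0.3, 0.2, 0.12, True))   # ~1.01 m
print(protective_separation_distance(1.6, 0.5, 0.1, 0.3, 0.2, 0.12, False))  # ~0.31 m
```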
23. Yu X, Zhang S, Liu Y, Li B, Ma Y, Min G. Co-carrying an object by robot in cooperation with humans using visual and force sensing. Philos Trans A Math Phys Eng Sci 2021; 379:20200373. [PMID: 34398646] [DOI: 10.1098/rsta.2020.0373]
Abstract
Human-robot collaboration poses many challenges where humans and robots work in a shared workspace, and co-carrying tasks are particularly difficult to accomplish for robots cooperating with humans. In our work, we focus on co-carrying an object by a robot in cooperation with a human using visual and force sensing. A framework using visual and force sensing is proposed for human-robot co-carrying tasks, enabling robots to actively cooperate with humans and reduce human effort. Visual sensing for perceiving human motion is incorporated into admittance-based force control, and a hybrid controller combining visual servoing with force feedback is proposed to generate refined robot motion. The proposed framework is validated with a co-carrying task in experiments comprising two phases: in Phase 1, the human hand holds one side of a box object, and the gripper of the Baxter robot automatically approaches and finally holds the other side; in Phase 2, the human and the Baxter robot co-carry the box over a distance to different target positions. This article is part of the theme issue 'Towards symbiotic autonomous systems'.
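A minimal sketch of the admittance-control building block such a framework relies on, assuming a point-mass virtual model and illustrative gains; in the paper's hybrid controller, the reference would come from visual servoing on the perceived human motion.

```python
import numpy as np

def admittance_step(x, dx, f_ext, x_ref, dt=0.002,
                    M=2.0, D=25.0, K=80.0):
    """One integration step of an admittance controller.

    Renders the virtual dynamics M*ẍ + D*ẋ + K*(x - x_ref) = f_ext, so the
    measured interaction force f_ext (from the wrist force sensor) pushes
    the commanded position x compliantly. Gains are illustrative.
    """
    ddx = (f_ext - D * dx - K * (x - x_ref)) / M
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx     # next commanded position and velocity

# example: a constant 5 N pull along one axis for 1 s
x, dx = np.zeros(3), np.zeros(3)
for _ in range(500):
    x, dx = admittance_step(x, dx, np.array([5.0, 0, 0]), np.zeros(3))
print(x)  # drifts toward f/K ≈ [0.0625, 0, 0]
```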
Affiliation(s)
- Xinbo Yu: Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Shuang Zhang: Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, People's Republic of China; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Yu Liu: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, People's Republic of China; Guangzhou Institute of Modern Industrial Technology, South China University of Technology, Guangzhou 511458, People's Republic of China
- Bin Li: Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, People's Republic of China; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Yinsong Ma: School of Electronic and Information Engineering, Beihang University, Beijing 100191, People's Republic of China
- Gaochen Min: School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
24. Aswad FE, Djogdom GVT, Otis MJD, Ayena JC, Meziane R. Image Generation for 2D-CNN Using Time-Series Signal Features from Foot Gesture Applied to Select Cobot Operating Mode. Sensors (Basel) 2021; 21:5743. PMID: 34502634; PMCID: PMC8434500; DOI: 10.3390/s21175743.
Abstract
Advances in robotics help reduce the burden that manufacturing tasks place on workers; for example, a cobot can serve as a "third arm" during assembly, which raises the need for new, intuitive control modalities. This paper presents a foot-gesture approach, built around robot control constraints, for switching between four operating modes. The control scheme is based on raw data acquired by an instrumented insole located at the human's foot, comprising an inertial measurement unit (IMU) and four force sensors. First, a gesture dictionary was proposed; from the acquired data, a set of 78 statistical features was computed and later reduced to three via analysis of variance (ANOVA). The collected time-series data were then converted into a 2D image and provided as input to a 2D convolutional neural network (CNN) for recognition of foot gestures, with every gesture assigned to a predefined cobot operating mode. The offline recognition rate proved highly dependent on which features were considered and on their spatial representation in the 2D image; we achieved the highest recognition rate for a representation arranging features into sets of triangular and rectangular forms. These results are encouraging for the use of CNNs to recognize foot gestures, which will then be associated with commands to control an industrial robot.
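A minimal sketch of the time-series-to-image step is given below. It maps normalised insole channels onto row bands of a fixed-size image; the paper instead arranges a reduced, ANOVA-selected feature set into triangular and rectangular patterns, so the layout here is only an assumption for illustration.

```python
import numpy as np

def features_to_image(window, size=(32, 32)):
    """Map a multichannel time-series window (channels x samples) from an
    instrumented insole onto a fixed-size 2D image usable as CNN input.
    Each channel is min-max normalised, resampled to the image width,
    and written into its own horizontal band."""
    n_ch, _ = window.shape
    img = np.zeros(size, dtype=np.float32)
    band = size[0] // n_ch
    for c, sig in enumerate(window):
        rng = sig.max() - sig.min()
        norm = (sig - sig.min()) / rng if rng > 0 else np.zeros_like(sig)
        row = np.interp(np.linspace(0, len(sig) - 1, size[1]),
                        np.arange(len(sig)), norm)
        img[c * band:(c + 1) * band, :] = row  # broadcast across the band
    return img

# Example: 8 insole channels (IMU axes + 4 force sensors), 200 samples.
img = features_to_image(np.random.rand(8, 200))
print(img.shape)  # (32, 32), ready as CNN input
```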
Affiliation(s)
- Fadwa El Aswad: Laboratory of Automation and Robotic Interaction (LAR.i), Department of Applied Sciences, Université du Québec à Chicoutimi (UQAC), 555 Boulevard de l'Université, Chicoutimi, QC G7H 2B1, Canada
- Gilde Vanel Tchane Djogdom: Laboratory of Automation and Robotic Interaction (LAR.i), Department of Applied Sciences, Université du Québec à Chicoutimi (UQAC), 555 Boulevard de l'Université, Chicoutimi, QC G7H 2B1, Canada; Technological Institute of Industrial Maintenance (ITMI), Sept-Iles College, 175 Rue de la Vérendrye, Sept-Iles, QC G4R 5B7, Canada
- Martin J.-D. Otis: Laboratory of Automation and Robotic Interaction (LAR.i), Department of Applied Sciences, Université du Québec à Chicoutimi (UQAC), 555 Boulevard de l'Université, Chicoutimi, QC G7H 2B1, Canada
- Johannes C. Ayena: Communications and Microelectronic Integration Laboratory (LACIME), Department of Electrical Engineering, École de Technologie Supérieure, 1100 Rue Notre-Dame Ouest, Montréal, QC H3C 1K3, Canada
- Ramy Meziane: Laboratory of Automation and Robotic Interaction (LAR.i), Department of Applied Sciences, Université du Québec à Chicoutimi (UQAC), 555 Boulevard de l'Université, Chicoutimi, QC G7H 2B1, Canada; Technological Institute of Industrial Maintenance (ITMI), Sept-Iles College, 175 Rue de la Vérendrye, Sept-Iles, QC G4R 5B7, Canada
25. Grushko S, Vysocký A, Heczko D, Bobovský Z. Intuitive Spatial Tactile Feedback for Better Awareness about Robot Trajectory during Human-Robot Collaboration. Sensors (Basel) 2021; 21:5748. PMID: 34502639; PMCID: PMC8434014; DOI: 10.3390/s21175748.
Abstract
In this work, we extend a previously proposed approach to improving mutual perception during human-robot collaboration, in which the robot's motion intentions and status are communicated to a human worker through hand-worn haptic feedback devices. The extension introduces spatial tactile feedback, which gives the human worker more intuitive information about the robot's currently planned trajectory and its spatial configuration. The enhanced feedback devices communicate directional information by activating six tactors spatially organised to represent an orthogonal coordinate frame: vibration is triggered on the side of the device closest to the robot's future path. To test the effectiveness of the improved human-machine interface, two user studies were conducted. The first quantitatively evaluated how easily users could differentiate the activation of individual tactors of the notification devices; the second assessed the overall usability of the enhanced notification mode for improving human awareness of the robot's planned trajectory. The first experiment allowed us to identify the tactors whose vibration intensity was most often confused by users. The second showed that the enhanced notification system let participants complete the task faster and, according to both objective and subjective data, generally improved user awareness of the robot's movement plan. Moreover, the majority of participants (82%) favoured the improved notification system over its previous non-directional version and over vision-based inspection.
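The directional mapping is straightforward to sketch: with six tactors on the half-axes of an orthogonal frame, the device activates the tactor whose axis best aligns with the direction from the hand to the nearest point on the robot's planned path. The code below assumes both points are expressed in the device frame; all names are illustrative.

```python
import numpy as np

# Six tactors arranged as an orthogonal frame on the hand-worn device:
# one per half-axis (+x, -x, +y, -y, +z, -z), as described in the paper.
TACTOR_AXES = {
    "+x": np.array([1, 0, 0]), "-x": np.array([-1, 0, 0]),
    "+y": np.array([0, 1, 0]), "-y": np.array([0, -1, 0]),
    "+z": np.array([0, 0, 1]), "-z": np.array([0, 0, -1]),
}

def select_tactor(hand_pos, closest_path_point):
    """Pick the tactor on the side of the device nearest the robot's
    planned trajectory: the half-axis most aligned with the direction
    from the hand to the closest point on the future path."""
    direction = np.asarray(closest_path_point, float) - np.asarray(hand_pos, float)
    direction = direction / np.linalg.norm(direction)  # assumes hand off-path
    return max(TACTOR_AXES, key=lambda k: TACTOR_AXES[k] @ direction)

print(select_tactor([0.0, 0.0, 0.0], [0.1, 0.6, 0.2]))  # -> "+y"
```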
26. Slovák J, Melicher M, Šimovec M, Vachálek J. Vision and RTLS Safety Implementation in an Experimental Human-Robot Collaboration Scenario. Sensors (Basel) 2021; 21:2419. PMID: 33915798; DOI: 10.3390/s21072419.
Abstract
Human-robot collaboration is becoming ever more widespread in industry because of its adaptability. Conventional safety elements are typically used when converting a workplace into a collaborative one, although newer technologies are becoming more common. This work proposes a safe robotic workplace that can adapt its operation and speed to the surrounding stimuli; its benefit lies in combining promising safety and collaboration technologies. Using a depth camera operating on the passive-stereo principle, safety zones are created around the robotic workplace, and objects moving around the workplace are identified along with their distance from the robotic system. Passive stereo employs two colour streams that enable distance computation based on pixel shift; the colour stream is also used for human identification, achieved with a Histogram of Oriented Gradients detector pre-trained for this purpose. The workplace also features autonomous trolleys for material supply, each unambiguously identified by a real-time location system through tags placed on the trolley. The workplace's speed, and whether its work is halted, depend on the positions of objects within the safety zones; the entry of a trolley holding an exception into a safety zone does not affect the workplace speed. The work simulates individual scenarios that may occur at a robotic workplace, with an emphasis on compliance with safety measures. The novelty lies in integrating a real-time location system into a vision-based safety system; neither technology is new by itself, but their interconnection to achieve exception handling, and thereby reduce downtime in the collaborative robotic system, is innovative.
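The exception-handling logic can be condensed into a small decision rule. The sketch below uses hypothetical zone thresholds and speed levels; RTLS-identified trolleys are skipped when computing the speed override, mirroring the exception mechanism described in the abstract.

```python
def workplace_speed(objects, zones):
    """Decide the robot speed override from objects detected in safety
    zones. Each object is a (kind, distance) pair; 'trolley' objects,
    identified via RTLS tags, carry an exception and never slow the
    workplace. Thresholds and speed levels are illustrative assumptions."""
    speed = 1.0  # full speed
    for kind, dist in objects:
        if kind == "trolley":        # RTLS-identified, exception granted
            continue
        if dist < zones["stop"]:     # innermost zone: halt the workplace
            return 0.0
        if dist < zones["slow"]:     # warning zone: reduce speed
            speed = min(speed, 0.3)
    return speed

zones = {"slow": 2.0, "stop": 0.5}
# A human in the warning zone slows the robot; the nearby trolley does not.
print(workplace_speed([("human", 1.2), ("trolley", 0.4)], zones))  # 0.3
```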
27. Kim H, Yang W. Variable Admittance Control Based on Human-Robot Collaboration Observer Using Frequency Analysis for Sensitive and Safe Interaction. Sensors (Basel) 2021; 21:1899. PMID: 33800522; DOI: 10.3390/s21051899.
Abstract
A collaborative robot should be sensitive to user intention while maintaining safe interaction during tasks such as hand guiding. Observers based on the discrete Fourier transform have been studied to distinguish the low-frequency motion elicited by the operator from the high-frequency behavior caused by system instability and disturbances; however, the discrete Fourier transform requires an excessively long sampling time. We propose a human-robot collaboration observer based on an infinite impulse response (IIR) filter to increase the speed of intention recognition and, using this observer, a variable admittance controller to ensure safe collaboration. The recognition speed of the proposed observer is 0.29 s, 3.5 times faster than frequency analysis based on the discrete Fourier transform. The performance of the variable admittance controller and the improved recognition speed are verified experimentally on a two-degrees-of-freedom manipulator, confirming that the faster recognition allows timely recovery from unsafe to safe collaboration.
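A plausible reading of the observer is a pair of IIR filters splitting the interaction signal into a low-frequency (intentional) band and a high-frequency (instability/disturbance) band, with the admittance damping switched on the energy ratio. The sketch below follows that reading with assumed cutoff, threshold, and damping values; it is not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Second-order Butterworth IIR filters splitting the interaction force
# signal into a low band (operator intention) and a high band
# (instability/disturbance). Sampling rate and cutoff are assumptions.
fs = 1000.0
b_lo, a_lo = butter(2, 5.0 / (fs / 2), btype="low")
b_hi, a_hi = butter(2, 5.0 / (fs / 2), btype="high")

def collaboration_state(force_window, threshold=1.0):
    """Flag the window 'unsafe' when high-frequency energy dominates;
    the threshold is an illustrative assumption."""
    e_lo = np.sum(lfilter(b_lo, a_lo, force_window) ** 2)
    e_hi = np.sum(lfilter(b_hi, a_hi, force_window) ** 2)
    return "unsafe" if e_hi > threshold * e_lo else "safe"

def admittance_damping(state, d_soft=10.0, d_stiff=60.0):
    """Variable admittance: low damping for sensitive hand guiding,
    high damping to suppress oscillation when the observer flags unsafety."""
    return d_soft if state == "safe" else d_stiff

t = np.arange(0, 0.3, 1 / fs)
guiding = np.sin(2 * np.pi * 1.0 * t)  # slow, intentional motion
print(admittance_damping(collaboration_state(guiding)))  # soft damping
```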
28. Pang G, Deng J, Wang F, Zhang J, Pang Z, Yang G. Development of Flexible Robot Skin for Safe and Natural Human-Robot Collaboration. Micromachines (Basel) 2018; 9:576. PMID: 30400665; DOI: 10.3390/mi9110576.
Abstract
For industrial manufacturing, industrial robots are required to work together with human counterparts on certain special occasions, where human workers share their skills with robots. Such intuitive human-robot interaction brings increasing safety challenges, which can be properly addressed with sensor-based active control technology. In this article, motivated by the need to enhance the safety performance of collaborative robots, we designed and fabricated a three-dimensional flexible robot skin made of a piezoresistive nanocomposite. The robot skin endowed a YuMi robot with tactile perception akin to human skin. The sensing unit developed for the skin showed a one-to-one correspondence between force input and resistance output (percentage change in impedance) over the range 0-6.5 N. Calibration indicated that the sensing unit offers a maximum force sensitivity (percentage change in impedance per newton) of 18.83%/N when loaded with an external force of 6.5 N, and it showed good reproducibility after cyclic loading (0-5.5 N) at 0.65 Hz for 3500 cycles. In addition, to suppress bypass crosstalk in the robot skin, we designed a readout circuit for sampling the tactile data. Experiments were conducted to estimate the contact/collision force between an object and the robot in real time, and the results showed that the implemented robot skin provides an efficient approach to natural and secure human-robot interaction.
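Converting the skin's raw output into a force estimate amounts to inverting the calibration curve. The sketch below interpolates over assumed calibration pairs chosen only to respect the reported 0-6.5 N range and the nonlinear rise in sensitivity; the actual calibration data are in the paper.

```python
import numpy as np

# Illustrative calibration pairs for one sensing unit: applied force (N)
# versus percentage change in impedance. These sample points are
# assumptions; only the 0-6.5 N range and nonlinear shape follow the paper.
cal_force = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 6.5])
cal_dz    = np.array([0.0, 8.0, 25.0, 50.0, 85.0, 122.0])

def force_from_impedance_change(dz_percent):
    """Invert the calibration curve by linear interpolation to estimate
    the contact force from a measured percentage change in impedance."""
    return np.interp(dz_percent, cal_dz, cal_force)

print(force_from_impedance_change(60.0))  # estimated contact force in N
```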
29. Haji Fathaliyan A, Wang X, Santos VJ. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human-Robot Collaboration. Front Robot AI 2018; 5:25. PMID: 33500912; PMCID: PMC7805858; DOI: 10.3389/frobt.2018.00025.
Abstract
Human-robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze-object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The "gaze object sequence" was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that the gaze object sequence is a promising feature for action recognition whose impact could be enhanced through the use of sophisticated machine learning classifiers and algorithmic improvements for real-time implementation. Robots capable of robust, real-time recognition of human actions during manipulation tasks could be used to improve quality of life in the home and quality of work in industrial environments.
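The gaze-object-sequence classifier can be illustrated with a classic DTW nearest-template rule. The sketch below substitutes a 0/1 label-mismatch cost for the paper's Euclidean metric, and the template sequences are invented for the example.

```python
import numpy as np

def dtw_distance(seq_a, seq_b, cost=lambda a, b: float(a != b)):
    """Classic dynamic time warping between two gaze object sequences.
    A 0/1 label-mismatch cost stands in here for the Euclidean metric
    used in the paper."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost(seq_a[i - 1], seq_b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(sequence, templates):
    """Nearest-template action recognition over characteristic
    (population-averaged) gaze object sequences."""
    return min(templates, key=lambda label: dtw_distance(sequence,
                                                         templates[label]))

templates = {  # invented characteristic gaze object sequences
    "pour": ["pitcher", "cup", "pitcher", "cup"],
    "stir": ["cup", "spoon", "cup", "cup"],
}
print(recognize(["pitcher", "pitcher", "cup", "cup"], templates))  # "pour"
```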
Affiliation(s)
- Veronica J. Santos: Biomechatronics Laboratory, Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, CA, United States
30.
Abstract
Robots collaborating naturally with a human partner in a confined workspace need to understand and predict human motions. Understanding requires a model-based approach, since the human motor control system relies on biomechanical properties to control and execute actions. Model-based control formulations explain human motions descriptively, which in turn enables predicting and analyzing human movement behaviors. In motor control, reaching motions are framed as an optimization problem; however, different optimality criteria predict disparate motion behaviors, so the inverse problem of finding the optimality criterion that explains a given arm motion trajectory does not have a unique solution. This paper implements an inverse optimal control (IOC) approach to determine the combination of cost functions that governs a motion execution. The results indicate that reaching motions depend on a trade-off between kinematics- and dynamics-related cost functions. However, the computational efficiency of IOC alone is not sufficient for the online prediction needed in human-robot interaction (HRI). To predict human reaching motions with high efficiency and accuracy, we therefore combine the IOC approach with a probabilistic movement primitives formulation. This hybrid model allows online-capable prediction while accounting for motor variability and interpersonal differences. The proposed framework thus affords both a descriptive and a generative model of human reaching motions, which can be effectively utilized online for human-in-the-loop robot control and task execution.
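The two halves of the hybrid model can be sketched compactly: a weighted combination of candidate cost features on the descriptive (IOC) side, and a basis-function trajectory representation, in the spirit of probabilistic movement primitives, on the generative side. All cost features, basis parameters, and the demonstration profile below are illustrative assumptions.

```python
import numpy as np

# Candidate cost features over a discretised trajectory q(t): a kinematic
# criterion (squared jerk) and a simplified stand-in for a dynamic one
# (squared acceleration). Real IOC uses richer, dynamics-based terms.
def sq_jerk(q, dt):
    return np.sum(np.diff(q, 3) ** 2) / dt ** 5

def sq_accel(q, dt):
    return np.sum(np.diff(q, 2) ** 2) / dt ** 3

def composite_cost(q, dt, w):
    """IOC models the executed motion as minimising a weighted combination
    of candidate criteria; w is the weight vector the inverse problem
    recovers (here it is only evaluated, not optimised)."""
    return w @ np.array([sq_jerk(q, dt), sq_accel(q, dt)])

# Generative side: represent trajectories with radial-basis-function
# weights fitted by least squares, so a mean motion can be predicted online.
def rbf_basis(t, n_basis=8, width=0.02):
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-((t[:, None] - centers) ** 2) / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

t = np.linspace(0, 1, 100)
demo = np.sin(np.pi * t / 2)                 # illustrative reaching profile
w_basis, *_ = np.linalg.lstsq(rbf_basis(t), demo, rcond=None)
pred = rbf_basis(t) @ w_basis                # mean prediction of the motion
print(round(composite_cost(demo, t[1] - t[0], np.array([0.7, 0.3])), 3))
print(round(float(np.max(np.abs(pred - demo))), 4))  # small fit error
```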
Affiliation(s)
- Ozgur S Oguz: Department of Electrical and Computer Engineering (EI), Technical University of Munich (TUM), Munich, Germany
- Zhehua Zhou: Department of Electrical and Computer Engineering (EI), Technical University of Munich (TUM), Munich, Germany
- Dirk Wollherr: Department of Electrical and Computer Engineering (EI), Technical University of Munich (TUM), Munich, Germany