1. Lakhnati Y, Pascher M, Gerken J. Exploring a GPT-based large language model for variable autonomy in a VR-based human-robot teaming simulation. Front Robot AI 2024;11:1347538. PMID: 38633059; PMCID: PMC11021771; DOI: 10.3389/frobt.2024.1347538.
Abstract
In a rapidly evolving digital landscape, autonomous tools and robots are becoming commonplace. Recognizing the significance of this development, this paper explores the integration of Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) into human-robot teaming environments to facilitate variable autonomy through verbal human-robot communication. We introduce a novel simulation framework for such a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting. This system allows users to interact with simulated robot agents through natural language, each agent powered by an individual GPT core. By means of OpenAI's function calling, we bridge the gap between unstructured natural language input and structured robot actions. A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a simulated multi-robot environment. Our findings suggest that users may have preconceived expectations about how to converse with robots and seldom try to explore the actual language and cognitive capabilities of their simulated robot collaborators. Still, those users who did explore benefited from a much more natural flow of communication and human-like back-and-forth. We provide a set of lessons learned for future research and technical implementations of similar systems.
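The bridge this abstract describes, from unstructured language to structured robot actions via OpenAI-style function calling, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the `move_to` schema, the `ACTIONS` table, and the `dispatch` helper are invented names, and the LLM call itself is omitted.

```python
import json

# Hypothetical tool schema in the OpenAI function-calling format: the LLM
# is told which structured actions a simulated robot agent exposes.
MOVE_TO_SCHEMA = {
    "name": "move_to",
    "description": "Move the robot to a named location in the scene.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

# Simulated robot actions keyed by function name.
ACTIONS = {
    "move_to": lambda location: f"robot moving to {location}",
}

def dispatch(function_call: dict) -> str:
    """Map a function call emitted by the model onto a robot action."""
    args = json.loads(function_call["arguments"])  # arguments arrive as JSON text
    return ACTIONS[function_call["name"]](**args)

# For an utterance like "please go to the red crate", the model would
# return a structured call roughly of this shape:
call = {"name": "move_to", "arguments": json.dumps({"location": "red crate"})}
print(dispatch(call))  # robot moving to red crate
```

In a full system, one such dispatcher would sit behind each robot agent's GPT core, turning the model's structured output into Unity-side behavior.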
Affiliation(s)
- Younes Lakhnati
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
- Max Pascher
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
- Human-Computer Interaction, University of Duisburg-Essen, Essen, NW, Germany
- Jens Gerken
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
2. Hack M, Drăgulin B, Hack L, ElSaafin M, Dumitrescu I, Stan D, Păcurar M. Comparative study on the results of orthodontic diagnostics by using algorithms generated by Artificial Intelligence and simple algorithms. Med Pharm Rep 2024;97:215-221. PMID: 38746029; PMCID: PMC11090271; DOI: 10.15386/mpr-2702.
Abstract
Introduction: Artificial intelligence (AI) is computer-generated intelligence, as opposed to the natural intelligence of humans and some animals. Kaplan and Haenlein define AI as "the ability of a system to correctly interpret external data, to learn from such data and use what it has learned to achieve specific goals and tasks through a flexible adaptation". The term "artificial intelligence" is used colloquially to describe machines that mimic the "cognitive" functions that people associate with other human minds. One of the areas where technological advances have brought significant changes is orthodontics, especially in terms of diagnosis and orthodontic prediction. The aim of this study is to conduct a comparative analysis between the results obtained by using the complete algorithms that define Artificial Intelligence and the simple algorithms of classical medical software, used in the detection of the position and shape of teeth in various orthodontic anomalies.
Methods: A group of 45 patients with dento-maxillary anomalies, Angle Class I (DDM with crowding and deviation of the superior inter-incisive line), was studied. Two types of algorithms were used in the study group: modern type I algorithms and simple algorithms used in classical software to detect the position of the frontal teeth. The facial axes were determined through the symmetrical points of the face, and after detection of the contour of each tooth the incisal curve was calculated. The median line was analyzed against the vertical axis of the face, and the incisal curve against the horizontal axis.
Results: The study shows that AI algorithms offer an increased level of tooth position detection compared to traditional software. Complex algorithms, specific to Artificial Intelligence, showed superior detection and more stability in the analysis.
Conclusion: Technological evolution and the development of machine learning capabilities have opened new perspectives in guiding orthodontic treatments through artificial intelligence (AI).
Affiliation(s)
- Marius Hack
- G.E. Palade University of Medicine, Pharmacy, Science and Technology, Targu Mures, Romania
- Mahmoud ElSaafin
- Orthodontic Department, Faculty of Dentistry, G.E. Palade University of Medicine, Pharmacy, Science and Technology, Targu Mures, Romania
- Iulia Dumitrescu
- Orthodontic Department, Faculty of Dentistry, G.E. Palade University of Medicine, Pharmacy, Science and Technology, Targu Mures, Romania
- Mariana Păcurar
- Orthodontic Department, Faculty of Dentistry, G.E. Palade University of Medicine, Pharmacy, Science and Technology, Targu Mures, Romania
3
|
Siebinga O, Zgonnikov A, Abbink DA. Modelling communication-enabled traffic interactions. ROYAL SOCIETY OPEN SCIENCE 2023; 10:230537. [PMID: 37234489 PMCID: PMC10206467 DOI: 10.1098/rsos.230537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 04/26/2023] [Indexed: 05/28/2023]
Abstract
A major challenge for autonomous vehicles is handling interactions with human-driven vehicles, for example in highway merging. A better understanding and computational modelling of human interactive behaviour could help address this challenge. However, existing modelling approaches predominantly neglect communication between drivers and assume that one modelled driver in the interaction responds to the other but does not actively influence their behaviour. Here, we argue that addressing these two limitations is crucial for the accurate modelling of interactions. We propose a new computational framework addressing these limitations. Similar to game-theoretic approaches, we model a joint interactive system rather than an isolated driver who only responds to their environment. Contrary to game theory, our framework explicitly incorporates communication between the two drivers and bounded rationality in each driver's behaviour. We demonstrate our model's potential in a simplified merging scenario of two vehicles, illustrating that it generates plausible interactive behaviour (e.g. aggressive and conservative merging). Furthermore, human-like gap-keeping behaviour emerged in a car-following scenario directly from risk perception, without the explicit implementation of time or distance gaps in the model's decision-making. These results suggest that our framework is a promising approach to interaction modelling that can support the development of interaction-aware autonomous vehicles.
Affiliation(s)
- O. Siebinga
- Department of Cognitive Robotics, Delft University of Technology, Mekelweg 2, Delft, The Netherlands
- A. Zgonnikov
- Department of Cognitive Robotics, Delft University of Technology, Mekelweg 2, Delft, The Netherlands
- D. A. Abbink
- Department of Cognitive Robotics, Delft University of Technology, Mekelweg 2, Delft, The Netherlands
4. User Profiling to Enhance Clinical Assessment and Human-Robot Interaction: A Feasibility Study. Int J Soc Robot 2023;15:501-516. PMID: 35846164; PMCID: PMC9266091; DOI: 10.1007/s12369-022-00901-1.
Abstract
Socially Assistive Robots (SARs) are designed to support us in daily life as companions and assistants, but also to support caregivers' work. SARs should show personalized and human-like behavior to improve their acceptance and, consequently, their use. Additionally, they should be trusted by caregivers and professionals if they are to support their work (e.g. as objective assessment or decision support tools). In this context, the aim of this paper is twofold. First, it presents and discusses a robot behavioral model based on sensing, perception, decision support, and interaction modules. The novel idea behind the proposed model is to extract and use the same multimodal feature set for two purposes: (i) to profile the user, so that the caregiver can use it as a decision support tool for the assessment and monitoring of the patient; and (ii) to fine-tune the human-robot interaction when the features can be correlated with social cues. Second, the paper tests the proposed model in a real environment using a SAR, namely ASTRO, which measures body posture, gait cycle, and handgrip strength during a walking support task. The collected data were analyzed to assess the clinical profile and to fine-tune the physical interaction. Ten older people (65.2 ± 15.6 years) were enrolled in this study and were asked to walk with ASTRO at their normal speed for 10 m. The obtained results indicate good estimation (p < 0.05) of gait parameters, handgrip strength, and angular excursion of the torso with respect to the most commonly used instruments. Additionally, the sensory outputs were combined in the perceptual model to profile the user using non-classical, unsupervised techniques for dimensionality reduction, namely T-distributed Stochastic Neighbor Embedding (t-SNE) and non-classical multidimensional scaling (nMDS). Indeed, these methods can group participants according to their residual walking abilities.
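The dimensionality-reduction step this abstract mentions, embedding a small cohort's multimodal features with t-SNE before grouping, can be illustrated with a minimal sketch. The feature matrix, group structure, and parameters below are invented for illustration and are not the study's data or pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Invented multimodal feature matrix: 10 participants x 6 features
# (e.g. gait speed, cadence, handgrip strength, torso excursion, ...).
features = np.vstack([
    rng.normal(0.0, 0.1, size=(5, 6)),   # five participants with similar abilities
    rng.normal(1.0, 0.1, size=(5, 6)),   # a second, distinct group
])

# Perplexity must stay below the number of samples for small cohorts.
embedding = TSNE(n_components=2, perplexity=4, random_state=0,
                 init="random").fit_transform(features)
print(embedding.shape)  # (10, 2)
```

A 2-D embedding like this is what lets well-separated residual-ability groups become visible; nMDS would play an analogous role with a different distance-preservation objective.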
5. Schleidgen S, Friedrich O. Joint Interaction and Mutual Understanding in Social Robotics. Sci Eng Ethics 2022;28:48. PMID: 36289139; PMCID: PMC9606022; DOI: 10.1007/s11948-022-00407-z.
Abstract
Social robotics aims at designing robots capable of joint interaction with humans. On a conceptual level, sufficient mutual understanding is usually said to be a necessary condition for joint interaction. Against this background, the following questions remain open: in which sense is it legitimate to speak of human-robot joint interaction? What exactly does it mean to speak of humans and robots sufficiently understanding each other to account for human-robot joint interaction? Is such joint interaction effectively possible by reference, e.g., to the mere ascription or simulation of understanding? To answer these questions, we first discuss technical approaches which aim at the implementation of certain aspects of human-human communication and interaction in social robots in order to make robots accessible and understandable to humans and, hence, human-robot joint interaction possible. Second, we examine the human tendency to anthropomorphize in this context, with a view to human understanding of and joint interaction with social robots. Third, we analyze the most prominent concepts of mutual understanding and their implications for human-robot joint interaction. We conclude that it is, at least for the time being, not legitimate to speak of human-robot joint interaction, which has relevant implications both morally and ethically.
Affiliation(s)
- Sebastian Schleidgen
- FernUniversität in Hagen, Institute of Philosophy, Universitätsstrasse 33, 58097, Hagen, Germany
- Orsolya Friedrich
- FernUniversität in Hagen, Institute of Philosophy, Universitätsstrasse 33, 58097, Hagen, Germany
6. Kerzel M, Ambsdorf J, Becker D, Lu W, Strahl E, Spisak J, Gäde C, Weber T, Wermter S. What's on Your Mind, NICO? Künstliche Intelligenz 2022. DOI: 10.1007/s13218-022-00772-8.
Abstract
Explainable AI has become an important field of research on neural machine learning models. However, most existing methods are designed as tools that provide expert users with additional insights into their models. In contrast, in human-robot interaction scenarios, non-expert users are frequently confronted with complex, embodied AI systems whose inner workings are unknown. Therefore, eXplainable Human-Robot Interaction (XHRI) should leverage the user's intuitive ability to collaborate and to use efficient communication. Using NICO, the Neuro-Inspired COmpanion, as a use-case study, we propose an XHRI framework and show how different types of explanations enhance the interaction experience. These explanations range from (a) non-verbal cues for simple and intuitive feedback of inner states, via (b) comprehensive verbal explanations of the robot's intentions, knowledge and reasoning, to (c) multimodal explanations using visualizations, speech and text. We revisit past HRI-related studies conducted with NICO and analyze them with the proposed framework. Furthermore, we present two novel XHRI approaches to extract suitable verbal and multimodal explanations from neural network modules in an HRI scenario.
7. Matarese M, Rea F, Sciutti A. Perception is Only Real When Shared: A Mathematical Model for Collaborative Shared Perception in Human-Robot Interaction. Front Robot AI 2022;9:733954. PMID: 35783020; PMCID: PMC9240641; DOI: 10.3389/frobt.2022.733954.
Abstract
In everyday collaborative tasks, partners have to build a shared understanding of their environment by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of the individual perception to others with whom we share the same environment. In this regard, social cognitive processes, such as joint attention and perspective-taking, form a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and a common understanding of the environment with their partners. In this work, we assessed whether a robot that considers the differences in perception between itself and its partner can be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators' knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario. The experiment consisted of a cooperative game in which participants had to build towers of Lego bricks while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner's point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would have yielded the highest score from its individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot's behaviour did not change between conditions.
Affiliation(s)
- Marco Matarese
- DIBRIS Department, University of Genoa, Genoa, Italy
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
- Francesco Rea
- RBCS Unit, Italian Institute of Technology, Genoa, Italy
8. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526112.
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up for a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
9. Human-robot collaboration: A multilevel and integrated leadership framework. Leadersh Q 2022. DOI: 10.1016/j.leaqua.2021.101594.
10. Kathleen B, Víctor FC, Amandine M, Aurélie C, Elisabeth P, Michèle G, Rachid A, Hélène C. Addressing joint action challenges in HRI: Insights from psychology and philosophy. Acta Psychol (Amst) 2022;222:103476. PMID: 34974283; DOI: 10.1016/j.actpsy.2021.103476.
Abstract
The vast expansion of research in human-robot interaction (HRI) over the last decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances have encountered significant challenges in ensuring fluent interactions and sustaining human motivation through the different steps of joint action. After exploring the current literature on joint action in HRI, leading to a more precise definition of these challenges, the present article proposes some perspectives borrowed from psychology and philosophy showing the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions thus suggests that some communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective of robotic communication.
Affiliation(s)
- Belhassein Kathleen
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France; LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Alami Rachid
- LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Cochet Hélène
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France
11. Kneer M. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Cogn Sci 2021;45:e13032. PMID: 34606119; PMCID: PMC9285490; DOI: 10.1111/cogs.13032.
Abstract
The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.
Affiliation(s)
- Markus Kneer
- Center for Ethics, Department of Philosophy, University of Zurich
- Digital Society Initiative, University of Zurich
12. Selvaggio M, Cognetti M, Nikolaidis S, Ivaldi S, Siciliano B. Autonomy in Physical Human-Robot Interaction: A Brief Survey. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3100603.
13. Funkhouser E. Evolutionary psychology, learning, and belief signaling: design for natural and artificial systems. Synthese 2021;199:14097-14119. PMID: 34565916; PMCID: PMC8449699; DOI: 10.1007/s11229-021-03412-0.
Abstract
Recent work in the cognitive sciences has argued that beliefs sometimes acquire signaling functions in virtue of their ability to reveal information that manipulates "mindreaders." This paper sketches some of the evolutionary and design considerations that could take agents from solipsistic goal pursuit to beliefs that serve as social signals. Such beliefs will be governed by norms besides just the traditional norms of epistemology (e.g., truth and rational support). As agents become better at detecting the agency of others, either through evolutionary history or individual learning, the candidate pool for signaling expands. This logic holds for natural and artificial agents that find themselves in recurring social situations that reward the sharing of one's thoughts.
Affiliation(s)
- Eric Funkhouser
- Department of Philosophy, University of Arkansas, Fayetteville, AR, USA
14.
Abstract
The explainability of robotic systems depends on people’s ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties to communicate about objects or adapt to events in the environment. The challenge faced by social robotics researchers or designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human–robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots’ perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.
15.
Abstract
Social robots that can interact and communicate with people are growing in popularity for use at home and in customer-service, education, and healthcare settings. Although growing evidence suggests that co-operative and emotionally aligned social robots could benefit users across the lifespan, controversy continues about the ethical implications of these devices and their potential harms. In this perspective, we explore this balance between benefit and risk through the lens of human-robot relationships. We review the definitions and purposes of social robots, explore their philosophical and psychological status, and relate research on human-human and human-animal relationships to the emerging literature on human-robot relationships. Advocating a relational rather than essentialist view, we consider the balance of benefits and harms that can arise from different types of relationship with social robots and conclude by considering the role of researchers in understanding the ethical and societal impacts of social robotics.
Affiliation(s)
- Tony J. Prescott
- Department of Computer Science, University of Sheffield, Sheffield, UK
16. Guo H, Pu X, Chen J, Meng Y, Yeh MH, Liu G, Tang Q, Chen B, Liu D, Qi S, Wu C, Hu C, Wang J, Wang ZL. A highly sensitive, self-powered triboelectric auditory sensor for social robotics and hearing aids. Sci Robot 2018;3(20):eaat2516. PMID: 33141730; DOI: 10.1126/scirobotics.aat2516.
Abstract
The auditory system is the most efficient and straightforward communication channel for connecting human beings and robots. Here, we designed a self-powered triboelectric auditory sensor (TAS) for constructing an electronic auditory system and an architecture for an external hearing aid in intelligent robotic applications. Based on newly developed triboelectric nanogenerator (TENG) technology, the TAS showed ultrahigh sensitivity (110 millivolts/decibel). A TAS with a broadband response from 100 to 5000 hertz was achieved by designing the annular or sectorial inner-boundary architecture with systematic optimization. When incorporated with intelligent robotic devices, the TAS demonstrated high-quality music recording and accurate voice recognition for realizing intelligent human-robot interaction. Furthermore, a tunable resonant frequency of the TAS was achieved by adjusting the geometric design of the inner-boundary architecture, which could be used to amplify a specific sound wave naturally. On the basis of this unique property, we propose a hearing aid based on the TENG technique, which can simplify the signal-processing circuit and reduce power consumption. This work demonstrates notable advantages of using TENG technology to build a new generation of auditory systems for meeting the challenges in social robotics.
Affiliation(s)
- Hengyu Guo
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Xianjie Pu
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Jie Chen
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Yan Meng
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Min-Hsin Yeh
- Department of Chemical Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
- Guanlin Liu
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Qian Tang
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Baodong Chen
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Di Liu
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Song Qi
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Changsheng Wu
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Chenguo Hu
- Department of Applied Physics, State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, P. R. China
- Jie Wang
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Zhong Lin Wang
- Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, P. R. China
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
17. Chen B, Vondrick C, Lipson H. Visual behavior modelling for robotic theory of mind. Sci Rep 2021;11:424. PMID: 33431917; PMCID: PMC7801744; DOI: 10.1038/s41598-020-77918-x.
Abstract
Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability with which we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information or assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal, non-symbolic robotic experiment in which an observer must visualize future plans of an actor robot based only on an image depicting the initial scene of the actor robot. We found that an AI observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
Affiliation(s)
- Boyuan Chen
- Computer Science, Columbia University, Mudd 535, 500 W 120 St, New York, NY, 10027, USA
- Carl Vondrick
- Computer Science, Columbia University, Mudd 535, 500 W 120 St, New York, NY, 10027, USA
- 611 CEPSR, 530 W 120 St, New York, NY, 10027, USA
- Hod Lipson
- Mechanical Engineering, Columbia University, Mudd 535E, 500 W 120 St, New York, NY, 10027, USA
- Data Science, Columbia University, New York, NY, 10027, USA

18
Yang GZ, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, Jacobstein N, Kumar V, McNutt M, Merrifield R, Nelson BJ, Scassellati B, Taddeo M, Taylor R, Veloso M, Wang ZL, Wood R. The grand challenges of Science Robotics. Sci Robot 2018; 3(14):eaar7650. [PMID: 33141701 DOI: 10.1126/scirobotics.aar7650]
Abstract
One of the ambitions of Science Robotics is to deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries. Of our 10 grand challenges, the first seven represent underpinning technologies that have a wider impact on all application areas of robotics. For the next two challenges, we have included social robotics and medical robotics as application-specific areas of development to highlight the substantial societal and health impacts that they will bring. Finally, the last challenge is related to responsible innovation and to how ethics and security should be carefully considered as we develop the technology further.
Affiliation(s)
- Guang-Zhong Yang
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK
- Jim Bellingham
- Center for Marine Robotics, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
- Pierre E Dupont
- Department of Cardiovascular Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Peer Fischer
- Institute of Physical Chemistry, University of Stuttgart, Stuttgart, Germany; Micro, Nano, and Molecular Systems Laboratory, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Luciano Floridi
- Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK; Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK; Department of Computer Science, University of Oxford, Oxford, UK; Data Ethics Group, Alan Turing Institute, London, UK; Department of Economics, American University, Washington, DC 20016, USA
- Robert Full
- Department of Integrative Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Neil Jacobstein
- Singularity University, NASA Research Park, Moffett Field, CA 94035, USA; MediaX, Stanford University, Stanford, CA 94305, USA
- Vijay Kumar
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcia McNutt
- National Academy of Sciences, Washington, DC 20418, USA
- Robert Merrifield
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK
- Bradley J Nelson
- Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zurich, Switzerland
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT 06520, USA; Department of Mechanical Engineering and Materials Science, Yale University, New Haven, CT 06520, USA
- Mariarosaria Taddeo
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK; Department of Computer Science, University of Oxford, Oxford, UK; Data Ethics Group, Alan Turing Institute, London, UK
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Manuela Veloso
- Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Zhong Lin Wang
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Robert Wood
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, MA 02138, USA

19
Zhu Q, Williams T, Jackson B, Wen R. Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective. Sci Eng Ethics 2020; 26:2511-2526. [PMID: 32632786 DOI: 10.1007/s11948-020-00246-w]
Abstract
Empirical studies have suggested that language-capable robots have the persuasive power to shape shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also an opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans' proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot's ability to employ blame-laden moral rebukes in response to unethical human requests is crucial for cultivating a flourishing "moral ecology" of human-robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.
Affiliation(s)
- Qin Zhu
- Division of Humanities, Arts and Social Sciences, Colorado School of Mines, Golden, USA
- Tom Williams
- Department of Computer Science, Colorado School of Mines, Golden, USA
- Blake Jackson
- Department of Computer Science, Colorado School of Mines, Golden, USA
- Ruchen Wen
- Department of Computer Science, Colorado School of Mines, Golden, USA

20
Inference of Other’s Minds with Limited Information in Evolutionary Robotics. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00660-x]

21

22

Marchetti A, Miraglia L, Di Dio C. Toward a Socio-Material Approach to Cognitive Empathy in Autistic Spectrum Disorder. Front Psychol 2020; 10:2965. [PMID: 31998198 PMCID: PMC6967260 DOI: 10.3389/fpsyg.2019.02965]
Affiliation(s)
- Antonella Marchetti
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy; Faculty of Education, Università Cattolica del Sacro Cuore, Milan, Italy
- Laura Miraglia
- Faculty of Education, Università Cattolica del Sacro Cuore, Milan, Italy
- Cinzia Di Dio
- Research Unit on Theory of Mind, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy; Faculty of Education, Università Cattolica del Sacro Cuore, Milan, Italy

23
Keshmiri S, Sumioka H, Yamazaki R, Shiomi M, Ishiguro H. Information Content of Prefrontal Cortex Activity Quantifies the Difficulty of Narrated Stories. Sci Rep 2019; 9:17959. [PMID: 31784577 PMCID: PMC6884437 DOI: 10.1038/s41598-019-54280-1]
Abstract
The ability to recognize individuals' impressions during verbal communication would allow social robots to significantly facilitate social interaction in such areas as child education and elderly care. However, such impressions are highly subjective and internalized and therefore cannot easily be comprehended through behavioural observation. Although brain-machine interface research suggests the utility of brain information in human-robot interaction, previous studies did not consider its potential for estimating internal impressions during verbal communication. In this article, we introduce a novel approach to estimating individuals' perceived difficulty of stories using the quantified information content of their prefrontal cortex activity. We demonstrate the robustness of our approach by showing its comparable performance in face-to-face, humanoid, speaker, and video-chat settings. Our results contribute to the field of socially assistive robotics by taking a step toward enabling robots to determine their human companions' perceived difficulty of conversations, thereby enabling these media to sustain communication with humans by adapting to individuals' pace and interest in response to conversational nuances and complexity.
Affiliation(s)
- Soheil Keshmiri
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Hidenobu Sumioka
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Ryuji Yamazaki
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
- Masahiro Shiomi
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Hiroshi Ishiguro
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Graduate School of Engineering Science, Osaka University, Osaka, Japan

24
Abstract
We developed an autonomous human-like guide robot for a science museum. It identifies individuals, estimates the exhibits at which visitors are looking, and proactively approaches them to provide explanations with gaze autonomously, using our new approach called speak-and-retreat interaction. The robot also performs such relation-building behaviors as greeting visitors by their names and expressing a friendlier attitude toward repeat visitors. We conducted a field study in a science museum, in which our system operated essentially autonomously and visitors responded quite positively. First-time visitors on average interacted with the robot for about 9 min, and 94.74% expressed a desire to interact with it again in the future. Repeat visitors noticed its relation-building capability and perceived a closer relationship with it.

25

Keshmiri S, Shiomi M, Shatani K, Minato T, Ishiguro H. Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals. Sci Rep 2019; 9:11924. [PMID: 31417172 PMCID: PMC6695382 DOI: 10.1038/s41598-019-48481-x]
Abstract
Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day interaction. In this article, we use cluster analysis to show that individuals' facial pre-touch space can be divided into three well-defined subspaces and that, within the first two clusters immediately around the face, such distance information significantly correlates with openness in the five-factor model (FFM). In these two clusters, we also show that individuals' facial pre-touch space can predict their level of openness, categorized into six distinct levels, with accuracy well above chance. Our results suggest that personality factors such as openness are not only reflected in individuals' behavioural responses but that these responses also allow for a fine-grained categorization of individuals' personality.
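The cluster-analysis step in this entry is easy to picture: one-dimensional pre-touch distances are partitioned into a small number of groups. The sketch below is an illustrative k-means toy in plain NumPy, not the authors' code, and the three bands of distance values are invented for demonstration.

```python
import numpy as np

def cluster_pretouch(distances, k=3, iters=50):
    """Toy 1-D k-means over scalar pre-touch distances (cm).
    Returns cluster centres sorted from nearest to farthest."""
    x = np.asarray(distances, dtype=float)
    # Quantile initialisation keeps the toy deterministic.
    centres = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each sample to its nearest centre, then recompute means.
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        centres = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return np.sort(centres)

# Invented sample: three bands of approach distances around 5, 15 and 30 cm.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(mu, 1.0, 200) for mu in (5.0, 15.0, 30.0)])
centres = cluster_pretouch(samples)
```

With well-separated bands the recovered centres sit near the band means, which is the sense in which the pre-touch space divides into "well-defined subspaces".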
Affiliation(s)
- Soheil Keshmiri
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Masahiro Shiomi
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Kodai Shatani
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Takashi Minato
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Hiroshi Ishiguro
- Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan
- Graduate School of Engineering Science, Osaka University, Osaka, Japan

26
Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults' Prefrontal Cortex during Story Comprehension: A Comparative Analysis. Entropy 2019; 21:e21020199. [PMID: 33266914 PMCID: PMC7514681 DOI: 10.3390/e21020199]
Abstract
Today's communication media transform virtually every aspect of our daily communication, and yet the extent of their embodiment in our brain is unexplored. Studying this topic becomes more crucial considering the rapid advances in such fields as socially assistive robotics, which envision the use of intelligent and interactive media for providing assistance through social means. In this article, we utilize multiscale entropy (MSE) to investigate the effect of physical embodiment on older people's prefrontal cortex (PFC) activity while listening to stories. We provide evidence that physical embodiment induces a significant increase in the MSE of older people's PFC activity and that such a shift in the dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit researchers in age-related cognitive function and rehabilitation who seek to adopt these media in robot-assisted cognitive training of older people. In addition, they offer complementary information to the field of human-robot interaction by providing evidence that the use of MSE can enable interactive learning algorithms to utilize the brain's activation patterns as feedback for improving their level of interactivity, thereby forming a stepping stone toward a rich and usable human mental model.
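Multiscale entropy itself is a standard construction: coarse-grain the signal at increasing scales, then take the sample entropy of each coarse-grained series. The sketch below is a minimal NumPy illustration of that generic procedure, not the authors' implementation; the embedding dimension m = 2 and tolerance r = 0.2 × SD are conventional defaults assumed here, not taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    """Sample entropy of a 1-D signal: -log of the conditional probability
    that sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.2 * np.std(x)
    n = len(x)

    def matches(length):
        # All overlapping templates of the given length.
        tmpl = np.array([x[i:i + length] for i in range(n - length)])
        total = 0
        for i in range(len(tmpl) - 1):
            # Chebyshev distance to all later templates (no self-matches).
            d = np.max(np.abs(tmpl[i + 1:] - tmpl[i]), axis=1)
            total += int(np.sum(d < tol))
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2):
    """MSE curve: sample entropy of non-overlapping coarse-grained means,
    with the tolerance fixed from the original signal's SD."""
    x = np.asarray(x, dtype=float)
    tol = 0.2 * np.std(x)
    curve = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        curve.append(sample_entropy(coarse, m, tol))
    return curve
```

On this construction, a highly regular signal (a slow sinusoid) yields a much lower scale-1 entropy than white noise, which is the kind of complexity shift the article reads out of the PFC recordings.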

27

Boccignone G, Conte D, Cuculo V, D'Amelio A, Grossi G, Lanzarotti R. Deep Construction of an Affective Latent Space via Multimodal Enactment. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2788820]

28
Winfield AFT. Experiments in Artificial Theory of Mind: From Safety to Story-Telling. Front Robot AI 2018; 5:75. [PMID: 33500954 PMCID: PMC7806090 DOI: 10.3389/frobt.2018.00075]
Abstract
Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend firstly that such models—especially when tested experimentally—can provide useful insights into cognition, and secondly that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these a robot equipped with the internal model interacts with other robots without an internal model, but acting as proxy humans; in others two robots each with a simulation-based internal model interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
Affiliation(s)
- Alan F T Winfield
- Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom

29
Seeing mental states: An experimental strategy for measuring the observability of other minds. Phys Life Rev 2018; 24:67-80. [DOI: 10.1016/j.plrev.2017.10.002]

30
Anshar M, Williams MA. Evolving robot empathy towards humans with motor disabilities through artificial pain generation. AIMS Neurosci 2018; 5:56-73. [PMID: 32341951 PMCID: PMC7181891 DOI: 10.3934/neuroscience.2018.1.56]
Abstract
In contact assistive robots, prolonged physical engagement between robots and humans with motor disabilities due to, for instance, shoulder injuries may at times lead humans to experience pain. In this situation, robots will require sophisticated capabilities, such as the ability to recognize human pain in advance and generate counter-responses as follow-up empathic actions. Hence, it is important for robots to acquire an appropriate pain concept that allows them to develop these capabilities. This paper conceptualizes empathy generation through the realization of synthetic pain classes integrated into a robot's self-awareness framework, with fault detection on the robot body serving as the primary source of pain activation. Projection of human shoulder motion onto the robot arm motion acts as a fusion process, which is used as a medium to gather information for analysis and then to generate corresponding synthetic pain and empathic responses. An experiment is designed to mirror a human peer's shoulder motion onto an observer robot. The results demonstrate that the fusion takes place accurately whenever unified internal states are achieved, allowing accurate classification of synthetic pain categories and generation of empathic responses in a timely fashion. Future work will consider the development of a pain activation mechanism.
Affiliation(s)
- Muh Anshar
- Social, Cognitive Robotics and Advanced Artificial Intelligent Research Centre, Department of Electrical Engineering, Universitas Hasanuddin (UNHAS), Makassar, Indonesia
- Mary-Anne Williams
- Innovation and Enterprise Research Lab, Centre for Artificial Intelligence, University of Technology Sydney (UTS), Australia

31

32
Rossi S, Ferland F, Tapus A. User profiling and behavioral adaptation for HRI: A survey. Pattern Recognit Lett 2017. [DOI: 10.1016/j.patrec.2017.06.002]

33
Lee K, Choo H. Constructing Perceptual Common Ground Between Human and Robot Through Joint Attention. Int J Hum Robot 2017. [DOI: 10.1142/s0219843617500207]
Abstract
Joint attention is a communicative activity that allows social partners to share perceptual experiences by jointly attending to an environmental object. Unlike the common approach to joint attention in robotics, which is based on the developmental view, here it is conceptualized with a psychophysical paradigm known as cueing. The triadic interaction of joint attention is formalized as the conditional probability of an attentional response for a given target candidate, derived from object features and from a cue given by a human partner's indication. A robotic system to which the joint attention model is applied conducted a series of tasks to demonstrate the properties of the computational model. The robotic system successfully performed tasks that could not be specified by information derived from a target object alone; furthermore, the system demonstrated how perceptual and selection ambiguity is resolved through joint attentive interaction and converges into a common perceptual state. The results imply that a perceptual common ground is constructed on the triadic relationship between user, robot, and objects through joint attentive interaction.
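The formalisation in this entry, a conditional probability of attending to a candidate given its object features and the partner's cue, can be pictured as a simple product-of-evidence rule. The following is a hypothetical sketch of that idea, not Lee and Choo's actual model; the saliency and cue numbers are invented.

```python
import numpy as np

def joint_attention(saliency, cue_likelihood):
    """Posterior over target candidates: feature-driven saliency fused with
    the likelihood of each candidate under the partner's cue (e.g. pointing)."""
    s = np.asarray(saliency, dtype=float)
    c = np.asarray(cue_likelihood, dtype=float)
    posterior = s * c                 # elementwise Bayes-style fusion
    posterior /= posterior.sum()      # normalise to a probability distribution
    return int(np.argmax(posterior)), posterior

# Two equally salient objects: features alone cannot select a target...
_, flat = joint_attention([0.5, 0.5], [0.5, 0.5])
# ...but a pointing-like cue favouring object 0 resolves the ambiguity.
target, posterior = joint_attention([0.5, 0.5], [0.9, 0.1])
```

The second call is the disambiguation case the abstract describes: the cue breaks a tie that object features leave open, converging observer and partner on a common target.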
Affiliation(s)
- Kangwoo Lee
- College of Information and Communication Engineering, Sungkyunkwan University, Chonchon-dong, Jangan-gu, Suwon 440-746, South Korea
- Hyunseung Choo
- College of Software, Sungkyunkwan University, Chonchon-dong, Jangan-gu, Suwon 440-746, South Korea

34
Bao Y, Cuijpers RH. On the Imitation of Goal Directed Movements of a Humanoid Robot. Int J Soc Robot 2017. [DOI: 10.1007/s12369-017-0417-8]

35
Hiatt LM, Narber C, Bekele E, Khemlani SS, Trafton JG. Human modeling for human–robot collaboration. Int J Rob Res 2017. [DOI: 10.1177/0278364917690592]
Affiliation(s)
- Cody Narber
- Naval Research Laboratory, Washington, DC, USA

36
Task Oriented Control of a Humanoid Robot Through the Implementation of a Cognitive Architecture. J Intell Robot Syst 2017. [DOI: 10.1007/s10846-016-0383-7]

37
Garcia DH, Monje C, Balaguer C. A use case of an adaptive cognitive architecture for the operation of humanoid robots in real environments. Int J Adv Robot Syst 2016. [DOI: 10.1177/1729881416678133]
Abstract
Future trends in robotics call for robots that can work, interact and collaborate with humans. Developing these kinds of robots requires the development of intelligent behaviours. As a minimum standard for a behaviour to be considered intelligent, it must at least present the ability to learn skills, represent skill knowledge, and adapt and generate new skills. In this work, a cognitive framework is proposed for learning and adapting models of robot skill knowledge. The proposed framework is meant to allow an operator to teach the robot, by demonstration, the motion of a task skill it must reproduce; to build a knowledge base of the learned skills allowing for their storage, classification and retrieval; and to adapt and generate new models of a skill to comply with the current task constraints. This framework has been implemented in the humanoid robot HOAP-3, and experimental results show the applicability of the approach.
Affiliation(s)
- Daniel Hernandez Garcia
- Department of Systems Engineering and Automation, Universidad Carlos III de Madrid, Leganes, Madrid, Spain
- Concepcion Monje
- Department of Systems Engineering and Automation, Universidad Carlos III de Madrid, Leganes, Madrid, Spain
- Carlos Balaguer
- Department of Systems Engineering and Automation, Universidad Carlos III de Madrid, Leganes, Madrid, Spain

38
Thompson JJ, Sameen N, Racine TP. Methodological consequences of weak embodied cognition and shared intentionality. New Ideas Psychol 2016. [DOI: 10.1016/j.newideapsych.2016.03.002]

39

40
Kim KJ, Cho SB. Inference of other's internal neural models from active observation. Biosystems 2015; 128:37-47. [PMID: 25617791 DOI: 10.1016/j.biosystems.2015.01.005]
Abstract
Recently, there have been several attempts to replicate theory of mind, which explains how humans infer the mental states of other people from multiple sensory inputs, with artificial systems. One example is a robot that observes the behavior of other artificial systems and infers their internal models, which map sensory inputs to actuator control signals. In this paper, we represent the internal model as an artificial neural network, similar to biological systems. During inference, an observer can use an active incremental learning algorithm to guess an actor's internal neural model, which can significantly reduce the effort needed to guess other agents' internal models. We apply the algorithm to actor-observer robot scenarios with and without prior knowledge of the internal models. To validate our approach, we use a physics-based simulator with virtual robots. A series of experiments reveals that the observer robot can construct an "other's self-model", validating the possibility that a neural-based approach can serve as a platform for learning cognitive functions.
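The actor-observer setup in this entry can be caricatured in a few lines: a hidden policy plays the actor, and the observer incrementally fits its own network to the observed sensor-action pairs. This is an illustrative toy using plain NumPy online gradient descent, not the paper's architecture or its active-learning component; the actor weights are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "actor" policy: maps 2-D sensor readings to 2-D control signals.
W_actor = np.array([[1.5, -0.8],
                    [0.3,  1.1]])
def actor(s):
    return np.tanh(s @ W_actor)

# Observer's model of the actor (the "other's self-model"), fitted
# incrementally from one observed (sensor, action) pair at a time.
W_obs = np.zeros((2, 2))
lr = 0.1
for _ in range(5000):
    s = rng.uniform(-1.0, 1.0, size=2)    # observed actor state
    a = actor(s)                          # observed actor action
    pred = np.tanh(s @ W_obs)
    # Gradient of 0.5 * ||pred - a||^2 with respect to W_obs.
    W_obs -= lr * np.outer(s, (pred - a) * (1.0 - pred ** 2))

# After observation, the learned model predicts the actor on unseen inputs.
s_test = np.array([0.4, -0.6])
error = float(np.max(np.abs(np.tanh(s_test @ W_obs) - actor(s_test))))
```

Because the observer's model class matches the actor's, the incremental updates drive the observed prediction error toward zero, which is the "inference of the other's internal model" the abstract describes in miniature.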
Affiliation(s)
- Kyung-Joong Kim
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Sung-Bae Cho
- Department of Computer Science, Yonsei University, Seoul, South Korea

41

42
Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C. Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction. Int J Soc Robot 2014. [DOI: 10.1007/s12369-014-0267-6]

43
Thompson JJ, Sameen N, Bibok MB, Racine TP. Agnosticism gone awry: Why developmental robotics must commit to an understanding of embodiment and shared intentionality. New Ideas Psychol 2013. [DOI: 10.1016/j.newideapsych.2013.02.002]

44
Pandey AK, Ali M, Alami R. Towards a Task-Aware Proactive Sociable Robot Based on Multi-state Perspective-Taking. Int J Soc Robot 2013. [DOI: 10.1007/s12369-013-0181-3]

45

46
Ishiguro H, Minato T, Yoshikawa Y, Asada M. Humanoid Platforms for Cognitive Developmental Robotics. Int J Hum Robot 2012. [DOI: 10.1142/s0219843611002514]
Abstract
One of the most promising approaches to understanding human cognitive and developmental mechanisms is a synthetic approach using humanoid robots: an approach that seeks to understand human cognitive functions by realizing them in robots. Humans are so complicated that it is difficult to mimic a well-developed human with robotic technologies. It is therefore necessary to understand how humans develop their complicated functions during the developmental process. We may be able to develop infant functions and make them evolve by tracing the human developmental process. This new line of study requires robot platforms that can mimic various aspects of the human developmental process. This paper introduces a series of robot platforms that we have developed for studies using this synthetic approach.
Affiliation(s)
- Hiroshi Ishiguro
- Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka, 560-8531, Japan
- Takashi Minato
- Asada Project, ERATO, Japan Science and Technology Agency, 2-1 Yamada-oka, Suita, Osaka, 565-0871, Japan
- Yuichiro Yoshikawa
- Asada Project, ERATO, Japan Science and Technology Agency, 2-1 Yamada-oka, Suita, Osaka, 565-0871, Japan
- Minoru Asada
- Graduate School of Engineering, Osaka University, 2-1 Yamada-oka, Suita, Osaka, 565-0871, Japan

47
Sumioka H, Hosoda K, Yoshikawa Y, Asada M. Acquisition of joint attention through natural interaction utilizing motion cues. Adv Robot 2012. [DOI: 10.1163/156855307781035637]
Affiliation(s)
- Hidenobu Sumioka
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Koh Hosoda
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan; ERATO, JST, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Minoru Asada
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan; ERATO, JST, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan

48

49

50
Grounding the Interaction: Anchoring Situated Discourse in Everyday Human-Robot Interaction. Int J Soc Robot 2011. [DOI: 10.1007/s12369-011-0123-x]