1.
Gweon H, Fan J, Kim B. Socially intelligent machines that learn from humans and help humans learn. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 2023; 381:20220048. [PMID: 37271177] [DOI: 10.1098/rsta.2022.0048]
Abstract
A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human-machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific studies of its counterpart: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.
Affiliation(s)
- Hyowon Gweon
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Judith Fan
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Department of Psychology, University of California, San Diego, CA 92093, USA
- Been Kim
- Google Research, Mountain View, CA 94043, USA
2.
Trivedi U, Menychtas D, Alqasemi R, Dubey R. Biomimetic Approaches for Human Arm Motion Generation: Literature Review and Future Directions. Sensors (Basel, Switzerland) 2023; 23:3912. [PMID: 37112253] [PMCID: PMC10143908] [DOI: 10.3390/s23083912]
Abstract
In recent years, numerous studies have analyzed how humans subconsciously optimize various performance criteria while performing a particular task, which has led to the development of robots capable of performing tasks with human-like efficiency. The complexity of the human body has led researchers to create frameworks for robot motion planning that recreate those motions in robotic systems using various redundancy resolution methods. This study conducts a thorough analysis of the relevant literature to provide a detailed exploration of the different redundancy resolution methodologies used in motion generation for mimicking human motion. The studies are investigated and categorized according to study methodology and redundancy resolution method. The examination of the literature revealed a strong trend toward formulating the intrinsic strategies that govern human movement through machine learning and artificial intelligence. Subsequently, the paper critically evaluates the existing approaches, highlights their limitations, and identifies potential research areas that hold promise for future investigations.
Affiliation(s)
- Urvish Trivedi
- Department of Mechanical Engineering, University of South Florida, Tampa, FL 33620, USA
- Dimitrios Menychtas
- Department of Physical Education & Sport Science, Democritus University of Thrace, Panepistimioupoli, 69100 Komotini, Greece
- Redwan Alqasemi
- Department of Mechanical Engineering, University of South Florida, Tampa, FL 33620, USA
- Rajiv Dubey
- Department of Mechanical Engineering, University of South Florida, Tampa, FL 33620, USA
3.
Automatic Aesthetics Evaluation of Robotic Dance Poses Based on Hierarchical Processing Network. Computational Intelligence and Neuroscience 2022; 2022:5827097. [PMID: 36156961] [PMCID: PMC9507690] [DOI: 10.1155/2022/5827097]
Abstract
Vision plays an important role in human aesthetic cognition. When creating choreography, human dancers, who often observe their own dance poses in a mirror, understand the aesthetics of those poses and use that understanding to improve their dancing. To develop artificial intelligence, a robot should establish a similar mechanism that imitates this human dance behaviour. Inspired by this, the paper designs a way for a robot to visually perceive its own dance poses and constructs a novel dataset of dance poses based on real NAO robots. On this basis, the paper proposes a hierarchical processing network-based approach to automatic aesthetics evaluation of robotic dance poses. The hierarchical processing network first extracts primary visual features using three parallel CNNs, then uses a synthesis CNN for high-level association and comprehensive processing on the basis of multi-modal feature fusion, and finally makes an automatic aesthetics decision. Notably, the design of this hierarchical processing network is inspired by research findings in neuroaesthetics. Experimental results show that the approach achieves an aesthetics-evaluation accuracy of 82.3%, which is superior to existing methods.
4.
Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI. Neural Netw 2022; 155:95-118. [DOI: 10.1016/j.neunet.2022.08.002]
5.
Solé R, Seoane LF. Evolution of Brains and Computers: The Roads Not Taken. Entropy 2022; 24:665. [PMID: 35626550] [PMCID: PMC9141356] [DOI: 10.3390/e24050665]
Abstract
When computers started to become a dominant part of technology around the 1950s, fundamental questions about reliable designs and robustness were of great relevance. Their development gave rise to the exploration of new questions, such as what made brains reliable (since neurons can die) and how computers could take inspiration from neural systems. In parallel, the first artificial neural networks came to life. Since then, the comparative view between brains and computers has developed in new, sometimes unexpected directions. With the rise of deep learning and the development of connectomics, an evolutionary look at how both hardware and neural complexity have evolved or been designed is required. In this paper, we argue that important similarities have resulted both from convergent evolution (the inevitable outcome of architectural constraints) and from inspiration of hardware and software principles guided by toy pictures of neurobiology. Moreover, dissimilarities and gaps originate from the lack of major innovations that have paved the way to biological computing (including brains) but are completely absent within the artificial domain. As occurs within synthetic biocomputation, we can also ask whether alternative minds can emerge from A.I. designs. Here, we take an evolutionary view of the problem and discuss the remarkable convergences between living and artificial designs and what the preconditions for achieving artificial intelligence are.
Affiliation(s)
- Ricard Solé
- ICREA-Complex Systems Lab, Universitat Pompeu Fabra, Dr Aiguader 88, 08003 Barcelona, Spain
- Institut de Biologia Evolutiva, CSIC-UPF, Pg Maritim de la Barceloneta 37, 08003 Barcelona, Spain
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
- Luís F. Seoane
- Departamento de Biología de Sistemas, Centro Nacional de Biotecnología (CSIC), C/Darwin 3, 28049 Madrid, Spain
- Grupo Interdisciplinar de Sistemas Complejos (GISC), 28049 Madrid, Spain
6.
Bolotta S, Dumas G. Social Neuro AI: Social Interaction as the “Dark Matter” of AI. Frontiers in Computer Science 2022. [DOI: 10.3389/fcomp.2022.846440]
Abstract
This article introduces a three-axis framework indicating how AI can be informed by biological examples of social learning mechanisms. We argue that the complex human cognitive architecture owes a large portion of its expressive power to its ability to engage in social and cultural learning. However, the field of AI has mostly embraced a solipsistic perspective on intelligence. We thus argue that social interactions not only are largely unexplored in this field but also are an essential element of advanced cognitive ability, and therefore constitute metaphorically the “dark matter” of AI. In the first section, we discuss how social learning plays a key role in the development of intelligence, drawing on social and cultural learning theories and empirical findings from social neuroscience. Then, we discuss three lines of research that fall under the umbrella of Social NeuroAI and can contribute to developing socially intelligent embodied agents in complex environments. First, neuroscientific theories of cognitive architecture, such as the global workspace theory and the attention schema theory, can enhance biological plausibility and help us understand how we could bridge individual and social theories of intelligence. Second, intelligence occurs in time as opposed to over time, and this is naturally incorporated by dynamical systems. Third, embodiment has been demonstrated to provide a more sophisticated array of communicative signals. To conclude, we discuss the example of active inference, which offers powerful insights for developing agents that possess biological realism, can self-organize in time, and are socially embodied.
7.
Lomas JD, Lin A, Dikker S, Forster D, Lupetti ML, Huisman G, Habekost J, Beardow C, Pandey P, Ahmad N, Miyapuram K, Mullen T, Cooper P, van der Maden W, Cross ES. Resonance as a Design Strategy for AI and Social Robots. Front Neurorobot 2022; 16:850489. [PMID: 35574227] [PMCID: PMC9097027] [DOI: 10.3389/fnbot.2022.850489]
Abstract
Resonance, a powerful and pervasive phenomenon, appears to play a major role in human interactions. This article investigates the relationship between the physical mechanism of resonance and the human experience of resonance, and considers possibilities for enhancing the experience of resonance within human-robot interactions. We first introduce resonance as a widespread cultural and scientific metaphor. Then, we review the nature of "sympathetic resonance" as a physical mechanism. Following this introduction, the remainder of the article is organized in two parts. In part one, we review the role of resonance (including synchronization and rhythmic entrainment) in human cognition and social interactions. Then, in part two, we review resonance-related phenomena in robotics and artificial intelligence (AI). These two reviews serve as ground for the introduction of a design strategy and combinatorial design space for shaping resonant interactions with robots and AI. We conclude by posing hypotheses and research questions for future empirical studies and discuss a range of ethical and aesthetic issues associated with resonance in human-robot interactions.
Affiliation(s)
- James Derek Lomas
- Department of Human Centered Design, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
- Albert Lin
- Center for Human Frontiers, Qualcomm Institute, University of California, San Diego, San Diego, CA, United States
- Suzanne Dikker
- Department of Psychology, New York University, New York, NY, United States
- Department of Clinical Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Deborah Forster
- Center for Human Frontiers, Qualcomm Institute, University of California, San Diego, San Diego, CA, United States
- Maria Luce Lupetti
- Department of Human Centered Design, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
- Gijs Huisman
- Department of Human Centered Design, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
- Julika Habekost
- The Design Lab, California Institute of Information and Communication Technologies, University of California, San Diego, San Diego, CA, United States
- Caiseal Beardow
- Department of Human Centered Design, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
- Pankaj Pandey
- Centre for Cognitive and Brain Sciences, Indian Institute of Technology, Gandhinagar, India
- Nashra Ahmad
- Centre for Cognitive and Brain Sciences, Indian Institute of Technology, Gandhinagar, India
- Krishna Miyapuram
- Centre for Cognitive and Brain Sciences, Indian Institute of Technology, Gandhinagar, India
- Tim Mullen
- Intheon Labs, San Diego, CA, United States
- Patrick Cooper
- Department of Physics, Duquesne University, Pittsburgh, PA, United States
- Willem van der Maden
- Department of Human Centered Design, Faculty of Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
- Emily S. Cross
- Social Robotics, Institute of Neuroscience and Psychology, School of Computing Science, University of Glasgow, Glasgow, United Kingdom
- SOBA Lab, School of Psychology, Macquarie University, Sydney, NSW, Australia
8.
Identifying Personality Dimensions for Engineering Robot Personalities in Significant Quantities with Small User Groups. Robotics 2022. [DOI: 10.3390/robotics11010028]
Abstract
Future service robots mass-produced for practical applications may benefit from having personalities. To engineer robot personalities in significant quantities, we first need to identify the personality dimensions on which traits can be effectively optimised by minimising the distance between an engineering target and the corresponding robot under construction, since not all personality dimensions are applicable or equally prominent. Whether optimisation is possible on a given dimension depends on how specific users perceive the personality of a type of robot, and in particular on whether they can provide effective feedback to guide the optimisation of traits on that dimension. The relevant dimensions may vary from user group to user group, since not all people consider a type of trait relevant to a type of robot, which our results corroborate. We therefore propose a test procedure as an engineering tool that identifies, with the help of a user group, the personality dimensions for engineering personalities of a type of robot with a known typical usage. The procedure applies to robots that can imitate human behaviour and to small user groups of at least eight people. We confirmed its effectiveness in limited-scope tests.
9.
Artificial Subjectivity: Personal Semantic Memory Model for Cognitive Agents. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12041903]
Abstract
Personal semantic memory is a way of inducing subjectivity in intelligent agents. In humans, personal semantic memory holds knowledge related to personal beliefs, self-knowledge, preferences, and perspectives. Modeling this cognitive feature in intelligent agents can help them in perception, learning, reasoning, and judgment. This paper presents a methodology for developing personal semantic memory in response to external information. The main contribution of the work is to propose and implement a computational version of personal semantic memory. The proposed model has modules for perception, learning, sentiment analysis, knowledge representation, and personal semantic construction, which work in synergy to formulate, learn, and store personal semantic knowledge. Personal semantics are added to the existing body of knowledge both qualitatively and quantitatively. We performed multiple experiments in which the agent held conversations with humans. Results show an increase in personal semantic knowledge in the agent’s memory during conversations, with an F1 score of 0.86, and these personal semantics evolved qualitatively and quantitatively over the course of the experiments. The results demonstrate that agents with the given architecture possess personal semantics that can help them produce a form of subjectivity in the future.
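The abstract names the model's modules (perception, sentiment analysis, knowledge representation, personal semantic construction) but not their implementation. A minimal sketch of the core idea, accumulating sentiment-tagged personal semantics from conversational input, might look like the following; the class and method names are hypothetical, not the paper's API:

```python
from collections import defaultdict

class PersonalSemanticMemory:
    """Toy store of an agent's sentiment-tagged personal semantics."""

    def __init__(self):
        # topic -> list of (statement, sentiment) pairs
        self.semantics = defaultdict(list)

    def learn(self, topic, statement, sentiment):
        """Add one unit of personal semantic knowledge (sentiment in [-1, 1])."""
        self.semantics[topic].append((statement, sentiment))

    def attitude(self, topic):
        """Aggregate sentiment toward a topic; 0.0 if the topic is unknown."""
        entries = self.semantics[topic]
        if not entries:
            return 0.0
        return sum(s for _, s in entries) / len(entries)

    def size(self):
        """Quantitative growth of the knowledge base over time."""
        return sum(len(v) for v in self.semantics.values())

memory = PersonalSemanticMemory()
memory.learn("coffee", "I enjoy morning coffee", 0.8)
memory.learn("coffee", "Too much coffee makes me jittery", -0.4)
memory.learn("rain", "Rainy days are calming", 0.5)
```

Here `attitude` stands in for the qualitative side (the agent's current stance) and `size` for the quantitative growth the abstract reports; a real implementation would derive the sentiment value from a sentiment-analysis module rather than take it as an argument.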
10.
Human-robot collaboration: A multilevel and integrated leadership framework. The Leadership Quarterly 2022. [DOI: 10.1016/j.leaqua.2021.101594]
11.
Funkhouser E. Evolutionary psychology, learning, and belief signaling: design for natural and artificial systems. Synthese 2021; 199:14097-14119. [PMID: 34565916] [PMCID: PMC8449699] [DOI: 10.1007/s11229-021-03412-0]
Abstract
Recent work in the cognitive sciences has argued that beliefs sometimes acquire signaling functions in virtue of their ability to reveal information that manipulates "mindreaders." This paper sketches some of the evolutionary and design considerations that could take agents from solipsistic goal pursuit to beliefs that serve as social signals. Such beliefs will be governed by norms besides just the traditional norms of epistemology (e.g., truth and rational support). As agents become better at detecting the agency of others, either through evolutionary history or individual learning, the candidate pool for signaling expands. This logic holds for natural and artificial agents that find themselves in recurring social situations that reward the sharing of one's thoughts.
Affiliation(s)
- Eric Funkhouser
- Department of Philosophy, University of Arkansas, Fayetteville, AR, USA
12.
Shourmasti ES, Colomo-Palacios R, Holone H, Demi S. User Experience in Social Robots. Sensors (Basel, Switzerland) 2021; 21:5052. [PMID: 34372289] [PMCID: PMC8348916] [DOI: 10.3390/s21155052]
Abstract
Social robots are increasingly penetrating our daily lives. They are used in various domains, such as healthcare, education, business, industry, and culture. However, introducing this technology into conventional environments is not trivial. For users to accept social robots, a positive user experience (UX) is vital, and it should be treated as a critical part of the robots' development process; this can promote wider use of social robots and strengthen their diffusion in society. The goal of this study is to summarize the extant literature on user experience in social robots and to identify the challenges and benefits of UX evaluation in social robots. To achieve this goal, the authors carried out a systematic literature review following PRISMA guidelines. Our findings revealed that the most common methods for evaluating UX in social robots are questionnaires and interviews. UX evaluations were found to be beneficial in providing early feedback and consequently in handling errors at an early stage. However, despite the importance of UX in social robots, robot developers often neglect to set UX goals due to a lack of knowledge or time. This study emphasizes the need for robot developers to acquire the theoretical and practical knowledge required to perform a successful UX evaluation.
Affiliation(s)
- Ricardo Colomo-Palacios
- Department of Computer Science, Østfold University College, 1783 Halden, Norway
13.
Multiple Visual Feature Integration Based Automatic Aesthetics Evaluation of Robotic Dance Motions. Information 2021. [DOI: 10.3390/info12030095]
Abstract
Imitation of human behaviour is one of the effective ways to develop artificial intelligence. Human dancers, standing in front of a mirror, routinely perform autonomous aesthetics evaluation of their own dance motions as observed in the mirror. Meanwhile, in the visual aesthetic cognition of the human brain, space and shape are two important visual elements perceived from motion. Inspired by these facts, this paper proposes a novel mechanism for automatic aesthetics evaluation of robotic dance motions based on multiple visual feature integration. In the mechanism, a video of a robotic dance motion is first converted into several kinds of motion history images; then a spatial feature (ripple space coding) and shape features (Zernike moments and curvature-based Fourier descriptors) are extracted from the optimized motion history images. Based on feature integration, a homogeneous ensemble classifier using three different random forests is deployed to build a machine aesthetics model, aiming to give the machine a human-like aesthetic ability. The feasibility of the proposed mechanism has been verified by simulation experiments, and the results show that the ensemble classifier achieves an aesthetics-evaluation accuracy of 75%, which is superior to existing approaches.
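The decision step of such a homogeneous ensemble, with per-view classifiers fused by hard voting, can be sketched in a few lines. The stand-in threshold classifiers below are illustrative placeholders, not the paper's trained random forests, and all function names are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier labels by hard voting; ties go to the first-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# Stand-in per-view classifiers mapping one feature view to a label. In the
# paper's setup these would be three random forests trained on the
# ripple-space, Zernike-moment, and Fourier-descriptor features respectively.
def classify_space(feat):   return "beautiful" if feat > 0.5 else "not"
def classify_zernike(feat): return "beautiful" if feat > 0.3 else "not"
def classify_fourier(feat): return "beautiful" if feat > 0.7 else "not"

def evaluate_motion(space_feat, zernike_feat, fourier_feat):
    """Aesthetics decision: combine the three views by majority vote."""
    votes = [classify_space(space_feat),
             classify_zernike(zernike_feat),
             classify_fourier(fourier_feat)]
    return majority_vote(votes)

label = evaluate_motion(0.6, 0.4, 0.2)  # two of three views vote "beautiful"
```

Hard voting keeps the ensemble homogeneous in interface even though each member sees a different feature view, which is the integration idea the abstract describes.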
14.
Ellery A. Tutorial Review of Bio-Inspired Approaches to Robotic Manipulation for Space Debris Salvage. Biomimetics (Basel) 2020; 5:E19. [PMID: 32408615] [PMCID: PMC7345424] [DOI: 10.3390/biomimetics5020019]
Abstract
We present a comprehensive tutorial review that explores the application of bio-inspired approaches to robot control systems for grappling and manipulating a wide range of space debris targets. Current robot manipulator control systems exploit limited techniques which can be supplemented by additional bio-inspired methods to provide a robust suite of robot manipulation technologies. In doing so, we review bio-inspired control methods because this will be the key to enabling such capabilities. In particular, force feedback control may be supplemented with predictive forward models and software emulation of viscoelastic preflexive joint behaviour. This models human manipulation capabilities as implemented by the cerebellum and muscles/joints respectively. In effect, we are proposing a three-level control strategy based on biomimetic forward models for predictive estimation, traditional feedback control and biomimetic muscle-like preflexes. We place emphasis on bio-inspired forward modelling suggesting that all roads lead to this solution for robust and adaptive manipulator control. This promises robust and adaptive manipulation for complex tasks in salvaging space debris.
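The three-level strategy the review proposes (a predictive forward model, traditional feedback control, and muscle-like preflexes) can be illustrated with a toy one-degree-of-freedom joint. The gains, the unit inertia, and the Euler integration below are illustrative assumptions, not values from the paper:

```python
def simulate(target, steps=200, dt=0.01):
    """Toy 1-DOF joint (unit inertia) under a three-level control scheme:
    a forward model predicting the next position (delay compensation),
    PD feedback acting on the predicted error, and a passive viscoelastic
    'preflex' spring-damper about the commanded posture.
    All gains are illustrative assumptions."""
    pos, vel = 0.0, 0.0
    kp, kd = 40.0, 8.0        # feedback gains
    k_pre, c_pre = 5.0, 1.4   # preflex stiffness and damping
    for _ in range(steps):
        predicted_pos = pos + vel * dt                     # forward-model level
        u_fb = kp * (target - predicted_pos) - kd * vel    # feedback level
        u_pre = k_pre * (target - pos) - c_pre * vel       # preflex level
        acc = u_fb + u_pre
        vel += acc * dt                                    # semi-implicit Euler
        pos += vel * dt
    return pos

final_pos = simulate(1.0)  # settles near the commanded target
```

The preflex term needs no sensing and responds instantly, while the forward model lets the feedback loop act on where the joint is about to be rather than where it was, which is the delay-compensation role the review assigns to cerebellar-style prediction.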
Affiliation(s)
- Alex Ellery
- Department of Mechanical & Aerospace Engineering, Carleton University, 1125 Colonel By Drive, Ottawa ON K1S 5B6, Canada
15.
Xu X, Hanganu-Opatz IL, Bieler M. Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain. Front Neurorobot 2020; 14:7. [PMID: 32116637] [PMCID: PMC7034303] [DOI: 10.3389/fnbot.2020.00007]
Abstract
The emergence of cross-modal learning capabilities requires the interaction of neural areas accounting for sensory and cognitive processing. Convergence of multiple sensory inputs is observed in low-level sensory cortices including primary somatosensory (S1), visual (V1), and auditory cortex (A1), as well as in high-level areas such as prefrontal cortex (PFC). Evidence shows that local neural activity and functional connectivity between sensory cortices participate in cross-modal processing. However, little is known about the functional interplay between neural areas underlying sensory and cognitive processing required for cross-modal learning capabilities across life. Here we review our current knowledge on the interdependence of low- and high-level cortices for the emergence of cross-modal processing in rodents. First, we summarize the mechanisms underlying the integration of multiple senses and how cross-modal processing in primary sensory cortices might be modified by top-down modulation of the PFC. Second, we examine the critical factors and developmental mechanisms that account for the interaction between neuronal networks involved in sensory and cognitive processing. Finally, we discuss the applicability and relevance of cross-modal processing for brain-inspired intelligent robotics. An in-depth understanding of the factors and mechanisms controlling cross-modal processing might inspire the refinement of robotic systems by better mimicking neural computations.
Affiliation(s)
- Xiaxia Xu
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Malte Bieler
- Laboratory for Neural Computation, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
16.
Creating a Computable Cognitive Model of Visual Aesthetics for Automatic Aesthetics Evaluation of Robotic Dance Poses. Symmetry (Basel) 2019. [DOI: 10.3390/sym12010023]
Abstract
Inspired by human dancers who can evaluate the aesthetics of their own dance poses through mirror observation, this paper presents a corresponding mechanism for robots to improve their cognitive and autonomous abilities. Essentially, the proposed mechanism is a brain-like intelligent system that is symmetrical to the visual cognitive nervous system of the human brain. Specifically, a computable cognitive model of visual aesthetics is developed from two important aesthetic cognitive neural models of the human brain and then applied to the automatic aesthetics evaluation of robotic dance poses. Three kinds of features (color, shape, and orientation) are extracted in a manner similar to the visual feature elements extracted by the human brain. After applying machine learning methods to different feature combinations, machine aesthetics models are built for the automatic evaluation of robotic dance poses. The simulation results show that the approach processes visual information effectively through cognitive computation and achieves good performance in automatic aesthetics evaluation.
17.
Itoh H, Ihara N, Fukumoto H, Wakuya H. A motion imitation system for humanoid robots with inference-based optimization and an auditory user interface. Artificial Life and Robotics 2019. [DOI: 10.1007/s10015-019-00575-5]
18.
Biologically-Inspired Computational Neural Mechanism for Human Action/Activity Recognition: A Review. Electronics 2019. [DOI: 10.3390/electronics8101169]
Abstract
Theoretical neuroscience provides valuable information on the mechanisms for recognizing biological movement in the mammalian visual system. The topic spans many fields of research, including psychology, neurophysiology, neuropsychology, computer vision, and artificial intelligence (AI), and work in these areas has produced a large body of findings and plausible computational models. Here, a review of this subject is presented. The paper describes different perspectives on the task, including action perception, computational and knowledge-based modeling, and psychological and neuroscience approaches.
19.
Imitation of Human Motion by Low Degree-of-Freedom Simulated Robots and Human Preference for Mappings Driven by Spinal, Arm, and Leg Activity. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00595-y]
20.
Wortham RH, Gaudl SE, Bryson JJ. Instinct: A biologically inspired reactive planner for intelligent embedded systems. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2018.10.016]
|
21
|
Celemin C, Maeda G, Ruiz-del-Solar J, Peters J, Kober J. Reinforcement learning of motor skills using Policy Search and human corrective advice. Int J Rob Res 2019. [DOI: 10.1177/0278364919871998] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Robot learning problems are limited by physical constraints, which make learning successful policies for complex motor skills on real systems infeasible. Some reinforcement learning methods, like Policy Search, offer stable convergence toward locally optimal solutions, whereas interactive machine learning or learning-from-demonstration methods allow fast transfer of human knowledge to the agents. However, most methods require expert demonstrations. In this work, we propose the use of human corrective advice in the actions domain for learning motor trajectories. Additionally, we combine this human feedback with reward functions in a Policy Search learning scheme. The use of both sources of information speeds up the learning process, since the intuitive knowledge of the human teacher can be easily transferred to the agent, while the Policy Search method with the cost/reward function takes over supervising the process and reducing the influence of occasional wrong human corrections. This interactive approach has been validated for learning movement primitives with simulated arms with several degrees of freedom in reaching via-point movements, and also using real robots in such tasks as "writing characters" and the ball-in-a-cup game. Compared with standard reinforcement learning without human advice, the results show that the proposed method not only converges to higher rewards when learning movement primitives, but also that the learning is sped up by a factor of 4–40 times, depending on the task.
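The combination the abstract describes can be sketched in a few lines. The snippet below is a toy illustration, not the authors' implementation: a scalar policy parameter is improved by reward-based hill climbing (a crude stand-in for Policy Search), while a simulated teacher's corrective advice (+1/-1) nudges the parameter directly. The task, reward function, learning rates, and teacher model are all hypothetical.

```python
import random

def learn_with_advice(reward, advise, w0=0.0, episodes=200,
                      sigma=0.5, lr_h=0.2, seed=0):
    """Hill-climbing policy search on a scalar policy parameter,
    combined with a teacher's corrective advice in the action domain."""
    rng = random.Random(seed)
    w, best_r = w0, reward(w0)
    for _ in range(episodes):
        # Policy-search step: Gaussian perturbation, kept only if reward improves.
        cand = w + rng.gauss(0.0, sigma)
        if reward(cand) > best_r:
            w, best_r = cand, reward(cand)
        # Corrective-advice step: the teacher nudges the parameter directly.
        h = advise(w)
        if h:
            w += lr_h * h
            best_r = reward(w)
    return w

# Hypothetical task: the (unknown) optimal parameter is w* = 3.0.
target = 3.0
reward = lambda w: -(w - target) ** 2
# Simulated teacher: points toward the target, silent when close enough.
advise = lambda w: 0 if abs(w - target) < 0.05 else (1 if w < target else -1)

w = learn_with_advice(reward, advise)
```

Even in this toy setting the advice signal shortcuts the random exploration, which is the effect the paper quantifies as a 4–40x speed-up.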
Affiliation(s)
- Carlos Celemin
- Department of Electrical Engineering & Advanced Mining Technology Center, University of Chile, Chile
- Cognitive Robotics Department, Delft University of Technology, Netherlands
- Guilherme Maeda
- Preferred Networks, Inc., Japan
- Department of Brain Robot Interface, ATR Computational Neuroscience Lab, Japan
- Javier Ruiz-del-Solar
- Department of Electrical Engineering & Advanced Mining Technology Center, University of Chile, Chile
- Jan Peters
- Intelligent Autonomous Systems lab, Technische Universität Darmstadt, Germany
- Jens Kober
- Cognitive Robotics Department, Delft University of Technology, Netherlands
|
22
|
Barresi J. On building a person: benchmarks for robotic personhood. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1653386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- John Barresi
- Department of Psychology & Neuroscience, Dalhousie University, Halifax, Canada
|
23
|
|
24
|
LaViers A. Counts of mechanical, external configurations compared to computational, internal configurations in natural and artificial systems. PLoS One 2019; 14:e0215671. [PMID: 31067278 PMCID: PMC6506145 DOI: 10.1371/journal.pone.0215671] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Accepted: 04/05/2019] [Indexed: 12/03/2022] Open
Abstract
Animal movement encodes information that is meaningfully interpreted by natural counterparts. This is a behavior that roboticists are trying to replicate in artificial systems but that is not well understood even in natural systems. This paper presents a count of the cardinality of a discretized posture space (an aspect of expressivity) of articulated platforms. The paper uses an information-theoretic measure, Shannon entropy, to create observations analogous to Moore's Law, providing a measure that complements traditional measures of the capacity of robots. This analysis, applied to a variety of natural and artificial systems, shows trends in increasing capacity in both internal and external complexity for natural systems while artificial, robotic systems have increased significantly in the capacity of computational (internal) states but remained more or less constant in mechanical (external) state capacity. The quantitative measure proposed in this paper provides an additional lens through which to compare natural and artificial systems.
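The core quantity here, the Shannon entropy of a uniform distribution over a discretized posture space, reduces to a one-line formula: with J joints each discretized into k levels there are k^J external configurations, i.e. J·log2(k) bits. A minimal sketch with hypothetical joint counts and discretization levels:

```python
from math import log2

def posture_entropy_bits(joints, levels_per_joint):
    """Shannon entropy (bits) of a uniform distribution over the
    discretized posture space: log2(levels ** joints) = joints * log2(levels)."""
    return joints * log2(levels_per_joint)

# Hypothetical platforms: a 7-DoF arm vs. a 30-DoF humanoid,
# each joint discretized into 10 distinguishable positions.
arm = posture_entropy_bits(7, 10)
humanoid = posture_entropy_bits(30, 10)
```

The gap between these few tens of bits and the billions of bits of internal (computational) state in the same robots is the asymmetry the paper documents.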
Affiliation(s)
- Amy LaViers
- Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States of America
|
25
|
Giger J, Piçarra N, Alves‐Oliveira P, Oliveira R, Arriaga P. Humanization of robots: Is it really such a good idea? HUMAN BEHAVIOR AND EMERGING TECHNOLOGIES 2019. [DOI: 10.1002/hbe2.147] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Jean‐Christophe Giger
- Department of Psychology and Educational Sciences, University of Algarve, Portugal
- Centre for Research in Psychology—CIP‐UAL, Lisbon, Portugal
- Raquel Oliveira
- ISCTE‐Instituto Universitário de Lisboa, CIS‐IUL, Lisbon, Portugal
- INESC‐ID, Lisbon, Portugal
- Patrícia Arriaga
- ISCTE‐Instituto Universitário de Lisboa, CIS‐IUL, Lisbon, Portugal
|
26
|
Siposova B, Carpenter M. A new look at joint attention and common knowledge. Cognition 2019; 189:260-274. [PMID: 31015079 DOI: 10.1016/j.cognition.2019.03.019] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Revised: 02/12/2019] [Accepted: 03/26/2019] [Indexed: 10/27/2022]
Abstract
Everyone agrees that joint attention is a key feature of human social cognition. Yet, despite over 40 years of work and hundreds of publications on this topic, there is still surprisingly little agreement on what exactly joint attention is, and how the jointness in it is achieved. Part of the problem, we propose, is that joint attention is not a single process, but rather it includes a cluster of different cognitive skills and processes, and different researchers focus on different aspects of it. A similar problem applies to common knowledge. Here we present a new approach: We outline a typology of social attention levels which are currently all referred to in the literature as joint attention (from monitoring to common, mutual, and shared attention), along with corresponding levels of common knowledge. We consider cognitive, behavioral, and phenomenological aspects of the different levels as well as their different functions, and a key distinction we make in all of this is second-personal vs. third-personal relations. While we focus mainly on joint attention and common knowledge, we also briefly discuss how these levels might apply to other 'joint' mental states such as joint goals.
Affiliation(s)
- Barbora Siposova
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Scotland, UK; Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Malinda Carpenter
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Scotland, UK; Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
|
27
|
Towards life-long adaptive agents: using metareasoning for combining knowledge-based planning with situated learning. KNOWL ENG REV 2018. [DOI: 10.1017/s0269888918000279] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
We consider task planning for long-living intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world due to the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses meta-reasoning to control hierarchical task planning and situated learning, monitor expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the missing planning knowledge relevant to the new objects. We use occupancy grids as a low-level representation for the high-level expectations to capture changes in the physical world due to the additional objects, and provide a similarity method for detecting discrepancies between the expectations and the observations at run-time; the meta-reasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future re-use. We describe our experiments in the Minecraft and Gazebo microworlds to demonstrate the efficacy of the architecture and the technique for learning. We test our approach against an ablated reinforcement learning (RL) version, and our results indicate this form of expectation enhances the learning curve for RL while being more generic than propositional representations.
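The expectation-monitoring step can be illustrated with a small sketch (an assumption-laden toy, not the paper's actual similarity method): expected and observed occupancy grids are compared cell by cell, and a similarity below a chosen threshold flags a discrepancy for the meta-reasoner to turn into a learning goal. The grids and threshold below are invented.

```python
def grid_similarity(expected, observed):
    """Fraction of matching cells between two equally sized occupancy grids
    (0 = free, 1 = occupied)."""
    pairs = [(e, o) for row_e, row_o in zip(expected, observed)
             for e, o in zip(row_e, row_o)]
    return sum(e == o for e, o in pairs) / len(pairs)

def detect_discrepancy(expected, observed, threshold=0.9):
    """Meta-reasoner check: similarity below the threshold signals an
    expectation violation, i.e. a candidate goal for the situated learner."""
    return grid_similarity(expected, observed) < threshold

# Hypothetical 3x3 grids: a new object has appeared in the centre cell.
expected = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
observed = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
flagged = detect_discrepancy(expected, observed)
```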
|
28
|
Takano W, Takahashi T, Nakamura Y. Sequential Monte Carlo controller that integrates physical consistency and motion knowledge. Auton Robots 2018. [DOI: 10.1007/s10514-018-9815-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
29
|
Mutlu B, Duff M, Turkstra L. Social-cue perception and mentalizing ability following traumatic brain injury: A human-robot interaction study. Brain Inj 2018; 33:23-31. [PMID: 30336070 DOI: 10.1080/02699052.2018.1531305] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
PRIMARY OBJECTIVE: Research studies and clinical observations of individuals with traumatic brain injury (TBI) indicate marked deficits in mentalizing: perceiving social information and integrating it into judgements about the affective and mental states of others. The current study investigates social-cognitive mechanisms that underlie mentalizing ability to advance our understanding of social consequences of TBI and inform the development of more effective clinical interventions.
RESEARCH DESIGN: The study followed a mixed-design experiment, manipulating the presence of a mentalizing gaze cue across trials and participant population (TBI vs. healthy comparisons).
METHODS AND PROCEDURES: Participants, 153 adults, 74 with moderate-severe TBI and 79 demographically matched healthy comparison peers, were asked to judge a humanoid robot's mental state based on precisely controlled gaze cues presented by the robot and apply those judgements to respond accurately on the experimental task.
MAIN OUTCOMES AND RESULTS: Results showed that, contrary to our hypothesis, the social cues improved task performance in the TBI group but not the healthy comparison group.
CONCLUSIONS: Results provide evidence that, in specific contexts, individuals with TBI can perceive, correctly recognize, and integrate dynamic gaze cues and motivate further research to understand why this ability may not translate to day-to-day social interactions.
Affiliation(s)
- Bilge Mutlu
- Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, USA
- Melissa Duff
- Department of Hearing and Speech Science, Vanderbilt University Medical Center, Nashville, TN, USA
- Lyn Turkstra
- School of Rehabilitation Science, McMaster University, Hamilton, Ontario, Canada
|
30
|
Park JC, Kim DS, Nagai Y. Learning for Goal-Directed Actions Using RNNPB: Developmental Change of “What to Imitate”. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2679765] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
31
|
Sandini G, Mohan V, Sciutti A, Morasso P. Social Cognition for Human-Robot Symbiosis-Challenges and Building Blocks. Front Neurorobot 2018; 12:34. [PMID: 30050425 PMCID: PMC6051162 DOI: 10.3389/fnbot.2018.00034] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2018] [Accepted: 06/11/2018] [Indexed: 11/22/2022] Open
Abstract
The next generation of robot companions or robot working partners will need to satisfy social requirements somehow similar to the famous laws of robotics envisaged by Isaac Asimov some time ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a "positronic" replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to collaborative robots of the next decades: (1) an animated body-schema; (2) an imitation machinery; (3) a motor intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally un-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the "services" of the animated body schema which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system and chunks of memory fragments may compete to participate in novel cooperative actions. And so on and so forth.
At the heart of the system is lifelong training and learning but, different from the conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots but we believe it is a useful starting point for building a computational framework.
Affiliation(s)
- Giulio Sandini
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Vishwanathan Mohan
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Alessandra Sciutti
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- Pietro Morasso
- Research Unit of Robotics, Brain, and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
|
32
|
Abstract
Learning from demonstration (LfD) has been used to help robots to implement manipulation tasks autonomously, in particular, to learn manipulation behaviors from observing the motion executed by human demonstrators. This paper reviews recent research and development in the field of LfD. The main focus is placed on how to demonstrate the example behaviors to the robot in assembly operations, and how to extract the manipulation features for robot learning and generating imitative behaviors. Diverse metrics are analyzed to evaluate the performance of robot imitation learning. Specifically, the application of LfD in robotic assembly is a focal point in this paper.
|
33
|
|
34
|
|
35
|
Social-motor experience and perception-action learning bring efficiency to machines. Behav Brain Sci 2017; 40:e273. [DOI: 10.1017/s0140525x1700022x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Lake et al. proposed a way to build machines that learn as fast as people do. This can be possible only if machines follow the human processes: the perception-action loop. People perceive and act to understand new objects or to promote specific behavior to their partners. In return, the object/person provides information that induces another reaction, and so on.
|
36
|
Gomez C, Hernandez AC, Crespo J, Barber R. A topological navigation system for indoor environments based on perception events. INT J ADV ROBOT SYST 2016. [DOI: 10.1177/1729881416678134] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events that imitates human navigation using sensorimotor abilities and sensorial events is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The system proposed can be considered a perceptive navigation system as the navigation process is based on perception and recognition of natural and artificial landmarks, among others. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture enhances the integration of new elements due to its modularity and the decoupling between modules. Finally, experiments have been carried out in several mobile robots, and their results show the feasibility of the navigation system proposed and the effectiveness of the sensorial data integration managed as events.
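As a rough sketch of the topological idea (the place names and map below are invented, and the actual system integrates concurrent perception events rather than operating on a bare graph), route planning over a topological map of places and sensorimotor transitions can be as simple as a breadth-first search:

```python
from collections import deque

def plan_route(graph, start, goal):
    """BFS over a topological map: nodes are places, edges are
    sensorimotor transitions the robot knows how to execute."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

# Hypothetical indoor map: a corridor connecting a lab, a door, and an office.
topo_map = {"lab": ["corridor"], "corridor": ["lab", "door"],
            "door": ["corridor", "office"], "office": ["door"]}
route = plan_route(topo_map, "lab", "office")
```

In the paper, progress along such a route is driven by recognized perception events (landmarks detected at each place) rather than metric localization.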
Affiliation(s)
- Clara Gomez
- Systems Engineering and Automation Department, Universidad Carlos III de Madrid, Leganés, Spain
- Jonathan Crespo
- Systems Engineering and Automation Department, Universidad Carlos III de Madrid, Leganés, Spain
- Ramon Barber
- Systems Engineering and Automation Department, Universidad Carlos III de Madrid, Leganés, Spain
|
37
|
Oudeyer PY. What do we learn about development from baby robots? WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2016; 8. [PMID: 27906505 DOI: 10.1002/wcs.1395] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2015] [Revised: 03/25/2016] [Accepted: 04/06/2016] [Indexed: 12/21/2022]
Abstract
Understanding infant development is one of the great scientific challenges of contemporary science. In addressing this challenge, robots have proven useful as they allow experimenters to model the developing brain and body and understand the processes by which new patterns emerge in sensorimotor, cognitive, and social domains. Robotics also complements traditional experimental methods in psychology and neuroscience, where only a few variables can be studied at the same time. Moreover, work with robots has enabled researchers to systematically explore the role of the body in shaping the development of skill. All told, this work has shed new light on development as a complex dynamical system.
|
38
|
Haladjian HH, Montemayor C. Artificial consciousness and the consciousness-attention dissociation. Conscious Cogn 2016; 45:210-225. [PMID: 27656787 DOI: 10.1016/j.concog.2016.08.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2016] [Accepted: 08/12/2016] [Indexed: 01/02/2023]
Abstract
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Affiliation(s)
- Harry Haroutioun Haladjian
- Laboratoire Psychologie de la Perception, CNRS (UMR 8242), Université Paris Descartes, Centre Biomédical des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, France
- Carlos Montemayor
- San Francisco State University, Philosophy Department, 1600 Holloway Avenue, San Francisco, CA 94132, USA
|
39
|
Takano W, Kusajima I, Nakamura Y. Generating action descriptions from statistically integrated representations of human motions and sentences. Neural Netw 2016; 80:1-8. [PMID: 27138360 DOI: 10.1016/j.neunet.2016.03.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Revised: 11/21/2015] [Accepted: 03/03/2016] [Indexed: 10/22/2022]
Abstract
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to the natural language such that the robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with that of natural language. This framework consists of two models; the first model statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integration of these two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in appropriate order using the second model. This allows making sentences that are the most likely to be generated from the motion. The proposed framework was tested on human full body motion measured by an optical motion capture system. In this, descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated.
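A greatly simplified sketch of the two-model pipeline follows (the relevance scores and bigram table are invented; in the paper both models are learned statistically from motion-capture data and attached sentences): pick the words most relevant to an observed motion, then order them greedily with a bigram language model.

```python
def describe_motion(word_scores, bigram, k=3, start="<s>"):
    """Greedy sketch of the two-model idea: keep the k words most relevant
    to the observed motion, then order them with a bigram language model."""
    words = sorted(word_scores, key=word_scores.get, reverse=True)[:k]
    sentence, prev, remaining = [], start, set(words)
    while remaining:
        # Pick the remaining word most likely to follow the previous one.
        nxt = max(remaining, key=lambda w: bigram.get((prev, w), 0.0))
        sentence.append(nxt)
        remaining.remove(nxt)
        prev = nxt
    return " ".join(sentence)

# Invented relevance scores for a "waving" motion and a tiny bigram table.
scores = {"person": 0.9, "waves": 0.8, "hand": 0.7, "kicks": 0.1}
bigrams = {("<s>", "person"): 0.9, ("person", "waves"): 0.8,
           ("waves", "hand"): 0.6}
sentence = describe_motion(scores, bigrams)
```

A full system would search over word orderings jointly rather than greedily, but the division of labor (motion-to-word relevance model plus word-order model) is the same.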
Affiliation(s)
- Wataru Takano
- Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyoku, Tokyo, 113-8656, Japan
- Ikuo Kusajima
- Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyoku, Tokyo, 113-8656, Japan
- Yoshihiko Nakamura
- Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyoku, Tokyo, 113-8656, Japan
|
40
|
Incremental statistical learning for integrating motion primitives and language in humanoid robots. Auton Robots 2016. [DOI: 10.1007/s10514-015-9486-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
41
|
|
42
|
Haghighi H, Abdollahi F, Gharibzadeh S. Brain-inspired self-organizing modular structure to control human-like movements based on primitive motion identification. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.09.017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
43
|
Chung MJY, Friesen AL, Fox D, Meltzoff AN, Rao RPN. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning. PLoS One 2015; 10:e0141965. [PMID: 26536366 PMCID: PMC4633237 DOI: 10.1371/journal.pone.0141965] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2015] [Accepted: 10/15/2015] [Indexed: 11/18/2022] Open
Abstract
A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
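The goal-inference step follows Bayes' rule: P(goal | action) ∝ P(action | goal) · P(goal). Below is a minimal sketch with a hypothetical two-goal tabletop scenario; in the paper the action likelihoods come from probabilistic models the robot learned through its own experience, whereas here they are invented numbers.

```python
def infer_goal(action, likelihood, prior):
    """Posterior over goals given an observed action:
    P(goal | action) ∝ P(action | goal) * P(goal)."""
    unnorm = {g: likelihood[g].get(action, 0.0) * prior[g] for g in prior}
    z = sum(unnorm.values()) or 1.0  # avoid division by zero
    return {g: p / z for g, p in unnorm.items()}

# Hypothetical tabletop task: the human is either stacking or sorting blocks.
likelihood = {"stack": {"pick_red": 0.7, "pick_blue": 0.3},
              "sort":  {"pick_red": 0.2, "pick_blue": 0.8}}
prior = {"stack": 0.5, "sort": 0.5}
posterior = infer_goal("pick_red", likelihood, prior)
```

The robot can then imitate the inferred goal with its own actuators, or, when the posterior is too flat to act on, ask the human for help, as the abstract describes.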
Affiliation(s)
- Michael Jae-Yoon Chung
- Department of Computer Science & Engineering, University of Washington, Seattle, WA, United States of America
- Abram L. Friesen
- Department of Computer Science & Engineering, University of Washington, Seattle, WA, United States of America
- Dieter Fox
- Department of Computer Science & Engineering, University of Washington, Seattle, WA, United States of America
- Andrew N. Meltzoff
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States of America
- Rajesh P. N. Rao
- Department of Computer Science & Engineering, University of Washington, Seattle, WA, United States of America
|
44
|
Deniša M, Ude A. Synthesis of New Dynamic Movement Primitives Through Search in a Hierarchical Database of Example Movements. INT J ADV ROBOT SYST 2015. [DOI: 10.5772/61036] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
This paper presents a novel approach to discovering motor primitives in a hierarchical database of example trajectories. An initial set of example trajectories is obtained by human demonstration. The trajectories are clustered and organized in a binary tree-like hierarchical structure, from which transition graphs at different levels of granularity are constructed. A novel procedure for searching in this hierarchical structure is presented. It can exploit the interdependencies between movements and can discover new series of partial paths. From these partial paths, complete new movements are generated by encoding them as dynamic movement primitives. In this way, the number of example trajectories that must be acquired with the assistance of a human teacher can be reduced. By combining the results of the hierarchical search with statistical generalization techniques, a complete representation of new, not directly demonstrated, movement primitives can be generated.
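The movement encoding used here, dynamic movement primitives, can be sketched in a few lines. This is a generic textbook-style DMP integrator with standard gain choices, not the paper's learned primitives or its hierarchical search; with a zero forcing term the transformation system simply converges to the goal.

```python
def dmp_rollout(y0, goal, forcing, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
    """Minimal discrete dynamic movement primitive: a critically damped
    spring-damper pulled toward the goal, modulated by a forcing term f(x)
    that fades with the canonical phase variable x."""
    y, dy, x, traj = y0, 0.0, 1.0, [y0]
    while x > 1e-3:
        ddy = alpha * (beta * (goal - y) - dy) + forcing(x) * (goal - y0)
        dy += ddy * dt / tau
        y += dy * dt / tau
        x -= 2.0 * x * dt / tau  # canonical system: exponential decay of phase
        traj.append(y)
    return traj

# With a zero forcing term the primitive converges smoothly to the goal.
traj = dmp_rollout(0.0, 1.0, lambda x: 0.0)
```

In the paper, the forcing term is what encodes the shape of a (possibly newly synthesized) demonstrated movement, while the spring-damper part guarantees convergence to the goal regardless of that shape.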
Affiliation(s)
- Miha Deniša
- Jožef Stefan Institute, Ljubljana, Slovenia
- Aleš Ude
- Jožef Stefan Institute, Ljubljana, Slovenia
|
45
|
A Survey of Autonomous Human Affect Detection Methods for Social Robots Engaged in Natural HRI. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0259-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
46
|
Waytz A, Cacioppo J, Epley N. Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2015; 5:219-32. [PMID: 24839457 DOI: 10.1177/1745691610369336] [Citation(s) in RCA: 255] [Impact Index Per Article: 28.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Anthropomorphism is a far-reaching phenomenon that incorporates ideas from social psychology, cognitive psychology, developmental psychology, and the neurosciences. Although commonly considered to be a relatively universal phenomenon with only limited importance in modern industrialized societies (more cute than critical), our research suggests precisely the opposite. In particular, we provide a measure of stable individual differences in anthropomorphism that predicts three important consequences for everyday life. This research demonstrates that individual differences in anthropomorphism predict the degree of moral care and concern afforded to an agent, the amount of responsibility and trust placed on an agent, and the extent to which an agent serves as a source of social influence on the self. These consequences have implications for disciplines outside of psychology including human-computer interaction, business (marketing and finance), and law. Concluding discussion addresses how understanding anthropomorphism not only informs the burgeoning study of nonpersons, but how it informs classic issues underlying person perception as well.
Affiliation(s)
- Adam Waytz
- Department of Psychology, Harvard University, Cambridge, MA
|
47
|
Abstract
In recent years researchers have begun to investigate how the perceptual, motor and cognitive activities of two or more individuals become organized into coordinated action. In the first part of this introduction we identify three common threads among the ten papers of this special issue that exemplify this new line of research. First, all of the papers are grounded in the experimental study of online interactions between two or more individuals. Second, albeit at different levels of analysis, the contributions focus on the mechanisms supporting joint action. Third, many of the papers investigate empirically the pre-requisites for the highly sophisticated forms of joint action that are typical of humans. In the second part of the introduction, we summarize each of the papers, highlighting more specific connections among them.
Affiliation(s)
- Bruno Galantucci
- Yeshiva University and Haskins Laboratories
- Radboud University Nijmegen
48
Vredenburgh C, Kushnir T. Young Children's Help-Seeking as Active Information Gathering. Cogn Sci 2015; 40:697-722. [PMID: 25916349 DOI: 10.1111/cogs.12245] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2014] [Revised: 01/12/2015] [Accepted: 01/16/2015] [Indexed: 12/01/2022]
Abstract
Young children's social learning is a topic of great interest. Here, we examined preschoolers' (M = 52.44 months, SD = 9.7 months) help-seeking as a social information-gathering activity that may optimize and support children's opportunities for learning. In a toy assembly task, we assessed each child's competency at assembling toys and the difficulty of each step of the task. We hypothesized that children's help-seeking would be a function of both initial competency and task difficulty. The results confirmed this prediction: all children were more likely to seek assistance on difficult steps, and less competent children sought assistance more often. Moreover, the magnitude of the help-seeking requests (from asking for verbal confirmation to asking the adult to take over the task) was similarly related to both competency and difficulty. The results support viewing children's help-seeking as an information-gathering activity, indicating that preschoolers flexibly adjust the level and amount of assistance they request to optimize their opportunities for learning.
49
Takano W, Nakamura Y. Construction of a space of motion labels from their mapping to full-body motion symbols. Adv Robot 2015. [DOI: 10.1080/01691864.2014.985611] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
50
Murata S, Arie H, Ogata T, Sugano S, Tani J. Learning to generate proactive and reactive behavior using a dynamic neural network model with time-varying variance prediction mechanism. Adv Robot 2014. [DOI: 10.1080/01691864.2014.916628] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]