1. Artificial cognition vs. artificial intelligence for next-generation autonomous robotic agents. Front Comput Neurosci 2024; 18:1349408. PMID: 38585280; PMCID: PMC10995397; DOI: 10.3389/fncom.2024.1349408.
Abstract
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe, and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that allow the development of cyber-physical systems similar, if not superior, to humans. The common wisdom is that such intelligence might be provided by AI (Artificial Intelligence), a claim supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a rather broad sense, encompassing LLMs and much else, without any unifying principle beyond its success in various application areas. The current view of AI robotics mostly follows a purely disembodied approach consistent with the old-fashioned Cartesian mind-body dualism, reflected in the software-hardware distinction inherent in the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction; the practical advantage is enhanced generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary to allow humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI.
The ACo approach is a work in progress that can take advantage of a number of research threads, some of which antedate the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
2. Intrinsic motivation learning for real robot applications. Front Robot AI 2023; 10:1102438. PMID: 36845331; PMCID: PMC9950409; DOI: 10.3389/frobt.2023.1102438.
3. A Perspective on Lifelong Open-Ended Learning Autonomy for Robotics through Cognitive Architectures. Sensors (Basel) 2023; 23:1611. PMID: 36772651; PMCID: PMC9920408; DOI: 10.3390/s23031611.
Abstract
This paper addresses the problem of achieving lifelong open-ended learning autonomy in robotics and how different cognitive architectures provide functionalities that support it. To this end, we analyze a set of well-known cognitive architectures in the literature, considering the different components they address and how they implement them. Among the main functionalities taken as relevant for lifelong open-ended learning autonomy are learning itself and the availability of contextual memory systems, motivations, and attention. Additionally, we try to establish which of these architectures have actually been applied to real robot scenarios. It transpires that, in their current form, none of them is completely ready to address this challenge, although some do provide indications of the paths to follow in the aspects they contemplate. It can be gleaned that the main components required for lifelong open-ended learning autonomy would be motivational systems that allow finding domain-dependent goals from general internal drives, contextual long-term memory systems that allow for associative learning and retrieval of knowledge, and robust learning systems. Other components, such as attention mechanisms or representation management systems, would nevertheless greatly facilitate operation in complex domains.
4. Editorial: Novel methods in embodied and enactive AI and cognition. Front Neurorobot 2023; 17:1162568. PMID: 36960196; PMCID: PMC10029724; DOI: 10.3389/fnbot.2023.1162568.
5. A brain-inspired robot pain model based on a spiking neural network. Front Neurorobot 2022; 16:1025338. PMID: 36605522; PMCID: PMC9807619; DOI: 10.3389/fnbot.2022.1025338.
Abstract
Introduction: Pain is a crucial function for organisms. Building a "robot pain" model inspired by organisms' pain could help robots learn self-preservation and extend their longevity. Most previous studies of robots and pain focus on robots interacting with people by recognizing pain expressions or scenes, or on avoiding obstacles by recognizing dangerous objects. Robots do not have a human-like pain capacity and cannot adaptively respond to danger. Inspired by the evolutionary mechanisms of pain emergence and the Free Energy Principle (FEP) in the brain, we summarize the neural mechanisms of pain and construct a Brain-inspired Robot Pain Spiking Neural Network (BRP-SNN) with a spike-timing-dependent plasticity (STDP) learning rule and a population coding method.
Methods: The proposed model can quantify machine injury by detecting the coupling relationship between multi-modality sensory information and can generate "robot pain" as an internal state.
Results: We provide a comparative analysis with the results of neuroscience experiments, showing that our model has biological interpretability. We also successfully tested our model on two tasks with real robots: alerting to actual injury and preventing potential injury.
Discussion: Our work has two major contributions: (1) it has positive implications for the integration of pain concepts into intelligent robotics; (2) our summary of pain's neural mechanisms and the implemented computational simulations provide a new perspective for exploring the nature of pain, which has significant value for future pain research in cognitive neuroscience.
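The abstract names two standard ingredients, STDP and population coding, without giving the BRP-SNN equations. As a rough illustration only (all constants, function names, and the scalar injury signal are assumptions, not the paper's model), pair-based STDP and Gaussian population coding can be sketched as:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                        # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:                             # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, 1.0))

def population_code(x, centers, sigma=0.1):
    """Encode a scalar signal (e.g. a normalized injury level) as the
    activity of neurons with Gaussian tuning curves tiling [0, 1]."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

w_pot = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pairing
w_dep = stdp_update(0.5, t_pre=15.0, t_post=10.0)   # anti-causal pairing
centers = np.linspace(0.0, 1.0, 11)
activity = population_code(0.7, centers)            # peaks at the nearest neuron
```

The slight asymmetry (a_minus > a_plus) is a common choice to keep weights stable, not a value taken from the paper.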
6. Editorial: Constructive approach to spatial cognition in intelligent robotics. Front Neurorobot 2022; 16:1077891. DOI: 10.3389/fnbot.2022.1077891.
7. Vision-Action Semantic Associative Learning Based on Spiking Neural Networks for Cognitive Robot. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3199623.
8. A Developmental Evolutionary Learning Framework for Robotic Chinese Stroke Writing. IEEE Trans Cogn Dev Syst 2022. DOI: 10.1109/tcds.2021.3098229.
9. EMRES: A New EMotional RESpondent Robot. IEEE Trans Cogn Dev Syst 2022. DOI: 10.1109/tcds.2021.3120562.
10.
Abstract
The false attribution of autonomy and related concepts to artificial agents that lack the attributed levels of the respective characteristic is problematic in many ways. In this article, we contrast this view with a positive viewpoint that emphasizes the potential role of such false attributions in the context of robotic language acquisition. By adding emotional displays and congruent body behaviors to a child-like humanoid robot's behavioral repertoire, we were able to bring naïve human tutors to engage in so-called intent interpretations. In developmental psychology, intent interpretations can be hypothesized to play a central role in the acquisition of emotion, volition, and similar autonomy-related words. The aforementioned experiments originally targeted the acquisition of linguistic negation. However, participants also produced other affect- and motivation-related words with high frequency and, as a consequence, these entered the robot's active vocabulary. We analyze participants' non-negative emotional and volitional speech and contrast it with their speech in a non-affective baseline scenario. Implications of these findings for robotic language acquisition in particular, and for artificial intelligence and robotics more generally, are also discussed.
11. A Schema-Based Robot Controller Complying With the Constraints of Biological Systems. Front Neurorobot 2022; 16:836767. PMID: 35615342; PMCID: PMC9124795; DOI: 10.3389/fnbot.2022.836767.
Abstract
This article reports on the early stages of conception of a robotic control system based on Piaget's schema theory. Beyond some initial experimental results, we question the scientific method used in developmental robotics (DevRob) and argue that it is premature to abstract away the functional architecture of the brain when so little is known about its mechanisms. Instead, we advocate applying a method similar to that used in model-based cognitive science, which consists in selecting plausible models using computational and physiological constraints. Previous work on schema-based robotics is analyzed through the critical lens of the proposed method, and a minimal system designed using this method is presented.
12. Infants Learn to Follow Gaze in Stages: Evidence Confirming a Robotic Prediction. Open Mind 2022; 5:174-188. PMID: 35024530; PMCID: PMC8746125; DOI: 10.1162/opmi_a_00049.
Abstract
Gaze following is an early-emerging skill in infancy argued to be fundamental to joint attention and later language development. However, how gaze following emerges is a topic of great debate. Representational theories assume that in order to follow adults’ gaze, infants must have a rich sensitivity to adults’ communicative intention from birth. In contrast, learning-based theories hold that infants may learn to gaze follow based on low-level social reinforcement, without the need to understand others’ mental states. Nagai et al. (2006) successfully taught a robot to gaze follow through social reinforcement and found that the robot learned in stages: first in the horizontal plane, and later in the vertical plane—a prediction that does not follow from representational theories. In the current study, we tested this prediction in an eye-tracking paradigm. Six-month-olds did not follow gaze in either the horizontal or vertical plane, whereas 12-month-olds and 18-month-olds only followed gaze in the horizontal plane. These results confirm the core prediction of the robot model, suggesting that children may also learn to gaze follow through social reinforcement coupled with a structured learning environment.
13.

14. Building and Understanding the Minimal Self. Front Psychol 2021; 12:716982. PMID: 34899463; PMCID: PMC8660690; DOI: 10.3389/fpsyg.2021.716982.
Abstract
Within the methodologically diverse interdisciplinary research on the minimal self, we identify two movements with seemingly disparate research agendas - cognitive science and cognitive (developmental) robotics. Cognitive science, on the one hand, devises rather abstract models which can predict and explain human experimental data related to the minimal self. Incorporating the established models of cognitive science and ideas from artificial intelligence, cognitive robotics, on the other hand, aims to build embodied learning machines capable of developing a self "from scratch" similar to human infants. The epistemic promise of the latter approach is that, at some point, robotic models can serve as a testbed for directly investigating the mechanisms that lead to the emergence of the minimal self. While both approaches can be productive for creating causal mechanistic models of the minimal self, we argue that building a minimal self is different from understanding the human minimal self. Thus, one should be cautious when drawing conclusions about the human minimal self based on robotic model implementations and vice versa. We further point out that incorporating constraints arising from different levels of analysis will be crucial for creating models that can predict, generate, and causally explain behavior in the real world.
15. Global entrainment in the brain-body-environment: retrospective and prospective views. Biol Cybern 2021; 115:431-438. PMID: 34633537; DOI: 10.1007/s00422-021-00898-2.
Abstract
We celebrate the 60th anniversary of Biological Cybernetics. It has also been 30 years since "Self-organized control of bipedal locomotion by neural oscillators in unpredictable environment" was published in Biological Cybernetics (Taga et al. in Biol Cybern 65(3):147-159, 1991). I would like to look back on the creation of this paper and discuss its subsequent development and future perspectives.
16. Motivational engine and long-term memory coupling within a cognitive architecture for lifelong open-ended learning. Neurocomputing 2021. DOI: 10.1016/j.neucom.2019.10.124.
17. Why a Virtual Assistant for Moral Enhancement When We Could Have a Socrates? Sci Eng Ethics 2021; 27:42. PMID: 34189623; PMCID: PMC8241637; DOI: 10.1007/s11948-021-00318-5.
Abstract
Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to decide reflectively for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative, this article proposes a virtual assistant that, through dialogue, neutrality, and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.
18. Computational models of the "active self" and its disturbances in schizophrenia. Conscious Cogn 2021; 93:103155. PMID: 34130210; DOI: 10.1016/j.concog.2021.103155.
Abstract
The notion that self-disorders are at the root of the emergence of schizophrenia, rather than a symptom of the disease, is gaining traction in the cognitive sciences. This is in line with philosophical approaches that consider an enactive self, constituted through action and interaction with the environment. We analyze different definitions of the self and evaluate various computational theories lending support to these ideas. Bayesian and predictive processing accounts are promising approaches for computational modeling of the "active self". We evaluate their implementation and challenges in computational psychiatry and cognitive developmental robotics, and describe how and why embodied robotic systems provide a valuable tool in psychiatry to assess, validate, and simulate mechanisms of self-disorders. Specifically, mechanisms involving sensorimotor learning, prediction, and self-other distinction can be assessed with artificial agents. This link can provide essential insights into the formation of the self and open new avenues in the treatment of psychiatric disorders.
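The self-other distinction mentioned in this abstract is often framed as a comparator model: a forward model predicts the sensory consequence of one's own motor command, and feedback is self-attributed when the prediction error stays low. A minimal sketch under that framing (the linear forward model, threshold, and function names are illustrative assumptions; the reviewed Bayesian/predictive-processing models are far richer):

```python
def attribute(motor_cmd, observed, forward_model, threshold=0.2):
    """Return 'self' when observed feedback matches the forward-model
    prediction for one's own command, 'other' otherwise."""
    error = abs(observed - forward_model(motor_cmd))
    return "self" if error < threshold else "other"

forward_model = lambda u: 2.0 * u               # learned action-to-sensation map
own = attribute(0.3, 2.0 * 0.3, forward_model)              # unperturbed feedback
perturbed = attribute(0.3, 2.0 * 0.3 + 1.0, forward_model)  # externally caused
```

In a schizophrenia-modelling context, a miscalibrated threshold or a noisy forward model would produce systematic misattributions, which is the kind of mechanism such embodied models let researchers probe.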
19. Effect Regulated Projection of Robot's Action Space for Production and Prediction of Manipulation Primitives Through Learning Progress and Predictability-Based Exploration. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2019.2933900.
20. Neurorobotic Models of Neurological Disorders: A Mini Review. Front Neurorobot 2021; 15:634045. PMID: 33828474; PMCID: PMC8020031; DOI: 10.3389/fnbot.2021.634045.
Abstract
Modeling is widely used in biomedical research to gain insights into the pathophysiology and treatment of neurological disorders, but existing models, such as animal models and computational models, are limited in their generalizability to humans and restricted in the scope of possible experiments. Robotics offers a potential complementary modeling platform, with advantages such as embodiment and physical environmental interaction, yet with easily monitored and adjustable parameters. In this review, we discuss the different types of models used in biomedical research and summarize the existing neurorobotic models of neurological disorders. We detail the pertinent findings of these robot models which would not have been possible through other modeling platforms. We also highlight the limitations to a wider uptake of robot models for neurological disorders and suggest future directions for the field.
21. A Database for Learning Numbers by Visual Finger Recognition in Developmental Neuro-Robotics. Front Neurorobot 2021; 15:619504. PMID: 33737873; PMCID: PMC7960766; DOI: 10.3389/fnbot.2021.619504.
Abstract
Numerical cognition is a fundamental component of human intelligence that has not yet been fully understood. Indeed, it is a subject of research in many disciplines, e.g., neuroscience, education, cognitive and developmental psychology, philosophy of mathematics, and linguistics. In Artificial Intelligence, aspects of numerical cognition have been modelled through neural networks to replicate and analytically study children's behaviours. However, artificial models need to incorporate realistic sensory-motor information from the body to fully mimic children's learning behaviours, e.g., the use of fingers to learn and manipulate numbers. To this end, this article presents a database of images, focused on number representation with fingers using both human and robot hands, which can constitute the base for building new realistic models of numerical cognition in humanoid robots, enabling a grounded learning approach in developmental autonomous agents. The article provides a benchmark analysis of the datasets in the database, which are used to train, validate, and test five state-of-the-art deep neural networks. These are compared for classification accuracy, together with an analysis of the computational requirements of each network. The discussion highlights the trade-off between speed and precision in the detection, which is required for realistic applications in robotics.
22. Goal-Directed Exploration for Learning Vowels and Syllables: A Computational Model of Speech Acquisition. Künstliche Intelligenz 2021. DOI: 10.1007/s13218-021-00704-y.
Abstract
Infants learn to speak rapidly during their first years of life, gradually improving from simple vowel-like sounds to larger consonant-vowel complexes. Learning to control the vocal tract in order to produce meaningful speech sounds is a complex process which requires learning the relationship between motor and sensory processes. In this paper, a computational framework is proposed that models the problem of learning articulatory control for a physiologically plausible 3-D vocal tract model using a developmentally inspired approach. The system babbles and explores efficiently in a low-dimensional space of goals that are relevant to the learner in its synthetic environment. The learning process is goal-directed and self-organized, and yields an inverse model of the mapping between sensory space and motor commands. This study provides a unified framework that can be used for learning static as well as dynamic motor representations. The successful learning of vowel and syllable sounds, as well as the benefit of active and adaptive learning strategies, is demonstrated. Categorical perception is found in the acquired models, suggesting that the framework has the potential to replicate phenomena of human speech acquisition.
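The goal-directed babbling loop this abstract describes can be caricatured in one dimension: sample goals in the sensory space, perturb the motor command of the closest known outcome, and store the result, incrementally building an inverse model. A toy sketch (the `vocal_tract` mapping, noise levels, and nearest-neighbour inverse model are assumptions standing in for the paper's high-dimensional synthesizer and learner):

```python
import numpy as np

rng = np.random.default_rng(0)

def vocal_tract(m):
    """Stand-in for the motor-to-sensory mapping of the 3-D vocal tract
    (the real synthesizer is far higher-dimensional)."""
    return float(np.tanh(m))

# Goal babbling: pick a sensory goal, locally perturb the motor command
# whose known outcome lies closest to it, and record the new sample.
samples = []                                    # (sensory outcome, motor command)
for _ in range(500):
    goal = rng.uniform(-0.9, 0.9)
    if samples:
        _, m_near = min(samples, key=lambda sm: abs(sm[0] - goal))
        m_try = m_near + rng.normal(0.0, 0.1)   # local exploration noise
    else:
        m_try = rng.normal(0.0, 1.0)            # bootstrap with a random babble
    samples.append((vocal_tract(m_try), m_try))

def inverse_model(goal):
    """Nearest-neighbour inverse model over the babbled samples."""
    _, m = min(samples, key=lambda sm: abs(sm[0] - goal))
    return m
```

Exploring in the low-dimensional goal space, rather than the motor space, is what makes this efficient: samples concentrate where they improve goal-reaching, mirroring the abstract's point.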
23. Prediction Error-Driven Memory Consolidation for Continual Learning: On the Case of Adaptive Greenhouse Models. Künstliche Intelligenz 2021. DOI: 10.1007/s13218-020-00700-8.
Abstract
This work presents an adaptive architecture that performs online learning and addresses catastrophic forgetting by means of an episodic memory system and prediction-error-driven memory consolidation. In line with evidence from the brain sciences, memories are retained depending on their congruence with the prior knowledge stored in the system. Here, congruence is estimated in terms of the prediction error resulting from a deep neural model. The proposed AI system is transferred onto an innovative application in the horticulture industry: the learning and transfer of greenhouse models. This work presents models trained on data recorded from research facilities and transferred to a production greenhouse.
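The consolidation rule this abstract describes, retain an episode only when it is incongruent with prior knowledge, can be sketched with a linear predictor standing in for the paper's deep model (the class name, threshold, and learning rate are illustrative assumptions):

```python
import numpy as np

class CongruenceGatedMemory:
    """Toy prediction-error-driven consolidation: an episode is stored
    only when its prediction error exceeds a surprise threshold."""

    def __init__(self, dim, threshold=0.5, lr=0.1):
        self.w = np.zeros(dim)       # stands in for prior knowledge
        self.threshold = threshold
        self.lr = lr
        self.episodes = []           # consolidated episodic memory

    def observe(self, x, y):
        error = abs(y - float(self.w @ x))      # incongruence estimate
        if error > self.threshold:              # surprising -> consolidate
            self.episodes.append((x, y))
        self.w += self.lr * (y - float(self.w @ x)) * x   # online update
        return error

mem = CongruenceGatedMemory(dim=2)
x = np.array([1.0, 0.0])
for _ in range(50):
    mem.observe(x, 1.0)             # congruent stream: soon unsurprising
n_before = len(mem.episodes)
mem.observe(x, 5.0)                 # incongruent outlier -> consolidated
```

Only the first few congruent observations are stored (while the predictor is still wrong); once the stream is predictable, nothing is added until the outlier arrives. That selectivity is what keeps the episodic buffer small in a continual-learning setting.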
24. Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer Learning to Discover Task Hierarchy. Appl Sci (Basel) 2021. DOI: 10.3390/app11030975.
Abstract
In open-ended continuous environments, robots need to learn multiple parameterised control tasks through hierarchical reinforcement learning. We hypothesise that the most complex tasks can be learned more easily by transferring knowledge from simpler tasks, and faster by adapting the complexity of the actions to the task. We propose a task-oriented representation of complex actions, called procedures, to learn online task relationships and unbounded sequences of action primitives to control the different observables of the environment. Combining goal babbling with imitation learning, and active learning with transfer of knowledge based on intrinsic motivation, our algorithm self-organises its learning process. It chooses, at any given time, a task to focus on, and what, how, when, and from whom to transfer knowledge. We show with a simulation and a real industrial robot arm, in cross-task and cross-learner transfer settings, that task composition is key to tackling highly complex tasks. Task decomposition is also efficiently transferred across different embodied learners and by active imitation, where the robot requests just a small number of demonstrations and the adequate type of information. The robot learns and exploits task dependencies so as to learn tasks of every complexity.
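The "chooses at any given time a task to focus on" step is typically driven by learning progress in this literature: track recent error per task and prefer the task whose error is dropping fastest. A generic sketch of that selector (the class, window size, and toy error streams are assumptions; the paper's procedure and transfer machinery is not modelled here):

```python
import numpy as np

class ProgressSelector:
    """Intrinsic motivation via learning progress: prefer the task whose
    recent error is decreasing fastest."""

    def __init__(self, n_tasks, window=5):
        self.errors = [[] for _ in range(n_tasks)]
        self.window = window

    def progress(self, k):
        e = self.errors[k]
        if len(e) < 2 * self.window:
            return float("inf")      # force initial sampling of every task
        older = np.mean(e[-2 * self.window:-self.window])
        recent = np.mean(e[-self.window:])
        return older - recent        # positive when the task is improving

    def select(self):
        return max(range(len(self.errors)), key=self.progress)

    def record(self, task, error):
        self.errors[task].append(error)

sel = ProgressSelector(n_tasks=2)
for t in range(10):
    sel.record(0, 1.0 - 0.08 * t)   # task 0: error steadily decreasing
    sel.record(1, 0.5)              # task 1: plateaued
```

Because a plateaued task scores zero progress, the learner naturally abandons both mastered and currently unlearnable tasks, which is the usual argument for progress-based rather than error-based selection.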
25. Robot in the Mirror: Toward an Embodied Computational Model of Mirror Self-Recognition. Künstliche Intelligenz 2021. DOI: 10.1007/s13218-020-00701-7.
26.
Abstract
Robots are currently the center of attention in various fields of research because of their potential use as assistants for daily living. In this article, I highlight a different role that robots can play—being a tool for understanding human cognition. I provide examples in which robots have been used in experimental psychology to study sociocognitive mechanisms such as joint attention and sense of agency. I also discuss the issue of whether and when robots (especially those that resemble humans) are perceived through a human-centered lens with anthropomorphic attributions. In the final section, I describe approaches in which the robots’ embodiment has been used for the implementation of computational models of human cognition. In sum, the collection of studies presented here shows that robots can be an extremely useful tool for scientific inquiry in the areas of experimental psychology and cognitive science.
27. A Hybrid Human-Neurorobotics Approach to Primary Intersubjectivity via Active Inference. Front Psychol 2020; 11:584869. PMID: 33335499; PMCID: PMC7736637; DOI: 10.3389/fpsyg.2020.584869.
Abstract
Interdisciplinary efforts from developmental psychology, phenomenology, and philosophy of mind have studied the rudiments of social cognition and conceptualized distinct forms of intersubjective communication and interaction in early human life. Interaction theorists consider primary intersubjectivity a non-mentalist, pre-theoretical, non-conceptual sort of process that grounds a certain level of communication and understanding and supports higher-level cognitive skills. We argue that the study of human/neurorobot interaction offers a unique opportunity to deepen understanding of the underlying mechanisms of social cognition through synthetic modeling, while allowing us to examine second-person experiential (2PP) access to intersubjectivity in embodied dyadic interaction. Concretely, we propose the study of primary intersubjectivity as a 2PP experience characterized by predictive engagement, where perception, cognition, and action are accounted for by a hermeneutic circle in dyadic interaction. From our interpretation of the concept of active inference in free-energy principle theory, we propose an open-source methodology named neural robotics library (NRL) for experimental human/neurorobot interaction, in which a demonstration program named virtual Cartesian robot (VCBot) gives general audiences an opportunity to experience this embodied interaction. Lastly, through a case study, we discuss some ways human-robot primary intersubjectivity can contribute to cognitive science research, such as in the fields of developmental psychology, educational technology, and cognitive rehabilitation.
28. Personalizing Human-Agent Interaction Through Cognitive Models. Front Psychol 2020; 11:561510. PMID: 33071887; PMCID: PMC7541964; DOI: 10.3389/fpsyg.2020.561510.
Abstract
Cognitive modeling of human behavior has advanced the understanding of underlying processes in several domains of psychology and cognitive science. In this article, we outline how we expect cognitive modeling to improve comprehension of individual cognitive processes in human-agent interaction and, particularly, human-robot interaction (HRI). We argue that cognitive models offer advantages compared to data-analytical models, specifically for research questions with expressed interest in theories of cognitive functions. However, the implementation of cognitive models is arguably more complex than common statistical procedures. Additionally, cognitive modeling paradigms typically have an explicit commitment to an underlying computational theory. We propose a conceptual framework for designing cognitive models that aims to identify whether the use of cognitive modeling is applicable to a given research question. The framework consists of five external and internal aspects related to the modeling process: research question, level of analysis, modeling paradigms, computational properties, and iterative model development. In addition to deriving our framework from a concise literature analysis, we discuss challenges and potentials of cognitive modeling. We expect cognitive models to leverage personalized human behavior prediction, agent behavior generation, and interaction pretraining as well as adaptation, which we outline with application examples from personalized HRI.
|
29
|
|
30
|
Geometric Affordance Perception: Leveraging Deep 3D Saliency With the Interaction Tensor. Front Neurorobot 2020; 14:45. [PMID: 32733228 PMCID: PMC7359196 DOI: 10.3389/fnbot.2020.00045] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Accepted: 06/02/2020] [Indexed: 11/13/2022] Open
Abstract
Agents that need to act on their surroundings can significantly benefit from perceiving their interaction possibilities, or affordances. In this paper we combine the benefits of the Interaction Tensor, a straightforward geometrical representation that captures multiple object-scene interactions, with deep-learning saliency for fast parsing of affordances in the environment. Our approach works with visually perceived 3D pointclouds and enables querying a 3D scene for locations that support affordances such as sitting or riding, as well as interactions for everyday objects, such as where to hang an umbrella or place a mug. Crucially, the nature of the interaction description exhibits one-shot generalization. Experiments with numerous synthetic and real RGB-D scenes, validated by human subjects, show that the representation enables the prediction of affordance candidate locations in novel environments from a single training example. The approach also allows for a highly parallelizable multiple-affordance representation and runs at fast rates. The combination of a deep neural network that learns to estimate scene saliency with the one-shot geometric representation aligns well with the expectation that computational models for affordance estimation should be perceptually direct and economical.
|
31
|
Editorial: Body Representations, Peripersonal Space, and the Self: Humans, Animals, Robots. Front Neurorobot 2020; 14:35. [PMID: 32612519 PMCID: PMC7308760 DOI: 10.3389/fnbot.2020.00035] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Accepted: 05/14/2020] [Indexed: 11/13/2022] Open
|
32
|
Fusing autonomy and sociality via embodied emergence and development of behaviour and cognition from fetal period. Philos Trans R Soc Lond B Biol Sci 2020; 374:20180031. [PMID: 30852992 PMCID: PMC6452254 DOI: 10.1098/rstb.2018.0031] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Human-centred AI/robotics is quickly becoming important. Its core claim is that AI systems or robots must be designed to work for the benefit of humans, causing no harm or uneasiness. This essentially requires the realization of autonomy, sociality, and their fusion at all levels of system organization, even beyond programming or pre-training. The biologically inspired core principle of such a system is described as the emergence and development of embodied behaviour and cognition. The importance of embodiment, emergence, and continuous autonomous development is explained in the context of developmental robotics and the dynamical-systems view of human development. We present a hypothetical early developmental scenario that fills in the very beginning of the comprehensive scenarios proposed in developmental robotics. We then present our model and experiments on emergent embodied behaviour, which consist of chaotic maps embedded in sensorimotor loops and coupled via embodiment. Behaviours that are consistent with embodiment and adaptive to environmental structure emerge within a few seconds, without any external reward or learning. Next, our model and experiments on human fetal development are presented. A precise musculoskeletal fetal body model is placed in a uterus model. Driven by spinal nonlinear oscillator circuits coupled together via embodiment, somatosensory signals are evoked and learned by a model of the cerebral cortex with 2.6 million neurons and 5.3 billion synapses. The model acquired cortical representations of the self-body and multimodal sensory integration. This work is important because it models very early autonomous development in realistically detailed human embodiment. Finally, we discuss steps toward human-like cognition, including other important factors such as motivation, emotion, internal organs, and genetic factors.
This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
|
33
|
Abstract
What is a fundamental ability for cognitive development? Although many researchers have addressed this question, no shared understanding has yet been reached. We propose that predictive learning of sensorimotor signals plays a key role in early cognitive development. The human brain is known to represent sensorimotor signals in a predictive manner, i.e. it attempts to minimize the prediction error between incoming sensory signals and top-down prediction. We extend this view and suggest that two mechanisms for minimizing prediction error lead to the development of cognitive abilities during early infancy. The first mechanism is to update an immature predictor. The predictor must be trained through sensorimotor experiences because it does not inherently have the ability to predict. The second mechanism is to execute an action anticipated by the predictor. Interacting with other individuals often increases prediction error, which can be minimized by executing one's own action corresponding to the other's action. Our experiments using robotic systems replicated developmental dynamics observed in infants. The capabilities of self-other cognition and goal-directed action were acquired through the first mechanism, whereas imitation and prosocial behaviours emerged through the second mechanism. Our theory further provides a potential mechanism for autism spectrum conditions: atypical tolerance for prediction error is hypothesized to be a cause of perceptual and social difficulties. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
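The two error-minimization mechanisms above can be sketched as a toy loop, assuming a hypothetical linear world and predictor (the state dimensions, dynamics, learning rule, and goal below are all invented for illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictor: maps (2-D sensory state, 1-D action) to predicted next state.
W = rng.normal(scale=0.1, size=(2, 3))

def predict(W, s, a):
    return W @ np.concatenate([s, [a]])

def world(s, a):
    # Assumed (hypothetical) environment dynamics the predictor must learn.
    return 0.8 * s + np.array([a, -a])

s = np.array([1.0, -1.0])
actions = np.linspace(-1.0, 1.0, 21)
errors = []

for step in range(300):
    # Mechanism 2: execute the action whose predicted outcome is closest to a
    # goal state (here the origin), i.e. act so as to minimize anticipated error.
    a = min(actions, key=lambda a: np.linalg.norm(predict(W, s, a)))
    s_next = world(s, a)
    err = predict(W, s, a) - s_next
    errors.append(np.linalg.norm(err))
    # Mechanism 1: update the immature predictor (normalized LMS step).
    x = np.concatenate([s, [a]])
    W -= 0.5 * np.outer(err, x) / (x @ x + 1e-8)
    s = s_next

print(f"prediction error: {errors[0]:.3f} -> {errors[-1]:.6f}")
```

Both mechanisms operate on the same quantity: the predictor is trained on experienced transitions, while action selection exploits the (still imperfect) predictor, so prediction error shrinks from both sides.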
|
34
|
|
35
|
A curious formulation robot enables the discovery of a novel protocell behavior. SCIENCE ADVANCES 2020; 6:eaay4237. [PMID: 32064348 PMCID: PMC6994213 DOI: 10.1126/sciadv.aay4237] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Accepted: 11/20/2019] [Indexed: 05/11/2023]
Abstract
We describe a chemical robotic assistant equipped with a curiosity algorithm (CA) that can efficiently explore the states a complex chemical system can exhibit. The CA-robot is designed to explore formulations in an open-ended way, with no explicit optimization target. By applying the CA-robot to the study of self-propelling multicomponent oil-in-water protocell droplets, we observe an order of magnitude more variety in droplet behaviors than is possible with a random parameter search given the same budget. We demonstrate that the CA-robot enabled the observation of a sudden and highly specific response of droplets to slight temperature changes. Six modes of self-propelled droplet motion were identified and classified using a time-temperature phase diagram and probed using a variety of techniques, including NMR. This work illustrates how CAs can make better use of a limited experimental budget and significantly increase the rate of unpredictable observations, leading to new discoveries with potential applications in formulation chemistry.
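The contrast between open-ended, novelty-seeking exploration and random search can be illustrated with a minimal sketch, assuming a hypothetical one-dimensional behavior descriptor in place of a real droplet experiment (the `behavior` function and greedy novelty scoring are invented for illustration; the published CA uses a learned model rather than directly evaluating every candidate):

```python
import numpy as np

rng = np.random.default_rng(1)

def behavior(params):
    # Hypothetical stand-in for one droplet experiment: maps a 3-D
    # formulation parameter vector to a 1-D behavior descriptor.
    return float(np.sin(3 * params[0]) * np.cos(2 * params[1]) + 0.1 * params[2])

def novelty(b, archive):
    # Novelty score: distance to the nearest behavior observed so far.
    return min(abs(b - a) for a in archive) if archive else float("inf")

def curiosity_search(budget, n_candidates=32):
    archive = []
    for _ in range(budget):
        candidates = rng.uniform(-1, 1, size=(n_candidates, 3))
        # Greedy curiosity: run the candidate whose behavior is most novel.
        best = max(candidates, key=lambda p: novelty(behavior(p), archive))
        archive.append(behavior(best))
    return archive

def random_search(budget):
    return [behavior(rng.uniform(-1, 1, size=3)) for _ in range(budget)]

def coverage(archive, bins=20):
    # Number of distinct behavior bins reached with the same budget.
    hist, _ = np.histogram(archive, bins=bins, range=(-1.2, 1.2))
    return int((hist > 0).sum())

print("curiosity:", coverage(curiosity_search(60)),
      "vs random:", coverage(random_search(60)))
```

With the same budget, steering each trial toward the least-visited region of behavior space tends to cover many more distinct behavior bins than uniform random sampling, which is the effect the paper quantifies for droplet behaviors.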
|
36
|
Robots Learning to Say “No”. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2019. [DOI: 10.1145/3359618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
“No” is one of the first ten words used by children and embodies the first form of linguistic negation. Despite its early occurrence, the details of its acquisition remain largely unknown. The circumstance that “no” cannot be construed as a label for perceptible objects or events puts it outside the scope of most modern accounts of language acquisition. Moreover, most symbol grounding architectures will struggle to ground the word due to its non-referential character. The presented work extends symbol grounding to encompass affect and motivation. In a study involving the child-like robot iCub, we attempt to illuminate the acquisition process of negation words. The robot is deployed in speech-wise unconstrained interaction with participants acting as its language teachers. The results corroborate the hypothesis that affect or volition plays a pivotal role in the acquisition process. Negation words are prosodically salient within prohibitive utterances and negative intent interpretations such that they can be easily isolated from the teacher’s speech signal. These words subsequently may be grounded in negative affective states. However, observations of the nature of prohibition and the temporal relationships between its linguistic and extra-linguistic components raise questions over the suitability of Hebbian-type algorithms for certain types of language grounding.
|
37
|
Creating a Computable Cognitive Model of Visual Aesthetics for Automatic Aesthetics Evaluation of Robotic Dance Poses. Symmetry (Basel) 2019. [DOI: 10.3390/sym12010023] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Inspired by human dancers, who can evaluate the aesthetics of their own dance poses through mirror observation, this paper presents a corresponding mechanism for robots to improve their cognitive and autonomous abilities. Essentially, the proposed mechanism is a brain-like intelligent system that is symmetrical to the visual cognitive nervous system of the human brain. Specifically, a computable cognitive model of visual aesthetics is developed using two important aesthetic cognitive neural models of the human brain, and is then applied to the automatic aesthetics evaluation of robotic dance poses. Three kinds of features (color, shape, and orientation) are extracted in a manner similar to the visual feature elements extracted by human brains. After applying machine learning methods to different feature combinations, machine aesthetics models are built for the automatic evaluation of robotic dance poses. The simulation results show that our approach can process visual information effectively through cognitive computation and achieves very good performance in automatic aesthetics evaluation.
|
38
|
|
39
|
|
40
|
Integrated Cognitive Architecture for Robot Learning of Action and Language. Front Robot AI 2019; 6:131. [PMID: 33501146 PMCID: PMC7805838 DOI: 10.3389/frobt.2019.00131] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Accepted: 11/13/2019] [Indexed: 11/13/2022] Open
Abstract
The manner in which humans learn, plan, and decide actions is a compelling subject. Moreover, the mechanisms behind high-level cognitive functions, such as action planning, language understanding, and logical thinking, have not yet been fully implemented in robotics. In this paper, we propose a framework for the simultaneous comprehension of concepts, actions, and language as a first step toward this goal. This is achieved by integrating various cognitive modules and leveraging multimodal categorization using multilayered multimodal latent Dirichlet allocation (mMLDA). The integration of reinforcement learning and mMLDA enables actions based on understanding. Furthermore, mMLDA, in conjunction with grammar learning based on a Bayesian hidden Markov model (BHMM), allows the robot to verbalize its own actions and understand user utterances. We verify the potential of the proposed architecture through experiments using a real robot.
|
41
|
|
42
|
On building a person: benchmarks for robotic personhood. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1653386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
43
|
Simplifying the creation and management of utility models in continuous domains for cognitive robotics. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.07.093] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
44
|
Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots. PHILOSOPHIES 2019. [DOI: 10.3390/philosophies4030038] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
In this paper, a working hypothesis is proposed: that a nervous system for pain sensation is a key component in shaping the conscious minds of robots (artificial systems). The hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS), which promotes the emergence of the concept of self (and others), scaffolds the emergence of artificial minds. First, an outline of the ideological background on issues of the mind in a broad sense is given, followed by the limitations of current progress in artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with architectures for the early stage of self-inflicted experiences of pain and, later, for the stage of sharing pain between self and others. Then, cognitive developmental robotics (CDR) is revisited in light of two important concepts, physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing CDR studies are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed.
|
45
|
How Cognitive Models of Human Body Experience Might Push Robotics. Front Neurorobot 2019; 13:14. [PMID: 31031614 PMCID: PMC6470381 DOI: 10.3389/fnbot.2019.00014] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 03/21/2019] [Indexed: 01/08/2023] Open
Abstract
In recent decades, cognitive models of multisensory integration in human beings have been developed and applied to model human body experience. Recent research indicates that Bayesian and connectionist models might push developments in various branches of robotics: assistive robotic devices might adapt to their human users to increase device embodiment, e.g., in prosthetics, and humanoid robots could be endowed with human-like capabilities regarding their surrounding space, e.g., by keeping safe or socially appropriate distances from other agents. In this perspective paper, we review cognitive models that aim to approximate the process of human sensorimotor behavior generation, discuss their challenges and potential in robotics, and give an overview of existing approaches. While model accuracy is still subject to improvement, human-inspired cognitive models support the understanding of how the modulating factors of human body experience are blended. Implementing the resulting insights in adaptive and learning control algorithms could help to tailor assistive devices to their users' individual body experience. Humanoid robots that develop their own body schema could incorporate this body knowledge into control and learn to optimize their physical interaction with humans and their environment. Cognitive body-experience models should be improved in accuracy and online capability to achieve these ambitious goals, which would foster human-centered directions in various fields of robotics.
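In their simplest form, the Bayesian models mentioned above reduce to reliability-weighted cue combination. A minimal sketch, with invented numbers for a hypothetical visual/proprioceptive hand-position estimate:

```python
def fuse(mu_v, var_v, mu_p, var_p):
    # Inverse-variance (reliability-weighted) combination of two Gaussian
    # cues: the standard Bayesian cue-integration model.
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_p)
    mu = w_v * mu_v + (1.0 - w_v) * mu_p
    var = 1.0 / (1.0 / var_v + 1.0 / var_p)
    return mu, var

# Hypothetical hand-position estimate (cm): vision is precise, proprioception
# is noisy; the fused estimate lies nearer the reliable cue and is more
# precise than either cue alone.
mu, var = fuse(10.0, 1.0, 14.0, 4.0)
print(f"fused: {mu:.2f} cm, variance {var:.2f}")  # fused: 10.80 cm, variance 0.80
```

The fused variance is always below that of the better cue, which is the signature of optimal integration reported in the human multisensory literature and a candidate adaptation rule for assistive devices.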
|
46
|
Learning to learn with active adaptive perception. Neural Netw 2019; 115:30-49. [PMID: 30959321 DOI: 10.1016/j.neunet.2019.03.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 03/06/2019] [Accepted: 03/12/2019] [Indexed: 11/19/2022]
Abstract
Increasingly, autonomous agents will be required to operate on long-term missions. This will create a demand for general intelligence, because feedback from a human operator may be sparse and delayed and because not all behaviours can be prescribed. Deep neural networks and reinforcement learning methods can be applied in such environments, but their fixed updating routines imply an inductive bias in learning spatio-temporal patterns, meaning some environments will be unsolvable. To address this problem, this paper proposes active adaptive perception: the ability of an architecture to learn when and how to modify and selectively utilise its perception module. To achieve this, a generic architecture based on a self-modifying policy (SMP) is proposed and implemented using Incremental Self-improvement with the Success Story Algorithm. The architecture contrasts with deep reinforcement learning systems, which follow fixed training strategies, and with earlier SMP studies, which for perception relied either entirely on working memory or on untrainable active-perception instructions. One computationally cheap and one more expensive implementation are presented and compared, on various non-episodic, partially observable mazes, to DRQN, an off-policy deep reinforcement learner using experience replay, and to Incremental Self-improvement, an SMP. The results show that the simple instruction set leads to emergent strategies for avoiding detracting corridors and rooms, and that the expensive implementation allows perception to be selectively ignored where it is inaccurate.
|
47
|
|
48
|
Development of numerical cognition in children and artificial systems: a review of the current knowledge and proposals for multi-disciplinary research. COGNITIVE COMPUTATION AND SYSTEMS 2019. [DOI: 10.1049/ccs.2018.0004] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
|
49
|
Ostensive-Cue Sensitive Learning and Exclusive Evaluation of Policies: A Solution for Measuring Contingency of Experiences for Social Developmental Robot. Front Robot AI 2019; 6:2. [PMID: 33501019 PMCID: PMC7806015 DOI: 10.3389/frobt.2019.00002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 01/11/2019] [Indexed: 11/13/2022] Open
Abstract
Joint attention related behaviors (JARBs) are among the most important and basic cognitive functions for establishing successful communication in human interaction. They are learned gradually during the infant's development and enable the infant to purposefully improve his/her interaction with others. To adopt such a developmental process for building an adaptive, social robot, previous studies proposed several contingency-evaluation methods by which an infant robot can sequentially learn some primary social skills. These skills included gaze following and social referencing, and could be acquired through interacting with a human caregiver model in a computer simulation. However, in implementing such methods on a real-world robot, two major problems that were not addressed in previous research remain: (1) the mutual dependency of the histograms of the events observed by the robot, which increases the error of the internal calculation and consequently decreases the accuracy of contingency evaluation; and (2) the unsynchronized teaching/learning phases of the teaching caregiver and the learning robot, which prevent the robot and the caregiver from recognizing the suitable timing for learning and teaching, respectively. In this paper, we address these two problems and propose two algorithms to solve them: (1) exclusive evaluation of policies (XEP) for the former, and (2) ostensive-cue sensitive learning (OsL) for the latter. To show the effect of the proposed algorithms, we conducted a real-world human-robot interaction experiment with 48 subjects and compared the performance of the learning robot with and without the proposed algorithms. Our results show that adopting the proposed algorithms improves the robot's performance in terms of learning efficiency, complexity of the learned behaviors, predictability of the robot, and even the participants' subjective evaluations of the robot's intelligence and of the quality of the interaction.
|
50
|
A sentential cognitive system of robots for conversational human-robot interaction. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2018. [DOI: 10.3233/jifs-169845] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|