1
Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R, Friston K. Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front Comput Neurosci 2022; 16:892354. PMID: 35814345; PMCID: PMC9260143; DOI: 10.3389/fncom.2022.892354. Open Access.
Abstract
Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI – but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the dynamic temporo-spatial (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD – in the implementation of artificial agents – offers a novel, expressive, and explainable way for artificial agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence-augmenting agents (IAs) that assist psychiatric self-regulation, to environmental disaster prediction, to personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.
Affiliation(s)
- Georg Northoff
- Mental Health Center, Zhejiang University School of Medicine, Hangzhou, China
- Department of Mind, Brain Imaging and Neuroethics, Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada
- Centre for Research Ethics & Bioethics, Uppsala University, Uppsala, Sweden
- Maia Fraser
- Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON, Canada
- Correspondence: Maia Fraser
- John Griffiths
- Centre for Addiction and Mental Health (CAMH), Toronto, ON, Canada
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Dimitris A. Pinotsis
- Centre for Mathematical Neuroscience and Psychology, Department of Psychology, City, University of London, London, United Kingdom
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Prakash Panangaden
- Department of Computer Science, McGill University, Montreal, QC, Canada
- Montreal Institute for Learning Algorithms (MILA), Montreal, QC, Canada
- Rosalyn Moran
- Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Karl Friston
- Wellcome Centre for Human Neuroimaging, London, United Kingdom
- Institute of Neurology, University College London, London, United Kingdom
2
Abstract
Contemporary ethical analysis of Artificial Intelligence (AI) is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation, and generally present a set of recommendations. Here we propose two things. First, regarding content: since some of the applied issues raised by AI are related to fundamental questions about topics such as intelligence, consciousness, and the ontological and ethical status of humans, the treatment of these issues would benefit from interfacing with neuroethics, which has been addressing those same issues in the context of brain research. Second, the identification and management of some of the practical ethical challenges raised by AI would be enriched by embracing the methodological resources used in neuroethics. In particular, we focus on the methodological distinction between conceptual and action-oriented neuroethical approaches. We argue that the normative (often principles-oriented) discussion about AI will benefit from further integration of conceptual analysis, including analysis of some operative assumptions, their meaning in different contexts, and their mutual relevance, in order to avoid misplaced or disproportionate concerns and to achieve a more realistic and useful approach to identifying and managing the emerging ethical issues.
3
González-González CS, Gil-Iranzo RM, Paderewski-Rodríguez P. Human-Robot Interaction and Sexbots: A Systematic Literature Review. Sensors 2020; 21:s21010216. PMID: 33396356; PMCID: PMC7795467; DOI: 10.3390/s21010216.
Abstract
At present, sexual robots have become a new paradigm of social robots. In this paper, we present a systematic literature review of sexual robots (sexbots). We searched the Scopus and WoS databases to answer research questions regarding design, interaction, gender, and ethical approaches from 1980 to 2020. Our review found a male bias in this discipline, and recent articles show that user opinion has become more relevant. We also offer insights and recommendations on gender and ethics in the design of sexual robots.
Affiliation(s)
- Carina Soledad González-González
- Departamento de Ingeniería Informática y de Sistemas, Escuela de Ingeniería y Tecnología, Universidad de La Laguna, 38204 La Laguna, Spain
- Correspondence:
- Rosa María Gil-Iranzo
- Departamento de Informática e Ingeniería Industrial, Escuela Politécnica Superior, Universitat de Lleida, 25001 Lleida, Spain
- Patricia Paderewski-Rodríguez
- Departamento de Lenguajes y Sistemas Informáticos, Escuela Técnica Superior de Ingenierías Informática y de Telecomunicación, Universidad de Granada, 18071 Granada, Spain
4
Abstract
AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that, in order to adequately address those issues and engage in a productive normative discussion, it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI's functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public's anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., the very language of computer scientists, designers, and programmers), and what its epistemological and ethical consequences might be, have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research, with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need to complement practical analysis with conceptual analysis.
Affiliation(s)
- Arleen Salles
- Uppsala University
- Centro de Investigaciones Filosóficas (CIF)
- Michele Farisco
- Uppsala University
- Biogem, Biology and Molecular Genetics Institute