1. Luppi AI. Extending the common currency of the mind beyond the brain: Comment on "Kinematic coding: Measuring information in naturalistic behaviour" by Becchio et al. Phys Life Rev 2025;53:300-302. PMID: 40288013. DOI: 10.1016/j.plrev.2025.04.006.
Affiliation(s)
- Andrea I Luppi
- Department of Psychiatry, University of Oxford, Oxford, UK; Montreal Neurological Institute, McGill University, Montreal, QC, Canada; St John's College, University of Cambridge, Cambridge, UK.
2. Devillers B, Maytie L, VanRullen R. Semi-Supervised Multimodal Representation Learning Through a Global Workspace. IEEE Trans Neural Netw Learn Syst 2025;36:7843-7857. PMID: 38954575. DOI: 10.1109/tnnls.2024.3416701.
Abstract
Recent deep learning models can efficiently combine inputs from different modalities (e.g., images and text) and learn to align their latent representations or to translate signals from one domain to another (as in image captioning or text-to-image generation). However, current approaches mainly rely on brute-force supervised training over large multimodal datasets. In contrast, humans (and other animals) can learn useful multimodal representations from only sparse experience with matched cross-modal data. Here, we evaluate the capabilities of a neural network architecture inspired by the cognitive notion of a "global workspace" (GW): a shared representation for two (or more) input modalities. Each modality is processed by a specialized system (pretrained on unimodal data and subsequently frozen). The corresponding latent representations are then encoded to and decoded from a single shared workspace. Importantly, this architecture is amenable to self-supervised training via cycle-consistency: encoding-decoding sequences should approximate the identity function. For various pairings of vision-language modalities and across two datasets of varying complexity, we show that such an architecture can be trained to align and translate between two modalities with very little need for matched data (from four to seven times less than a fully supervised approach). The GW representation can be used advantageously for downstream classification and cross-modal retrieval tasks and for robust transfer learning. Ablation studies reveal that both the shared workspace and the self-supervised cycle-consistency training are critical to the system's performance.
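The self-supervised training signal described above can be sketched concretely. The toy below only illustrates the loss structure (the dimensions, random linear encoders/decoders, and the 4-of-32 matched-pair split are invented for the example, and no training loop is shown); it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen, pretrained unimodal latents (e.g., vision and text).
d_img, d_txt, d_gw = 8, 6, 4
z_img = rng.normal(size=(32, d_img))
z_txt = rng.normal(size=(32, d_txt))

# Encoders into and decoders out of the shared global workspace.
# Here they are random matrices; training would adjust them.
E_img, D_img = rng.normal(size=(d_img, d_gw)), rng.normal(size=(d_gw, d_img))
E_txt, D_txt = rng.normal(size=(d_txt, d_gw)), rng.normal(size=(d_gw, d_txt))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Cycle-consistency (self-supervised, needs no paired data):
# encode -> decode should approximate the identity on each modality.
cycle_img = mse(z_img @ E_img @ D_img, z_img)
cycle_txt = mse(z_txt @ E_txt @ D_txt, z_txt)

# Translation term (supervised), applied only to the few matched pairs.
matched = slice(0, 4)  # pretend only 4 of the 32 samples are cross-modally paired
translation = mse(z_img[matched] @ E_img @ D_txt, z_txt[matched])

total_loss = cycle_img + cycle_txt + translation
```

Because the cycle terms need no matched data, most of the gradient signal can come from unpaired samples, which is what lets the matched set stay small.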
3. Gorman KR, Wrightson-Hester A, Landman M, Mansell W. How can virtual reality help to understand consciousness? A thematic analysis of students' experiences in a novel virtual environment. Conscious Cogn 2025;127:103792. PMID: 39644840. DOI: 10.1016/j.concog.2024.103792.
Abstract
Research on consciousness typically presents stimuli and records the responses that follow in order to infer the intervening processes. Virtual reality (VR), by contrast, affords ecological validity by giving users the freedom to continuously control their sensory input across three spatial dimensions via head and eye movement. We designed a virtual world in which the angle of view relates to the information complexity of the sensory input. We assessed its acceptability and feasibility, and explored the first-person experience. Ten university students were immersed in two different novel environments; a semi-structured interview, guided by first-person video footage of the VR experience, then elicited participants' reports. The methodology proved feasible, and a thematic analysis was consistent with Mansell's (2024) control theory perspective and, to a lesser degree, Integrated Information Theory (IIT) and Global Workspace Theory (GWT). We conclude that novel virtual environments provide an accessible, dynamic and valid way to gather evidence regarding different theories of consciousness.
Affiliation(s)
- Keelan R Gorman
- Curtin enAble Institute, School of Population Health, Curtin University, Bentley, Western Australia, Australia.
- Aimee Wrightson-Hester
- Curtin enAble Institute, School of Population Health, Curtin University, Bentley, Western Australia, Australia.
- Michael Landman
- Curtin enAble Institute, School of Population Health, Curtin University, Bentley, Western Australia, Australia.
- Warren Mansell
- Curtin enAble Institute, School of Population Health, Curtin University, Bentley, Western Australia, Australia.
4. Farisco M, Evers K, Changeux JP. Is artificial consciousness achievable? Lessons from the human brain. Neural Netw 2024;180:106714. PMID: 39270349. DOI: 10.1016/j.neunet.2024.106714.
Abstract
Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This analysis reveals several structural and functional features of the human brain that appear key to reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible, and/or that modulate it, is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research could develop partial or alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective. We therefore recommend neuroscience-inspired caution in talking about artificial consciousness: since using the same word "consciousness" for humans and AI becomes ambiguous and potentially misleading, we propose to specify clearly which level and/or type of consciousness AI research aims to develop, as well as what would be common versus different in AI conscious processing compared with human conscious experience.
Affiliation(s)
- Michele Farisco
- Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden; Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy.
- Kathinka Evers
- Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden.
5. Luppi AI, Mediano PAM, Rosas FE, Allanson J, Pickard J, Carhart-Harris RL, Williams GB, Craig MM, Finoia P, Owen AM, Naci L, Menon DK, Bor D, Stamatakis EA. A synergistic workspace for human consciousness revealed by Integrated Information Decomposition. eLife 2024;12:RP88173. PMID: 39022924. PMCID: PMC11257694. DOI: 10.7554/elife.88173.
Abstract
How is the information-processing architecture of the human brain organised, and how does its organisation support consciousness? Here, we combine network science and a rigorous information-theoretic notion of synergy to delineate a 'synergistic global workspace', comprising gateway regions that gather synergistic information from specialised modules across the human brain. This information is then integrated within the workspace and widely distributed via broadcaster regions. Through functional MRI analysis, we show that gateway regions of the synergistic workspace correspond to the human brain's default mode network, whereas broadcasters coincide with the executive control network. We find that loss of consciousness due to general anaesthesia or disorders of consciousness corresponds to diminished ability of the synergistic workspace to integrate information, which is restored upon recovery. Thus, loss of consciousness coincides with a breakdown of information integration within the synergistic workspace of the human brain. This work contributes to conceptual and empirical reconciliation between two prominent scientific theories of consciousness, the Global Neuronal Workspace and Integrated Information Theory, while also advancing our understanding of how the human brain supports consciousness through the synergistic integration of information.
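The notion of synergy invoked here (information carried only by sources jointly) can be illustrated with the classic XOR system: neither input alone tells you anything about the output, yet the pair determines it completely. The pure-Python sketch below illustrates that idea only; it is not the paper's Integrated Information Decomposition pipeline:

```python
from collections import Counter
from math import log2

# Joint distribution of Y = X1 XOR X2 with uniform binary inputs.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

def H(events):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(events)
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def I(xs, ys):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return H(xs) + H(ys) - H(list(zip(xs, ys)))

x1, x2, y = zip(*samples)
joint = I(list(zip(x1, x2)), y)  # I(X1,X2 ; Y): one full bit
indiv = I(x1, y) + I(x2, y)      # each single-source term is zero
synergy = joint - indiv          # all of the information is synergistic
```

For XOR, `joint` is 1 bit while both single-source terms vanish, so the whole bit is synergistic; decomposition frameworks generalise this bookkeeping to redundant and unique components as well.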
Affiliation(s)
- Andrea I Luppi
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Pedro AM Mediano
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Fernando E Rosas
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London, United Kingdom
- Center for Complexity Science, Imperial College London, London, United Kingdom
- Data Science Institute, Imperial College London, London, United Kingdom
- Judith Allanson
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Department of Neurosciences, Cambridge University Hospitals NHS Foundation, Addenbrooke's Hospital, Cambridge, United Kingdom
- John Pickard
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Division of Neurosurgery, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, United Kingdom
- Robin L Carhart-Harris
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London, United Kingdom
- Psychedelics Division - Neuroscape, Department of Neurology, University of California, San Francisco, United States
- Guy B Williams
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Michael M Craig
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Paola Finoia
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Adrian M Owen
- Department of Psychology and Department of Physiology and Pharmacology, The Brain and Mind Institute, University of Western Ontario, London, Canada
- Lorina Naci
- Trinity College Institute of Neuroscience, School of Psychology, Lloyd Building, Trinity College, Dublin, Ireland
- David K Menon
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, United Kingdom
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Emmanuel A Stamatakis
- University Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
6. Dossa RFJ, Arulkumaran K, Juliani A, Sasai S, Kanai R. Design and evaluation of a global workspace agent embodied in a realistic multimodal environment. Front Comput Neurosci 2024;18:1352685. PMID: 38948336. PMCID: PMC11211627. DOI: 10.3389/fncom.2024.1352685.
Abstract
As the apparent intelligence of artificial neural networks (ANNs) advances, they are increasingly likened to the functional networks and information processing capabilities of the human brain. Such comparisons have typically focused on particular modalities, such as vision or language. The next frontier is to use the latest advances in ANNs to design and investigate scalable models of higher-level cognitive processes, such as conscious information access, which have historically lacked concrete and specific hypotheses for scientific evaluation. In this work, we propose and then empirically assess an embodied agent with a structure based on global workspace theory (GWT) as specified in the recently proposed "indicator properties" of consciousness. In contrast to prior works on GWT which utilized single modalities, our agent is trained to navigate 3D environments based on realistic audiovisual inputs. We find that the global workspace architecture performs better and more robustly at smaller working memory sizes, as compared to a standard recurrent architecture. Beyond performance, we perform a series of analyses on the learned representations of our architecture and share findings that point to task complexity and regularization being essential for feature learning and the development of meaningful attentional patterns within the workspace.
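The architectural contrast at stake (a small shared workspace that modalities compete to enter, whose content is then broadcast downstream) can be sketched as an attention bottleneck. The NumPy toy below is illustrative only: the dimensions, the two-slot workspace, and the random weights are invented, and it is not the authors' agent:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 2  # per-modality latent size; number of workspace slots

# Stand-ins for the agent's audiovisual latents at one timestep.
vision = rng.normal(size=d)
audio = rng.normal(size=d)
inputs = np.stack([vision, audio])  # (2, d)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Learned slot queries compete for access to the workspace.
queries = rng.normal(size=(k, d))
attn = softmax(queries @ inputs.T)  # (k, 2): each slot attends over modalities
workspace = attn @ inputs           # (k, d): the bottlenecked shared state

# Broadcast: the same workspace content conditions every downstream head.
policy_head = rng.normal(size=(k * d, 3))
action_logits = workspace.reshape(-1) @ policy_head
```

The bottleneck forces a low-dimensional summary of both modalities, which is one candidate intuition for why a workspace agent might degrade more gracefully at small working-memory sizes than an unstructured recurrent baseline.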
7. Kanai R, Fujisawa I. Toward a universal theory of consciousness. Neurosci Conscious 2024;2024:niae022. PMID: 38826771. PMCID: PMC11141593. DOI: 10.1093/nc/niae022.
Abstract
While falsifiability has been broadly discussed as a desirable property of a theory of consciousness, in this paper we introduce the meta-theoretic concept of "universality" as an additional desirable property. The concept of universality, often assumed in physics, posits that the fundamental laws of nature are consistent, apply equally everywhere in the universe, and remain constant over time. This assumption is crucial in science, acting as a guiding principle for developing and testing theories. When applied to theories of consciousness, universality can be defined as the ability of a theory to determine whether any fully described dynamical system is conscious or non-conscious. Importantly, for a theory to be universal, the determinant of consciousness needs to be defined as an intrinsic property of a system, as opposed to relying on the interpretation of an external observer. The importance of universality originates from the consideration that, given that consciousness is a natural phenomenon, it could in principle manifest in any physical system that satisfies a certain set of conditions, whether biological or non-biological. To date, apart from a few exceptions, most existing theories do not possess this property. Instead, they tend to make predictions as to the neural correlates of consciousness based on interpretations of brain functions, which makes those theories applicable only to brain-centric systems. While current functionalist theories of consciousness tend to be heavily reliant on our interpretations of brain functions, we argue that functionalist theories could be converted into a universal theory by specifying mathematical formulations of the constituent concepts. While neurobiological and functionalist theories retain their utility in practice, we will eventually need a universal theory to fully explain why certain types of systems possess consciousness.
Affiliation(s)
- Ryota Kanai
- President Office, Araya, Inc., Sanpo Sakuma Building, 1-11 Kanda Sakuma-cho, Chiyoda-ku, Tokyo 101-0025, Japan
- Ippei Fujisawa
- President Office, Araya, Inc., Sanpo Sakuma Building, 1-11 Kanda Sakuma-cho, Chiyoda-ku, Tokyo 101-0025, Japan
8. Bayne T, Seth AK, Massimini M, Shepherd J, Cleeremans A, Fleming SM, Malach R, Mattingley JB, Menon DK, Owen AM, Peters MAK, Razi A, Mudrik L. Tests for consciousness in humans and beyond. Trends Cogn Sci 2024;28:454-466. PMID: 38485576. DOI: 10.1016/j.tics.2024.01.010.
Abstract
Which systems/organisms are conscious? New tests for consciousness ('C-tests') are urgently needed. There is persisting uncertainty about when consciousness arises in human development, when it is lost due to neurological disorders and brain injury, and how it is distributed in nonhuman species. This need is amplified by recent and rapid developments in artificial intelligence (AI), neural organoids, and xenobot technology. Although a number of C-tests have been proposed in recent years, most are of limited use, and currently we have no C-tests for many of the populations for which they are most critical. Here, we identify challenges facing any attempt to develop C-tests, propose a multidimensional classification of such tests, and identify strategies that might be used to validate them.
Affiliation(s)
- Tim Bayne
- Department of Philosophy, Monash University, Melbourne, VIC, Australia; Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada.
- Anil K Seth
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Sussex Centre for Consciousness Science and School of Engineering and Informatics, University of Sussex, Brighton, UK.
- Marcello Massimini
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Department of Biomedical and Clinical Science, University of Milan, Milan, Italy; IRCCS Fondazione Don Gnocchi.
- Joshua Shepherd
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Universitat Autònoma de Barcelona, Belleterra, Spain; ICREA, Barcelona, Spain.
- Axel Cleeremans
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Center for Research in Cognition and Neuroscience, ULB Institute of Neuroscience, Université libre de Bruxelles, Brussels, Belgium.
- Stephen M Fleming
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Department of Experimental Psychology, University College London, London, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK.
- Rafael Malach
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel.
- Jason B Mattingley
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Queensland Brain Institute and School of Psychology, The University of Queensland, Brisbane, QLD, Australia.
- David K Menon
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; University of Cambridge, Cambridge, UK.
- Adrian M Owen
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; University of Western Ontario, London, ON, Canada.
- Megan A K Peters
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; University of California, Irvine, Irvine, CA, USA.
- Adeel Razi
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia; Wellcome Centre for Human Neuroimaging, University College London, London, UK.
- Liad Mudrik
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada; School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
9. Atta-Ur-Rahman. Protein Folding and Molecular Basis of Memory: Molecular Vibrations and Quantum Entanglement as Basis of Consciousness. Curr Med Chem 2024;31:258-265. PMID: 37424348. DOI: 10.2174/0929867331666230707123345.
Affiliation(s)
- Atta-Ur-Rahman
- King's College, University of Cambridge, Cambridge CB2 1ST, United Kingdom
- H.E.J. Research Institute of Chemistry, International Center for Chemical and Biological Sciences, University of Karachi, Karachi 75270, Pakistan
10. Aru J, Larkum ME, Shine JM. The feasibility of artificial consciousness through the lens of neuroscience. Trends Neurosci 2023;46:1008-1017. PMID: 37863713. DOI: 10.1016/j.tins.2023.09.009.
Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.
Affiliation(s)
- Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia.
- Matthew E Larkum
- Institute of Biology, Humboldt University Berlin, Berlin, Germany.
- James M Shine
- Brain and Mind Center, The University of Sydney, Sydney, Australia.
11. Saito Y, Kamagata K, Akashi T, Wada A, Shimoji K, Hori M, Kuwabara M, Kanai R, Aoki S. Review of Performance Improvement of a Noninvasive Brain-computer Interface in Communication and Motor Control for Clinical Applications. Juntendo Iji Zasshi (Juntendo Medical Journal) 2023;69:319-326. PMID: 38846633. PMCID: PMC10984355. DOI: 10.14789/jmj.jmj23-0011-r.
Abstract
Brain-computer interfaces (BCIs) enable direct communication between the brain and a computer or other external devices. They can extend a person's degrees of freedom by either strengthening or substituting human peripheral working capacity, and their potential clinical applications include rehabilitation, affective computing, communication, and control. Over the last decade, noninvasive BCI systems based on, for example, the electroencephalogram (EEG) have progressed from simple statistical models to deep learning models, with performance improving alongside growing computational power. However, numerous challenges to the clinical use of BCI systems remain, notably the lack of data sufficient to learn the features needed for robust and reliable classification. Compared with fields such as computer vision and speech recognition, training samples in the medical BCI field are limited, because the target patients face more difficulty generating EEG data than healthy controls. Because deep learning models incorporate many parameters, they require considerably more data than conventional methods and have therefore not been thoroughly leveraged in medical BCI. This study summarizes the state-of-the-art progress of BCI systems over the last decade, highlighting critical challenges and solutions.
Affiliation(s)
- Koji Kamagata (corresponding author)
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo 113-8421, Japan. Tel: +81-3-5802-1230; Fax: +81-3-3816-0958.
12. Nikolić D. Where is the mind within the brain? Transient selection of subnetworks by metabotropic receptors and G protein-gated ion channels. Comput Biol Chem 2023;103:107820. PMID: 36724606. DOI: 10.1016/j.compbiolchem.2023.107820.
Abstract
Perhaps the most important question posed by brain research is how the brain gives rise to the mind. To answer it, we have primarily relied on the connectionist paradigm: the brain's entire knowledge and thinking skills are thought to be stored in its connections, and mental operations are executed by network computations. I propose here an alternative paradigm: our knowledge and skills are stored in metabotropic receptors (MRs) and G protein-gated ion channels (GPGICs), and mental operations are executed by the functions of MRs and GPGICs. Because GPGICs have the capacity to close or open branches of dendritic trees and axon terminals, their states transiently re-route neural activity throughout the nervous system. First, MRs detect ligands that signal the need to activate GPGICs. Next, GPGICs transiently select a subnetwork within the brain. The process of selecting this new subnetwork is what constitutes a mental operation, be it in the form of directed attention, perception or decision-making. Synaptic connections and network computations play only a secondary role, supporting MRs and GPGICs. According to this new paradigm, the mind emerges within the brain as the function of MRs and GPGICs, whose primary role is to continually select the pathways over which neural activity will be allowed to pass. It is argued that MRs and GPGICs solve the scaling problem of intelligence from which the connectionist paradigm suffers.
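The proposed mechanism, a fixed synaptic matrix over which receptor-controlled gates transiently select a subnetwork, can be caricatured in a few lines. The NumPy sketch below is purely illustrative; the five-neuron network, weights, and gate pattern are invented:

```python
import numpy as np

# Fixed synaptic connectivity of a toy five-neuron network.
W = np.array([
    [0.0, 0.5, 0.2, 0.0, 0.1],
    [0.4, 0.0, 0.3, 0.2, 0.0],
    [0.1, 0.2, 0.0, 0.5, 0.3],
    [0.0, 0.3, 0.1, 0.0, 0.4],
    [0.2, 0.0, 0.4, 0.1, 0.0],
])

# Receptor-driven gates: 1 = branch conducts, 0 = channel has closed it.
gates = np.array([1.0, 1.0, 0.0, 1.0, 0.0])

# Gating a neuron's inputs and outputs selects a transient subnetwork.
W_eff = W * np.outer(gates, gates)

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # drive neuron 0
for _ in range(3):
    x = np.tanh(W_eff @ x)
# Activity propagates only within the selected subnetwork;
# gated-off neurons 2 and 4 stay silent throughout.
```

The same fixed `W` supports many different effective circuits: changing `gates` re-routes activity without touching a single synaptic weight, which is the sense in which subnetwork selection, rather than connectivity change, would do the computational work.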
Affiliation(s)
- Danko Nikolić
- Department of Psychiatry, Psychosomatic Medicine and Psychotherapy, University Hospital Frankfurt, Germany; evocenta GmbH, Germany; Robots Go Mental UG, Germany.
13. Lima B, Florentino MM, Fiorani M, Soares JGM, Schmidt KE, Neuenschwander S, Baron J, Gattass R. Cortical maps as a fundamental neural substrate for visual representation. Prog Neurobiol 2023;224:102424. PMID: 36828036. DOI: 10.1016/j.pneurobio.2023.102424.
Abstract
Visual perception is the product of serial hierarchical processing, parallel processing, and remapping on a dynamic network involving several topographically organized cortical visual areas. Here, we will focus on the topographical organization of cortical areas and the different kinds of visual maps found in the primate brain. We will interpret our findings in light of a broader representational framework for perception. Based on neurophysiological data, our results do not support the notion that vision can be explained by a strict representational model, where the objective visual world is faithfully represented in our brain. On the contrary, we find strong evidence that vision is an active and constructive process from the very initial stages taking place in the eye and from the very initial stages of our development. A constructive interplay between perceptual and motor systems (e.g., during saccadic eye movements) is actively learnt from early infancy and ultimately provides our fluid stable visual perception of the world.
Affiliation(s)
- Bruss Lima
- Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Maria M Florentino
- Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Mario Fiorani
- Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Juliana G M Soares
- Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Kerstin E Schmidt
- Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN 59056-450, Brazil
- Sergio Neuenschwander
- Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN 59056-450, Brazil
- Jerome Baron
- Departamento de Fisiologia e Biofísica, Instituto de Ciências Biológicas, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, Brazil
- Ricardo Gattass
- Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
14. St. Clair R, Coward LA, Schneider S. Leveraging conscious and nonconscious learning for efficient AI. Front Comput Neurosci 2023;17:1090126. PMID: 37034440. PMCID: PMC10076654. DOI: 10.3389/fncom.2023.1090126.
Abstract
Varying interpretations of the literature on the neural basis of learning have in part led to disagreements about how consciousness arises, and artificial learning model design has struggled to replicate intelligence as it occurs in the human brain. Here, we present a novel learning model, which we term the "Recommendation Architecture (RA) Model" after prior theoretical work by Coward, using a dual-learning approach featuring both consequence feedback and non-consequence feedback. The RA model is tested on a categorical learning task in which no two inputs are the same throughout training and/or testing. We compare it to three consequence-feedback-only models based on backpropagation and reinforcement learning. Results indicate that the RA model learns novelty more efficiently and can accurately return to prior learning after new learning with less computational resource expenditure. The final results show that treating consequence feedback as interpretation, not creation, of cortical activity yields a learning style more similar to human learning in terms of resource efficiency. Stable information meanings underlie conscious experiences. The work provided here attempts to link the neural basis of nonconscious and conscious learning while providing early results for a learning protocol more similar to the human brain's than is currently available.
Collapse
Affiliation(s)
- Rachel St. Clair
- Simuli Inc., Delray Beach, FL, United States
- Correspondence: Rachel St. Clair
| | - L. Andrew Coward
- College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia
| | - Susan Schneider
- Center for Future Mind, College of Arts and Letters, Florida Atlantic University, Boca Raton, FL, United States
| |
Collapse
|
15
|
Wang J, Chen Y, Dong Z, Gao M, Lin H, Miao Q. SABV-Depth: A biologically inspired deep learning network for monocular depth estimation. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
16
|
Computational Thinking Training and Deep Learning Evaluation Model Construction Based on Scratch Modular Programming Course. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:3760957. [PMID: 36873382 PMCID: PMC9977527 DOI: 10.1155/2023/3760957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 08/11/2022] [Accepted: 08/17/2022] [Indexed: 02/25/2023]
Abstract
To improve the algorithmic, critical-thinking, and problem-solving dimensions of computational thinking (CT) in students' programming courses, a programming teaching model is first constructed based on a Scratch modular programming course. Second, the design process of the teaching model and the problem-solving model of visual programming are studied. Finally, a deep learning (DL) evaluation model is constructed, and the effectiveness of the designed teaching model is analyzed and evaluated. The paired-samples t-test for CT gives t = -2.08, P < 0.05: the two test results differ significantly, so the designed teaching model produces measurable changes in students' CT abilities. The results verify, on an experimental basis, the effectiveness of the teaching model based on Scratch modular programming. Post-test values for the dimensions of algorithmic thinking, critical thinking, collaborative thinking, and problem-solving thinking are all higher than the pretest values, with individual differences; the P values are all below 0.05, indicating that the CT training in the designed teaching model improves students' algorithmic, critical-thinking, collaborative-thinking, and problem-solving abilities. Post-test values of cognitive load are all lower than the pretest values, indicating that the model has a positive effect on reducing cognitive load, with a significant difference between pretest and post-test. In the creative-thinking dimension, the P value is 0.218, and there is no significant difference in the creativity and self-efficacy dimensions. The DL evaluation shows that the mean of the knowledge-and-skills dimensions exceeds 3.5, so college students reach a certain standard level in knowledge and skills. The mean of the process-and-method dimensions is about 3.1, and the mean for emotional attitudes and values is 2.77; both the process-and-method dimension and the emotional-attitudes-and-values dimension need to be strengthened. The DL level of college students is relatively low and should be improved in terms of knowledge and skills, processes and methods, and emotional attitudes and values. This research compensates, to a certain extent, for the shortcomings of traditional programming and design software and offers a reference for researchers and teachers carrying out programming teaching practice.
Collapse
|
17
|
Yurchenko SB. From the origins to the stream of consciousness and its neural correlates. Front Integr Neurosci 2022; 16:928978. [PMID: 36407293 PMCID: PMC9672924 DOI: 10.3389/fnint.2022.928978] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 10/12/2022] [Indexed: 09/22/2023] Open
Abstract
There are now dozens of very different theories of consciousness, each somehow contributing to our understanding of its nature. The science of consciousness therefore needs not new theories but a general framework that integrates insights from existing ones without becoming a still-born "Frankenstein" theory. First, the framework must operate explicitly on the stream of consciousness, not on a static description of it. Second, this dynamical account must also be placed on the evolutionary timeline to explain the origins of consciousness. The Cognitive Evolution Theory (CET), outlined here, proposes such a framework. It starts from the assumption that brains primarily evolved as volitional subsystems of organisms, inherited from the primitive (fast and random) reflexes of the simplest neural networks, and only later came to resemble error-minimizing prediction machines. CET adopts the tools of critical dynamics to account for metastability, scale-free avalanches, and self-organization, all of which are intrinsic to brain dynamics. This formalizes the stream of consciousness as a discrete (transitive, irreflexive) chain of momentary states derived from critical brain dynamics at points of phase transition and then mapped onto a state space as the neural correlates of particular conscious states. The continuous/discrete dichotomy appears naturally between brain dynamics at the causal level and conscious states at the phenomenal level, each volitionally triggered from arousal centers of the brainstem and cognitively modulated by thalamocortical systems. Their objective observables can be entropy-based complexity measures, reflecting the transient level or quantity of consciousness at a given moment.
Collapse
|
18
|
Volzhenin K, Changeux JP, Dumas G. Multilevel development of cognitive abilities in an artificial neural network. Proc Natl Acad Sci U S A 2022; 119:e2201304119. [PMID: 36122214 PMCID: PMC9522351 DOI: 10.1073/pnas.2201304119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 08/16/2022] [Indexed: 11/18/2022] Open
Abstract
Several neuronal mechanisms have been proposed to account for the formation of cognitive abilities through postnatal interactions with the physical and sociocultural environment. Here, we introduce a three-level computational model of information processing and the acquisition of cognitive abilities. We propose minimal architectural requirements for building these levels and examine how the model's parameters affect their performance and relationships. The first, sensorimotor level handles local nonconscious processing, here during a visual classification task. The second, cognitive level globally integrates information from multiple local processors via long-range connections and synthesizes it in a global but still nonconscious manner. The third and cognitively highest level handles information globally and consciously. It is based on the global neuronal workspace (GNW) theory and is referred to as the conscious level. We use the trace and delay conditioning tasks to challenge the second and third levels, respectively. Results first highlight the necessity of epigenesis, through the selection and stabilization of synapses at both local and global scales, for the network to solve the first two tasks. At the global scale, dopamine appears necessary to provide proper credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input. Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at both local and global scales, a balanced excitatory/inhibitory ratio increases performance. We discuss the plausibility of the model in both neurodevelopmental and artificial-intelligence terms.
Collapse
Affiliation(s)
- Konstantin Volzhenin
- Neuroscience Department, Institut Pasteur, 75015 Paris, France
- Laboratory of Computational and Quantitative Biology, Sorbonne Université, 75005 Paris, France
| | | | - Guillaume Dumas
- Neuroscience Department, Institut Pasteur, 75015 Paris, France
- Mila - Quebec Artificial Intelligence Institute, Centre Hospitalier Universitaire Sainte-Justine Research Center, Department of Psychiatry, Université de Montréal, Montréal, QC H3T 1C5, Canada
| |
Collapse
|
19
|
Wahbeh H, Radin D, Cannard C, Delorme A. What if consciousness is not an emergent property of the brain? Observational and empirical challenges to materialistic models. Front Psychol 2022; 13:955594. [PMID: 36160593 PMCID: PMC9490228 DOI: 10.3389/fpsyg.2022.955594] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Accepted: 08/19/2022] [Indexed: 12/03/2022] Open
Abstract
The nature of consciousness is considered one of science's most perplexing and persistent mysteries. We all know the subjective experience of consciousness, but where does it arise? What is its purpose? What are its full capacities? The assumption within today's neuroscience is that all aspects of consciousness arise solely from interactions among neurons in the brain. However, the origin and mechanisms of qualia (i.e., subjective or phenomenological experience) are not understood. David Chalmers coined the term "the hard problem" to describe the difficulties in elucidating the origins of subjectivity from the point of view of reductive materialism. We propose that the hard problem arises because one or more assumptions within a materialistic worldview are either wrong or incomplete. If consciousness entails more than the activity of neurons, then we can contemplate new ways of thinking about the hard problem. This review examines phenomena that apparently contradict the notion that consciousness is exclusively dependent on brain activity, including phenomena where consciousness appears to extend beyond the physical brain and body in both space and time. The mechanisms underlying these "non-local" properties are vaguely suggestive of quantum entanglement in physics, but how such effects might manifest remains highly speculative. The existence of these non-local effects appears to support the proposal that post-materialistic models of consciousness may be required to break the conceptual impasse presented by the hard problem of consciousness.
Collapse
Affiliation(s)
- Helané Wahbeh
- Research Department, Institute of Noetic Sciences, Petaluma, CA, United States
| | - Dean Radin
- Research Department, Institute of Noetic Sciences, Petaluma, CA, United States
| | - Cedric Cannard
- Research Department, Institute of Noetic Sciences, Petaluma, CA, United States
| | - Arnaud Delorme
- Research Department, Institute of Noetic Sciences, Petaluma, CA, United States
- Swartz Center for Computational Neuroscience, Institute of Neural Computation, University of California, San Diego, San Diego, CA, United States
| |
Collapse
|
20
|
Abstract
Rapid advances in neuroscience have provided remarkable breakthroughs in understanding the brain on many fronts. Although promising, the role of these advancements in solving the problem of consciousness is still unclear. Based on technologies conceivably within the grasp of modern neuroscience, we discuss a thought experiment in which neural activity, in the form of action potentials, is initially recorded from all the neurons in a participant's brain during a conscious experience and then played back into the same neurons. We consider whether this artificial replay can reconstitute a conscious experience. The possible outcomes of this experiment unravel hidden costs and pitfalls in understanding consciousness from the neurosciences' perspective and challenge the conventional wisdom that causally links action potentials and consciousness.
Collapse
Affiliation(s)
- Albert Gidon
- Institute of Biology, Humboldt University Berlin, Berlin, Germany
| | - Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia
| | - Matthew Evan Larkum
- Institute of Biology, Humboldt University Berlin, Berlin, Germany
- Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin, Germany
| |
Collapse
|
21
|
Blum L, Blum M. A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine. Proc Natl Acad Sci U S A 2022; 119:e2115934119. [PMID: 35594400 PMCID: PMC9171770 DOI: 10.1073/pnas.2115934119] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 03/22/2022] [Indexed: 11/25/2022] Open
Abstract
This paper examines consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature.
Collapse
Affiliation(s)
- Lenore Blum
- Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213
- Electrical Engineering and Computer Science (EECS), University of California, Berkeley, CA 94720
| | - Manuel Blum
- Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213
- Electrical Engineering and Computer Science (EECS), University of California, Berkeley, CA 94720
| |
Collapse
|
22
|
Pepperell R. Does Machine Understanding Require Consciousness? Front Syst Neurosci 2022; 16:788486. [PMID: 35664685 PMCID: PMC9159796 DOI: 10.3389/fnsys.2022.788486] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Accepted: 04/12/2022] [Indexed: 11/24/2022] Open
Abstract
This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, at least in part, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines (machine understanding) may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.
Collapse
|
23
|
Abstract
Recent years have seen a blossoming of theories about the biological and physical basis of consciousness. Good theories guide empirical research, allowing us to interpret data, develop new experimental techniques and expand our capacity to manipulate the phenomenon of interest. Indeed, it is only when couched in terms of a theory that empirical discoveries can ultimately deliver a satisfying understanding of a phenomenon. However, in the case of consciousness, it is unclear how current theories relate to each other, or whether they can be empirically distinguished. To clarify this complicated landscape, we review four prominent theoretical approaches to consciousness: higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory. We describe the key characteristics of each approach by identifying which aspects of consciousness they propose to explain, what their neurobiological commitments are and what empirical data are adduced in their support. We consider how some prominent empirical debates might distinguish among these theories, and we outline three ways in which theories need to be developed to deliver a mature regimen of theory-testing in the neuroscience of consciousness. There are good reasons to think that the iterative development, testing and comparison of theories of consciousness will lead to a deeper understanding of this most profound of mysteries.
Collapse
|
24
|
Caccavale R, Finzi A. A Robotic Cognitive Control Framework for Collaborative Task Execution and Learning. Top Cogn Sci 2021; 14:327-343. [PMID: 34826350 DOI: 10.1111/tops.12587] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/30/2022]
Abstract
In social and service robotics, complex collaborative tasks are expected to be executed while interacting with humans in a natural and fluent manner. In this scenario, the robotic system is typically provided with structured tasks to accomplish, but must also continuously adapt to human activities, commands, and interventions. We propose to tackle these issues by exploiting the concept of cognitive control, introduced in cognitive psychology and neuroscience to describe the executive mechanisms needed to support adaptive responses and complex goal-directed behaviors. Specifically, we rely on a supervisory attentional system to orchestrate the execution of hierarchically organized robotic behaviors. This paradigm seems particularly effective not only for flexible plan execution but also for human-robot interaction, because it directly provides the attention mechanisms considered pivotal for implicit, non-verbal human-human communication. Following this approach, we are currently developing a robotic cognitive control framework enabling collaborative task execution and incremental task learning. In this paper, we provide a uniform overview of the framework, illustrating its main features and discussing the potential of the supervisory attentional system paradigm in different scenarios where humans and robots have to collaborate in learning and executing everyday activities.
Collapse
Affiliation(s)
- Riccardo Caccavale
- Dipartimento di Ingegneria Elettrica e Tecnologie dell'Informazione (DIETI), Università degli Studi di Napoli "Federico II"
| | - Alberto Finzi
- Dipartimento di Ingegneria Elettrica e Tecnologie dell'Informazione (DIETI), Università degli Studi di Napoli "Federico II"
| |
Collapse
|
25
|
Langdon A, Botvinick M, Nakahara H, Tanaka K, Matsumoto M, Kanai R. Meta-learning, social cognition and consciousness in brains and machines. Neural Netw 2021; 145:80-89. [PMID: 34735893 DOI: 10.1016/j.neunet.2021.10.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 09/20/2021] [Accepted: 10/01/2021] [Indexed: 12/11/2022]
Abstract
The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and to use existing models and knowledge to efficiently solve new tasks. This capability is important for making existing AI systems more adaptive and flexible in solving new tasks. Since this is one of the areas where a gap remains between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we then discuss the similarities and differences between social cognition and meta-learning, and conclude with speculations on the potential links between the intelligence endowed by model-based RL and consciousness. For future work, we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
Collapse
Affiliation(s)
- Angela Langdon
- Princeton Neuroscience Institute, Princeton University, USA
| | - Matthew Botvinick
- DeepMind, London, UK; Gatsby Computational Neuroscience Unit, University College London, London, UK
| | | | - Keiji Tanaka
- RIKEN Center for Brain Science, Wako, Saitama, Japan
| | - Masayuki Matsumoto
- Division of Biomedical Science, Faculty of Medicine, University of Tsukuba, Ibaraki, Japan; Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan; Transborder Medical Research Center, University of Tsukuba, Ibaraki, Japan
| | | |
Collapse
|