1
Colombatto C, Birch J, Fleming SM. The influence of mental state attributions on trust in large language models. Commun Psychol 2025; 3:84. [PMID: 40415069] [DOI: 10.1038/s44271-025-00262-1]
Abstract
Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for 'experience' (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of 'intelligence' (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users' attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs, with different aspects of mental state attribution predicting people's trust in these systems.
Affiliation(s)
- Clara Colombatto
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada.
- Jonathan Birch
- Department of Philosophy, Logic and Scientific Method, and Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science, London, UK
- Stephen M Fleming
- Department of Experimental Psychology and Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
2
Laurenzi M, Raffone A, Gallagher S, Chiarella SG. A multidimensional approach to the self in non-human animals through the Pattern Theory of Self. Front Psychol 2025; 16:1561420. [PMID: 40271366] [PMCID: PMC12014599] [DOI: 10.3389/fpsyg.2025.1561420]
Abstract
In recent decades, research on animal consciousness has advanced significantly, fueled by interdisciplinary contributions. However, a critical dimension of animal experience remains underexplored: the self. While traditionally linked to human studies, research focused on the self in animals has often been framed dichotomously, distinguishing low-level, bodily, and affective aspects from high-level, cognitive, and conceptual dimensions. Emerging evidence suggests a broader spectrum of self-related features across species, yet current theoretical approaches often reduce the self to a derivative aspect of consciousness or prioritize narrow high-level dimensions, such as self-recognition or metacognition. To address this gap, we propose an integrated framework grounded in the Pattern Theory of Self (PTS). PTS conceptualizes the self as a dynamic, multidimensional construct arising from a matrix of dimensions, ranging from bodily and affective to intersubjective and normative aspects. We propose adopting this multidimensional perspective for the study of the self in animals, by emphasizing the graded nature of the self within each dimension and the non-hierarchical organization across dimensions. In this sense, PTS may accommodate both inter- and intra-species variability, enabling researchers to investigate the self across diverse organisms without relying on anthropocentric biases. We propose that, by integrating this framework with insights from comparative psychology, neuroscience, and ethology, the application of PTS to animals can show how the self emerges in varying degrees and forms, shaped by ecological niches and adaptive demands.
Affiliation(s)
- Matteo Laurenzi
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Antonino Raffone
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Shaun Gallagher
- Department of Philosophy, University of Memphis, Memphis, TN, United States
- School of Liberal Arts (SOLA), University of Wollongong, Wollongong, NSW, Australia
- Salvatore G. Chiarella
- School of Liberal Arts (SOLA), University of Wollongong, Wollongong, NSW, Australia
- International School for Advanced Studies (SISSA), Trieste, Italy
3
Evers K, Farisco M, Chatila R, Earp BD, Freire IT, Hamker F, Nemeth E, Verschure PFMJ, Khamassi M. Preliminaries to artificial consciousness: A multidimensional heuristic approach. Phys Life Rev 2025; 52:180-193. [PMID: 39787683] [DOI: 10.1016/j.plrev.2025.01.002]
Abstract
The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon, with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach to artificial consciousness research by avoiding binary thinking (e.g., conscious vs. non-conscious) and offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation. By breaking down the conceptual intricacies of consciousness and aligning them with practical research goals, this paper lays the groundwork for a robust strategy to advance the scientific and technical understanding of artificial consciousness.
Affiliation(s)
- K Evers
- Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden
- M Farisco
- Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden; Biogem Molecular Biology and Genetics Research Institute, Ariano Irpino, AV, Italy.
- R Chatila
- Institute of Intelligent Systems and Robotics, CNRS, Sorbonne University, Paris, France
- B D Earp
- Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK; Centre for Biomedical Ethics, National University of Singapore, Singapore
- I T Freire
- Institute of Intelligent Systems and Robotics, CNRS, Sorbonne University, Paris, France
- F Hamker
- Artificial Intelligence, Computer Science, Chemnitz University of Technology, Germany
- E Nemeth
- Institute of Intelligent Systems and Robotics, CNRS, Sorbonne University, Paris, France
- P F M J Verschure
- Alicante Institute of Neuroscience & Department of Health Psychology, Universidad Miguel Hernandez, Spain
- M Khamassi
- Institute of Intelligent Systems and Robotics, CNRS, Sorbonne University, Paris, France
4
Miller WB, Baluška F, Reber AS, Slijepčević P. Biological mechanisms contradict AI consciousness: The spaces between the notes. Biosystems 2025; 247:105387. [PMID: 39736318] [DOI: 10.1016/j.biosystems.2024.105387]
Abstract
The presumption that experiential consciousness requires a nervous system and brain has been central to the debate on the possibility of developing a conscious form of artificial intelligence (AI). The likelihood of future AI consciousness, or of devising tools to assess its presence, has focused on how AI might mimic brain-centered activities. Two general assumptions currently prevail: AI consciousness is primarily an issue of functional information density and integration, and no substantive technical barriers exist to prevent its achievement. When the cognitive process that underpins consciousness is stipulated as a cellular attribute, these premises are directly contradicted. The innate characteristics of biological information, and how that information is managed by individual cells, have no parallels within machine-based AI systems. Any assertion of computer-based AI consciousness represents a fundamental misapprehension of these crucial differences.
Affiliation(s)
- František Baluška
- Institute of Cellular and Molecular Botany, University of Bonn, Germany.
- Arthur S Reber
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada.
- Predrag Slijepčević
- Department of Life Sciences, College of Health, Medicine and Life Sciences, University of Brunel, UK.
5
Farisco M, Evers K, Changeux JP. Is artificial consciousness achievable? Lessons from the human brain. Neural Netw 2024; 180:106714. [PMID: 39270349] [DOI: 10.1016/j.neunet.2024.106714]
Abstract
Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This analysis reveals several structural and functional features of the human brain that appear key to reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible, or that modulate it, is a potentially promising strategy towards developing conscious AI. Nor can it be theoretically excluded that AI research could develop partial or alternative forms of consciousness that are qualitatively different from the human form, and that may be more or less sophisticated depending on one's perspective. We therefore recommend neuroscience-inspired caution in talking about artificial consciousness: since using the same word "consciousness" for humans and AI is ambiguous and potentially misleading, we propose to specify clearly which level and/or type of consciousness AI research aims to develop, as well as what AI conscious processing would have in common with, and how it would differ from, human conscious experience.
Affiliation(s)
- Michele Farisco
- Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden; Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy.
- Kathinka Evers
- Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
6
Di Paolo LD, White B, Guénin-Carlut A, Constant A, Clark A. Active inference goes to school: the importance of active learning in the age of large language models. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230148. [PMID: 39155715] [PMCID: PMC11391319] [DOI: 10.1098/rstb.2023.0148]
Abstract
Human learning essentially involves embodied interactions with the material world. But our worlds now include increasing numbers of powerful and (apparently) disembodied generative artificial intelligence (AI) systems. In what follows we ask how best to understand these new (somewhat 'alien', because of their disembodied nature) resources and how to incorporate them into our educational practices. We focus on methodologies that encourage exploration and embodied interactions with 'prepared' material environments, such as the carefully organized settings of Montessori education. Using the active inference framework, we approach our questions by thinking about human learning as epistemic foraging and prediction error minimization. We end by arguing that generative AI should figure naturally as new elements in prepared learning environments, by facilitating sequences of precise prediction error that enable trajectories of self-correction. In these ways, we anticipate new synergies between (apparently) disembodied and (essentially) embodied forms of intelligence. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.
Affiliation(s)
- Laura Desirèe Di Paolo
- Department of Engineering and Informatics, The University of Sussex, Brighton, UK
- School of Psychology, Children & Technology Lab, The University of Sussex, Falmer (Brighton), UK
- Ben White
- Department of Philosophy, The University of Sussex, Sussex, UK
- Avel Guénin-Carlut
- Department of Engineering and Informatics, The University of Sussex, Brighton, UK
- Axel Constant
- Department of Engineering and Informatics, The University of Sussex, Brighton, UK
- Andy Clark
- Department of Engineering and Informatics, The University of Sussex, Brighton, UK
- Department of Philosophy, The University of Sussex, Sussex, UK
- Department of Philosophy, Macquarie University, Sydney, New South Wales, Australia
7
Ben-Ami Bartal I. The complex affective and cognitive capacities of rats. Science 2024; 385:1298-1305. [PMID: 39298607] [DOI: 10.1126/science.adq6217]
Abstract
For several decades, although studies of rat physiology and behavior have abounded, research on rat emotions has been limited in scope to fear, anxiety, and pain. Converging evidence for the capacity of many species to share others' affective states has emerged, sparking interest in the empathic capacities of rats. Recent research has demonstrated that rats are a highly cooperative species and are motivated by others' distress to take prosocial actions, such as opening a door or pulling a chain to release trapped conspecifics. Studies of rat affect, cognition, and neural function provide compelling evidence that rats have some capacity to represent others' needs and to act instrumentally to improve their well-being, and are thus capable of forms of targeted helping. Rats' complex abilities raise the importance of integrating new measures of rat well-being into scientific research.
Affiliation(s)
- Inbal Ben-Ami Bartal
- School of Psychological Sciences, Tel-Aviv University, Tel Aviv, 6997801, Israel
- Sagol School of Neuroscience, Tel-Aviv University, Tel Aviv, 6997801, Israel
8
Wiese W. Artificial consciousness: a perspective from the free energy principle. Philos Stud 2024; 181:1947-1970. [DOI: 10.1007/s11098-024-02182-y]
Abstract
Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston's free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a classical (von Neumann) architecture. I argue that at least one of these properties, viz. a certain kind of causal flow, can be used to draw a distinction between systems that merely simulate, and those that actually replicate, consciousness.
9
Colombatto C, Fleming SM. Folk psychological attributions of consciousness to large language models. Neurosci Conscious 2024; 2024:niae013. [PMID: 38618488] [PMCID: PMC11008499] [DOI: 10.1093/nc/niae013]
Abstract
Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations ('phenomenal consciousness'). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality, but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions, with potential implications for the legal and ethical status of AI.
Affiliation(s)
- Clara Colombatto
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, United Kingdom
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo ON N2L 3G1, Canada
- Stephen M Fleming
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, United Kingdom
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, 10-12 Russell Square, London WC1B 5EH, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, 12 Queen Square, London WC1N 3AR, United Kingdom