1
Schneider KA. An entropy model of decision uncertainty. J Math Psychol 2025; 125:102919. PMID: 40161011; PMCID: PMC11951473; DOI: 10.1016/j.jmp.2025.102919.
Abstract
Studying metacognition, the introspection of one's own decisions, can provide insights into the mechanisms underlying the decision. Here we show that observers' uncertainty about their decisions incorporates both the entropy of the stimuli and the entropy of their response probability across the psychometric function. Being able to describe uncertainty data with a functional form permits the measurement of internal parameters not measurable from the decision responses alone. To test and demonstrate the utility of this novel model, we measured uncertainty in 11 participants as they judged the relative contrast appearance of two stimuli in several experiments employing implicit bias or attentional cues. The entropy model enabled an otherwise intractable quantitative analysis of participants' uncertainty, which in one case distinguished two comparative judgments that produced nearly identical psychometric functions. In contrast, comparative and equality judgments with different behavioral reports yielded uncertainty reports that were not significantly different. The entropy model successfully accounted for uncertainty in these two types of decisions, which produced differently shaped psychometric functions, and the entropy contribution from the stimuli, which were identical across experiments, was consistent. An observer's uncertainty can therefore be measured as the total entropy of the inputs and outputs of the stimulus-response system, i.e., the entropy of the stimuli plus the entropy of the observer's responses.
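A minimal sketch (Python, not taken from the paper) of the total-entropy idea summarized above: uncertainty is the entropy of the stimulus distribution plus the entropy of the response probability read off the psychometric function. The logistic psychometric function, its slope, and the two-level stimulus set are illustrative assumptions.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli variable with probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def psychometric(contrast_diff, slope=4.0):
    """Illustrative logistic psychometric function: P('test higher') vs. contrast difference."""
    return 1.0 / (1.0 + np.exp(-slope * contrast_diff))

def decision_uncertainty(contrast_diff, p_stimulus=0.5):
    """Total entropy = stimulus entropy + response entropy at this stimulus level."""
    h_stimulus = binary_entropy(p_stimulus)                    # entropy of the (binary) stimulus set
    h_response = binary_entropy(psychometric(contrast_diff))   # entropy of the response probability
    return h_stimulus + h_response

if __name__ == "__main__":
    for d in (-0.5, -0.1, 0.0, 0.1, 0.5):
        print(f"contrast diff {d:+.1f}: uncertainty {decision_uncertainty(d):.3f} bits")
```

Uncertainty peaks near the point of subjective equality, where the response entropy is maximal, and never falls below the stimulus entropy.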
Affiliation(s)
- Keith A. Schneider
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychological & Brain Sciences, University of Delaware, Newark, DE 19716
2
Mamassian P, de Gardelle V. The confidence-noise confidence-boost (CNCB) model of confidence rating data. PLoS Comput Biol 2025; 21:e1012451. PMID: 40258078; PMCID: PMC12043244; DOI: 10.1371/journal.pcbi.1012451.
Abstract
Over the last decade, different approaches have been proposed to interpret confidence rating judgments obtained after perceptual decisions. One very popular approach is to compute meta-d', a global measure of the sensitivity to discriminate the confidence rating distributions for correct and incorrect perceptual decisions. Here, we propose a generative model of confidence based on two main parameters, confidence noise and confidence boost, that we call the CNCB model. Confidence noise impairs confidence judgments above and beyond how sensory noise affects perceptual sensitivity. The confidence boost parameter reflects whether confidence uses the same information that was used for the perceptual decision, or some new information. The CNCB model offers a principled way to estimate a confidence efficiency measure that is a theory-driven alternative to the popular M-ratio. We then describe two scenarios for estimating the confidence boost parameter: one where the experiment uses more than two confidence levels, and one where the experiment uses more than two stimulus strengths. We also extend the model to experiments using continuous confidence ratings and describe how the model can be fitted without binning these ratings. The continuous confidence model includes a non-linear mapping between objective and subjective confidence probabilities that can be estimated. Altogether, the CNCB model should help interpret confidence rating data at a deeper level. This manuscript is accompanied by a toolbox that allows researchers to estimate all the parameters of the CNCB model from confidence rating datasets. Examples of re-analyses of previous datasets are provided in S1 File.
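A schematic simulation (not the authors' toolbox) of the two ingredients named in the abstract: a confidence-boost parameter that mixes the evidence used for the perceptual choice with fresh evidence, and confidence noise added on top. The Gaussian forms, the mixing rule, and the parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cncb(n_trials=10_000, d_prime=1.0, conf_noise=0.5, conf_boost=0.3):
    """Toy generative scheme: decision from sensory evidence; confidence from a
    boosted mixture of that evidence and an independent sample, plus extra noise."""
    signal = rng.choice([-0.5, 0.5], size=n_trials) * d_prime    # stimulus category
    sensory = signal + rng.normal(0, 1, n_trials)                # evidence used for the choice
    choice = np.sign(sensory)
    extra = signal + rng.normal(0, 1, n_trials)                  # fresh evidence available only to confidence
    conf_evidence = (1 - conf_boost) * sensory + conf_boost * extra
    conf_evidence += rng.normal(0, conf_noise, n_trials)         # confidence noise
    confidence = np.abs(conf_evidence)                           # unsigned evidence as a graded confidence signal
    correct = choice == np.sign(signal)
    return correct, confidence

correct, confidence = simulate_cncb()
print("mean confidence | correct:  ", round(confidence[correct].mean(), 3))
print("mean confidence | incorrect:", round(confidence[~correct].mean(), 3))
```

Raising conf_noise shrinks the confidence gap between correct and incorrect trials, whereas raising conf_boost widens it, which is the intuition behind a boost/noise-based efficiency measure.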
Affiliation(s)
- Pascal Mamassian
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
3
Nagamura H, Onishi H, Kobayasi KI, Yuki S. Implicit manifestation of prospective metacognition in betting choices enhances its efficiency compared to explicit expression. Front Hum Neurosci 2025; 19:1490530. PMID: 40110534; PMCID: PMC11920126; DOI: 10.3389/fnhum.2025.1490530.
Abstract
Recent metacognitive research has extensively investigated metacognitive efficiency (i.e., the accuracy of metacognition). Given the functional importance of metacognition for adaptive behavioral control, it is important to explore the nature of prospective metacognitive efficiency; however, most research has focused on retrospective metacognition. To understand the nature of prospective metacognition, it is essential to identify the factors that influence its efficiency, yet research exploring such factors remains scarce. We focused on the relationship between the efficiency of prospective metacognition and the manner in which metacognition is expressed. Specifically, we explored whether explicit metacognition based on verbal confidence reports and implicit metacognition based on bets produce differences in efficiency. During a delayed match-to-sample task, participants were instructed either to report their belief about remembering a sound (explicit metacognition) or to bet on its recallability (implicit metacognition). The task was identical for all participants except for the pre-rating instructions. We found that the efficiency of prospective metacognition was enhanced by the betting instructions. Additionally, we showed that this difference in metacognitive efficiency may have been caused by a difference in pre-rating variability between the instructions. Our results suggest that the way a person evaluates their own internal states affects the efficiency of prospective metacognition. This study is the first to identify a factor that regulates the efficiency of prospective metacognition, thereby advancing our understanding of the mechanisms underlying metacognition. These findings highlight that framing, such as the wording of instructions, can potentially improve metacognitive efficiency.
Affiliation(s)
- Hidekazu Nagamura
- Graduate School of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Hiroshi Onishi
- Graduate School of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Kohta I Kobayasi
- Graduate School of Life and Medical Sciences, Doshisha University, Kyoto, Japan
- Shoko Yuki
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
4
Peters MAK. Introspective psychophysics for the study of subjective experience. Cereb Cortex 2025; 35:49-57. PMID: 39569467; DOI: 10.1093/cercor/bhae455.
Abstract
Studying subjective experience is hard. We believe that pain is not identical to nociception, nor pleasure a computational reward signal, nor fear the activation of "threat circuitry". Unfortunately, introspective self-reports offer our best bet for accessing subjective experience, but many still believe that introspection is "unreliable" and "unverifiable". But which of introspection's faults do we find most damning? Is it that introspection provides imperfect access to brain processes (e.g., perception, memory)? That subjective experience is not objectively verifiable? That it is hard to isolate from non-subjective processing capacity? Here, I argue that none of these prevents us from building a meaningful, impactful psychophysical research program that treats subjective experience as a valid empirical target by precisely characterizing relationships among environmental variables, brain processes and behavior, and self-reported phenomenology. Following recent similar calls by Peters (Towards characterizing the canonical computations generating phenomenal experience. 2022. Neurosci Biobehav Rev 142:104903), Kammerer and Frankish (What forms could introspective systems take? A research programme. 2023. J Conscious Stud 30:13-48), and Fleming (Metacognitive psychophysics in humans, animals, and AI. 2023. J Conscious Stud 30:113-128), "introspective psychophysics" thus treats introspection's apparent faults as features, not bugs, just as the noise and distortions linking environment to behavior inspired Fechner's psychophysics over 150 years ago. This next generation of psychophysics will establish a powerful tool for building and testing precise explanatory models of phenomenology across many dimensions: urgency, emotion, clarity, vividness, confidence, and more.
Affiliation(s)
- Megan A K Peters
- Department of Cognitive Sciences, University of California Irvine, Social & Behavioral Sciences Gateway Building, Irvine, CA 92697, United States
- Department of Logic and Philosophy of Science, University of California Irvine, Social & Behavioral Sciences Gateway Building, Irvine, CA 92697, United States
- Center for Theoretical Behavioral Sciences, University of California Irvine, Social & Behavioral Sciences Gateway Building, Irvine, CA 92697, United States
- Center for the Neurobiology of Learning and Memory, University of California Irvine, Qureshey Research Laboratory, Irvine, CA 92697, United States
- Brain, Mind, and Consciousness Program, Canadian Institute for Advanced Research, MaRS Centre, West Tower, 661 University Ave., Suite 505, Toronto, Ontario M5G 1M1, Canada
5
Le Denmat P, Verguts T, Desender K. A low-dimensional approximation of optimal confidence. PLoS Comput Biol 2024; 20:e1012273. PMID: 39047032; PMCID: PMC11299811; DOI: 10.1371/journal.pcbi.1012273.
Abstract
Human decision making is accompanied by a sense of confidence. According to Bayesian decision theory, confidence reflects the learned probability of making a correct response, given the available data (e.g., accumulated stimulus evidence and response time). Although optimal, independently learning these probabilities for all possible data combinations is computationally intractable. Here, we describe a novel model of confidence implementing a low-dimensional approximation of this optimal yet intractable solution. The model allows efficient estimation of confidence while accounting for idiosyncrasies, different kinds of biases, and deviations from the optimal probability correct. It dissociates confidence biases resulting from individuals' estimates of the reliability of the evidence (captured by parameter α) from confidence biases resulting from general, stimulus-independent under- and overconfidence (captured by parameter β). We provide empirical evidence that this model accurately fits both choice data (accuracy, response time) and trial-by-trial confidence ratings simultaneously. Finally, we test and empirically validate two novel predictions of the model, namely that (1) changes in confidence can be independent of performance and (2) selectively manipulating each parameter of the model leads to distinct patterns of confidence judgments. As a tractable and flexible account of the computation of confidence, our model offers a clear framework for interpreting and further resolving different forms of confidence biases.
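A rough sketch, not the authors' implementation, of how the two parameters described above could enter a low-dimensional mapping from trial data to confidence: α scales the subjective reliability of the accumulated evidence, and β adds a stimulus-independent over/underconfidence shift. The logistic form and the evidence/response-time inputs are assumptions chosen only to make the roles of α and β concrete.

```python
import numpy as np

def confidence(evidence, rt, alpha=1.0, beta=0.0, slope=1.5):
    """Illustrative low-dimensional confidence mapping:
    a probability-correct-like rating from scaled evidence per unit time,
    shifted by a general over/underconfidence term."""
    reliability = alpha * np.abs(evidence) / np.sqrt(rt)          # alpha: subjective reliability of the evidence
    return 1.0 / (1.0 + np.exp(-(slope * reliability + beta)))    # beta: stimulus-independent bias

# Same evidence and response time, different parameter settings
for a, b in [(1.0, 0.0), (0.5, 0.0), (1.0, 1.0)]:
    print(f"alpha={a}, beta={b}: confidence={confidence(1.2, 0.8, a, b):.3f}")
```

Changing α alters how confidence tracks the evidence (and hence performance), while changing β shifts confidence without affecting the choice itself, mirroring the dissociation described in the abstract.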
Affiliation(s)
- Tom Verguts
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
6
Shekhar M, Rahnev D. How do humans give confidence? A comprehensive comparison of process models of perceptual metacognition. J Exp Psychol Gen 2024; 153:656-688. PMID: 38095983; PMCID: PMC10922729; DOI: 10.1037/xge0001524.
Abstract
Humans have the metacognitive ability to assess the accuracy of their decisions via confidence judgments. Several computational models of confidence have been developed, but not enough has been done to compare these models, making it difficult to adjudicate between them. Here, we compare 14 popular models of confidence that make various assumptions, such as confidence being derived from postdecisional evidence, from positive (decision-congruent) evidence, from posterior probability computations, or from a separate decision-making system for metacognitive judgments. We fit all models to three large experiments in which subjects completed a basic perceptual task with confidence ratings. In Experiments 1 and 2, the best-fitting model was the lognormal meta noise (LogN) model, which postulates that confidence is selectively corrupted by signal-dependent noise. However, in Experiment 3, the positive evidence (PE) model provided the best fits. We then evaluated a new model combining the two consistently best-performing models: LogN and the weighted evidence and visibility (WEV) model. The resulting model, which we call logWEV, outperformed its individual counterparts and the PE model across all data sets, offering a better, more generalizable explanation for these data. Parameter and model recovery analyses showed mostly good recoverability, but with important exceptions carrying implications for our ability to discriminate between models. Finally, we evaluated each model's ability to explain different patterns in the data, which led to additional insight into their performances. These results comprehensively characterize the relative adequacy of current confidence models for fitting data from basic perceptual tasks and highlight the most plausible mechanisms underlying confidence generation.
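A loose sketch (assumptions throughout, not the fitted models) of the two ideas the comparison singles out: metacognitive evidence corrupted by multiplicative, signal-dependent (lognormal) noise, and a WEV-style mixture in which confidence also reflects overall stimulus visibility. The mixing rule and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def logwev_confidence(evidence, visibility, w=0.3, sigma_meta=0.4):
    """Illustrative 'logWEV'-style confidence: mix decision evidence with stimulus
    visibility, then corrupt it multiplicatively with lognormal metacognitive noise."""
    mixed = (1 - w) * np.abs(evidence) + w * visibility                       # WEV-style weighting
    noise = rng.lognormal(mean=0.0, sigma=sigma_meta, size=np.shape(mixed))   # signal-dependent (multiplicative) noise
    return mixed * noise

# Matched |evidence| but higher visibility -> higher confidence on average
low_vis = logwev_confidence(np.full(5000, 1.0), visibility=0.5)
high_vis = logwev_confidence(np.full(5000, 1.0), visibility=1.5)
print(round(low_vis.mean(), 3), round(high_vis.mean(), 3))
```

Because the noise is multiplicative, its impact grows with the strength of the metacognitive signal, which is the sense in which it is "signal-dependent".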
Affiliation(s)
- Medha Shekhar
- School of Psychology, Georgia Institute of Technology
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology
7
Shekhar M, Rahnev D. Human-like dissociations between confidence and accuracy in convolutional neural networks. bioRxiv [Preprint] 2024:2024.02.01.578187. PMID: 38352596; PMCID: PMC10862905; DOI: 10.1101/2024.02.01.578187.
Abstract
Prior research has shown that manipulating stimulus energy by changing both stimulus contrast and variability results in confidence-accuracy dissociations in humans. Specifically, even when performance is matched, higher stimulus energy leads to higher confidence. The most common explanation for this effect is the positive evidence heuristic, where confidence neglects evidence that disconfirms the choice. However, an alternative explanation is the signal-and-variance-increase hypothesis, according to which these dissociations arise from low-level changes in the separation and variance of perceptual representations. Because artificial neural networks lack built-in confidence heuristics, they can serve as a test for the necessity of confidence heuristics in explaining confidence-accuracy dissociations. Therefore, we tested whether confidence-accuracy dissociations induced by stimulus energy manipulations emerge naturally in convolutional neural networks (CNNs). We found that, across three different energy manipulations, CNNs produced confidence-accuracy dissociations similar to those found in humans. This effect was present for a range of CNN architectures, from shallow 4-layer networks to very deep ones, such as VGG-19 and ResNet-50 pretrained on ImageNet. Further, we traced the reason for the confidence-accuracy dissociations in all CNNs back to the same signal-and-variance increase that has been proposed for humans: higher stimulus energy increased the separation and variance of the CNNs' internal representations, leading to higher confidence even for matched accuracy. These findings cast doubt on the necessity of the positive evidence heuristic to explain human confidence and establish CNNs as promising models for adjudicating between low-level, stimulus-driven and high-level, cognitive explanations of human behavior.
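A small signal-detection simulation (not the CNN analysis itself) of the signal-and-variance-increase account: scaling both the separation and the variance of the internal distributions leaves accuracy unchanged but raises confidence when confidence is read out as distance from a fixed criterion. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(energy_scale, n=100_000, d_prime=1.5):
    """Internal response: mean and SD both scale with stimulus energy, so d' (and accuracy) is fixed."""
    mu, sigma = energy_scale * d_prime / 2, energy_scale * 1.0
    signal = rng.choice([-1, 1], size=n)
    x = signal * mu + rng.normal(0, sigma, n)   # internal representation
    choice = np.sign(x)
    accuracy = np.mean(choice == signal)
    confidence = np.abs(x)                      # confidence as distance from the decision criterion at 0
    return accuracy, confidence.mean()

for scale in (1.0, 2.0):
    acc, conf = simulate(scale)
    print(f"energy x{scale:.0f}: accuracy={acc:.3f}, mean confidence={conf:.3f}")
```

Both conditions yield the same accuracy, yet mean confidence roughly doubles in the high-energy condition, reproducing the dissociation without any confidence heuristic.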
Affiliation(s)
- Medha Shekhar
- School of Psychology, Georgia Institute of Technology, Atlanta, GA
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA
8
Katyal S, Fleming SM. The future of metacognition research: Balancing construct breadth with measurement rigor. Cortex 2024; 171:223-234. PMID: 38041921; PMCID: PMC11139654; DOI: 10.1016/j.cortex.2023.11.002.
Abstract
Foundational work in the psychology of metacognition identified a distinction between metacognitive knowledge (stable beliefs about one's capacities) and metacognitive experiences (local evaluations of performance). More recently, the field has focused on developing tasks and metrics that seek to identify metacognitive capacities from momentary estimates of confidence in performance, and on providing precise computational accounts of metacognitive failure. However, this notable progress in formalising models of metacognitive judgments may come at the cost of ignoring broader elements of the psychology of metacognition, such as how stable meta-knowledge is formed, how social cognition and metacognition interact, and how we evaluate affective states that do not have an obvious ground truth. We propose that construct breadth in metacognition research can be restored while maintaining rigour in measurement, and we highlight promising avenues for expanding the scope of metacognition research. Such a research programme is well placed to recapture qualitative features of metacognitive knowledge and experience while maintaining the psychophysical rigour that characterises modern research on confidence and performance monitoring.
Affiliation(s)
- Sucharit Katyal
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK.
- Stephen M Fleming
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK; Department of Experimental Psychology, University College London, London, UK.
9
Sakamoto Y, Miyoshi K. A confidence framing effect: Flexible use of evidence in metacognitive monitoring. Conscious Cogn 2024; 118:103636. PMID: 38244396; DOI: 10.1016/j.concog.2024.103636.
Abstract
Human behavior is flexibly regulated by the specific goals of cognitive tasks. One notable example is goal-directed modulation of metacognitive behavior, where logically equivalent decision-making problems can yield different patterns of introspective confidence depending on the frame in which they are presented. While this observation highlights the important heuristic nature of metacognitive monitoring, the computational mechanisms underlying this phenomenon remain elusive. We confirmed the confidence framing effect in two-alternative dot-number discrimination and in previously published preference-choice data, demonstrating distinct confidence patterns between "choose more" and "choose less" frames. Formal model comparisons revealed a simple confidence heuristic behind this phenomenon, which assigns greater weight to chosen than to unchosen stimulus evidence. This computation appears to be based on internal evidence constituted under specific task demands rather than on physical stimulus intensity itself, a view justified in terms of ecological rationality. These results shed light on the adaptive nature of human decision-making and metacognitive monitoring.
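A toy illustration (assumed functional form and weights, not the fitted model) of the chosen-weighted heuristic described above: confidence weights evidence for the chosen stimulus more heavily than evidence for the unchosen one, so logically equivalent "choose more" and "choose less" frames yield different confidence for the same stimulus pair.

```python
def heuristic_confidence(e_chosen, e_unchosen, w=0.75):
    """Confidence heuristic: overweight chosen-stimulus evidence (w > 0.5)."""
    return w * e_chosen - (1 - w) * e_unchosen

# Same stimulus pair (internal evidence for, say, 60 vs. 40 dots), two task frames
e_left, e_right = 60.0, 40.0
conf_choose_more = heuristic_confidence(max(e_left, e_right), min(e_left, e_right))
conf_choose_less = heuristic_confidence(min(e_left, e_right), max(e_left, e_right))
print(f"'choose more' frame: confidence = {conf_choose_more:.1f}")
print(f"'choose less' frame: confidence = {conf_choose_less:.1f}")
```

With w > 0.5 the "choose more" frame yields higher confidence for the identical stimulus pair, because the chosen (larger) evidence dominates the readout.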
Affiliation(s)
- Yosuke Sakamoto
- Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan
- Kiyofumi Miyoshi
- Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto 606-8501, Japan.
10
Abstract
Determining the psychological, computational, and neural bases of confidence and uncertainty holds promise for understanding foundational aspects of human metacognition. While a neuroscience of confidence has focused on the mechanisms underpinning subpersonal phenomena such as representations of uncertainty in the visual or motor system, metacognition research has been concerned with personal-level beliefs and knowledge about self-performance. I provide a road map for bridging this divide by focusing on a particular class of confidence computation: propositional confidence in one's own (hypothetical) decisions or actions. Propositional confidence is informed by the observer's models of the world and of their cognitive system, which may be more or less accurate, thus explaining why metacognitive judgments are inferential and sometimes diverge from task performance. Disparate findings on the neural basis of uncertainty and performance monitoring are integrated into a common framework, and a new understanding of the locus of action of metacognitive interventions is developed.
Affiliation(s)
- Stephen M Fleming
- Department of Experimental Psychology, Wellcome Centre for Human Neuroimaging, and Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
11
Olawole-Scott H, Yon D. Expectations about precision bias metacognition and awareness. J Exp Psychol Gen 2023; 152:2177-2189. PMID: 36972098; PMCID: PMC10399087; DOI: 10.1037/xge0001371.
Abstract
Bayesian models of the mind suggest that we estimate the reliability or "precision" of incoming sensory signals to guide perceptual inference and to construct feelings of confidence or uncertainty about what we are perceiving. However, accurately estimating precision is likely to be challenging for bounded systems like the brain. One way observers could overcome this challenge is to form expectations about the precision of their perceptions and use these to guide metacognition and awareness. Here we test this possibility. Participants made perceptual decisions about visual motion stimuli while providing confidence ratings (Experiments 1 and 2) or ratings of subjective visibility (Experiment 3). In each experiment, participants acquired probabilistic expectations about the likely strength of upcoming signals. We found that these expectations about precision altered metacognition and awareness, with participants feeling more confident and stimuli appearing more vivid when stronger sensory signals were expected, without concomitant changes in objective perceptual performance. Computational modeling revealed that this effect could be well explained by a predictive learning model that infers the precision (strength) of current signals as a weighted combination of incoming evidence and top-down expectation. These results support an influential but untested tenet of Bayesian models of cognition, suggesting that agents do not only "read out" the reliability of information arriving at their senses but also take into account prior knowledge about how reliable or "precise" different sources of information are likely to be. This reveals that expectations about precision influence how the sensory world appears and how much we trust our senses.
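A bare-bones sketch (the weights, learning rule, and function names are assumptions, not the authors' fitted model) of the kind of predictive scheme the modeling points to: the inferred signal strength is a weighted combination of the observed strength and a learned expectation, and confidence would then track the inferred rather than the objective strength.

```python
def infer_strength(observed, expected, weight_prior=0.4):
    """Inferred signal strength: weighted combination of incoming evidence and top-down expectation."""
    return weight_prior * expected + (1 - weight_prior) * observed

def update_expectation(expected, observed, learning_rate=0.2):
    """Simple delta-rule update of the expected signal strength across trials."""
    return expected + learning_rate * (observed - expected)

expected = 0.5
for observed in [0.8, 0.8, 0.8, 0.3]:          # a run of strong signals, then a weak one
    inferred = infer_strength(observed, expected)
    print(f"observed={observed:.1f} expected={expected:.2f} -> inferred={inferred:.2f}")
    expected = update_expectation(expected, observed)
```

After a run of strong signals the expectation is high, so a weak signal is inferred as stronger than it objectively is, which is the pattern of inflated confidence and vividness reported in the abstract.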
Affiliation(s)
- Daniel Yon
- Department of Psychological Sciences, Birkbeck, University of London
12
Dayan P. Metacognitive Information Theory. Open Mind (Camb) 2023; 7:392-411. PMID: 37637303; PMCID: PMC10449404; DOI: 10.1162/opmi_a_00091.
Abstract
The capacity that subjects have to rate confidence in their choices is a form of metacognition, and can be assessed according to bias, sensitivity and efficiency. Rich networks of domain-specific and domain-general regions of the brain are involved in the rating, and are associated with its quality and its use for regulating the processes of thinking and acting. Sensitivity and efficiency are often measured by quantities called meta-d' and the M-ratio that are based on reverse engineering the potential accuracy of the original, primary, choice that is implied by the quality of the confidence judgements. Here, we advocate a straightforward measure of sensitivity, called meta-𝓘, which assesses the mutual information between the accuracy of the subject's choices and the confidence reports, and two normalized versions of this measure that quantify efficiency in different regimes. Unlike most other measures, meta-𝓘-based quantities increase with the number of correctly assessed bins with which confidence is reported. We illustrate meta-𝓘 on data from a perceptual decision-making task, and via a simple form of simulated second-order metacognitive observer.
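A minimal sketch of the core quantity described above: the mutual information between trial-by-trial accuracy (correct/incorrect) and binned confidence reports, estimated from a joint count table. The normalizations that turn this into the efficiency measures discussed in the paper are not shown, and the toy data are purely illustrative.

```python
import numpy as np

def meta_I(correct, confidence_bins):
    """Mutual information (bits) between accuracy (0/1) and discrete confidence bins."""
    correct = np.asarray(correct, dtype=int)
    confidence_bins = np.asarray(confidence_bins, dtype=int)
    joint = np.zeros((2, confidence_bins.max() + 1))
    for a, c in zip(correct, confidence_bins):
        joint[a, c] += 1
    joint /= joint.sum()                          # joint distribution p(accuracy, confidence)
    p_a = joint.sum(axis=1, keepdims=True)        # marginal over accuracy
    p_c = joint.sum(axis=0, keepdims=True)        # marginal over confidence
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_a * p_c))
    return np.nansum(terms)                       # empty cells contribute zero

# Toy data: high confidence (bin 1) is more common on correct trials than on errors
correct =        [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
confidence_bin = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
print(f"meta-I = {meta_I(correct, confidence_bin):.3f} bits")
```

Because it is a mutual information, the estimate can only grow (never shrink) as confidence is reported with more well-used bins, which is the property contrasted with meta-d' in the abstract.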
Affiliation(s)
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- University of Tübingen, Tübingen, Germany