1. DeYoung CG, Hilger K, Hanson JL, Abend R, Allen TA, Beaty RE, Blain SD, Chavez RS, Engel SA, Feilong M, Fornito A, Genç E, Goghari V, Grazioplene RG, Homan P, Joyner K, Kaczkurkin AN, Latzman RD, Martin EA, Nikolaidis A, Pickering AD, Safron A, Sassenberg TA, Servaas MN, Smillie LD, Spreng RN, Viding E, Wacker J. Beyond Increasing Sample Sizes: Optimizing Effect Sizes in Neuroimaging Research on Individual Differences. J Cogn Neurosci 2025; 37:1023-1034. [PMID: 39792657] [DOI: 10.1162/jocn_a_02297]
Abstract
Linking neurobiology to relatively stable individual differences in cognition, emotion, motivation, and behavior can require large sample sizes to yield replicable results. Given the nature of between-person research, sample sizes at least in the hundreds are likely to be necessary in most neuroimaging studies of individual differences, regardless of whether they are investigating the whole brain or more focal hypotheses. However, the appropriate sample size depends on the expected effect size. Therefore, we propose four strategies to increase effect sizes in neuroimaging research, which may help to enable the detection of replicable between-person effects in samples in the hundreds rather than the thousands: (1) theoretical matching between neuroimaging tasks and behavioral constructs of interest; (2) increasing the reliability of both neural and psychological measurement; (3) individualization of measures for each participant; and (4) using multivariate approaches with cross-validation instead of univariate approaches. We discuss challenges associated with these methods and highlight strategies for improvements that will help the field to move toward a more robust and accessible neuroscience of individual differences.
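The fourth strategy, multivariate prediction with cross-validation, can be illustrated with a short simulation. The sketch below is not from the article; it uses simulated data and hypothetical variable names to contrast the in-sample correlation of the best single feature with the out-of-sample accuracy of a cross-validated ridge model.

```python
# Illustrative sketch (not from the article): univariate vs. cross-validated
# multivariate prediction of a trait from simulated brain features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_features = 300, 200           # hypothetical participants x features
w = rng.normal(0, 0.1, n_features)           # weak, distributed true signal
X = rng.normal(size=(n_subjects, n_features))
y = X @ w + rng.normal(0, 1.0, n_subjects)   # trait = distributed signal + noise

# Univariate approach: best single feature, evaluated in-sample
univariate_r = max(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features))

# Multivariate approach with cross-validation: out-of-sample prediction
y_hat = cross_val_predict(Ridge(alpha=10.0), X, y, cv=5)
multivariate_r = np.corrcoef(y_hat, y)[0, 1]

print(f"best univariate |r| (in-sample): {univariate_r:.2f}")
print(f"multivariate r (cross-validated): {multivariate_r:.2f}")
```

In this simulation, where the signal is weak but distributed across many features, the cross-validated multivariate predictor recovers more of the trait variance than any single feature does.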
Affiliation(s)
- Adam Safron
- Johns Hopkins University School of Medicine, Baltimore, MD
2. Katahira K, Oba T, Toyama A. Does the reliability of computational models truly improve with hierarchical modeling? Some recommendations and considerations for the assessment of model parameter reliability. Psychon Bull Rev 2024; 31:2465-2486. [PMID: 38717680] [PMCID: PMC11680638] [DOI: 10.3758/s13423-024-02490-8]
Abstract
Computational modeling of behavior is increasingly being adopted as a standard methodology in psychology, cognitive neuroscience, and computational psychiatry. This approach involves estimating parameters in a computational (or cognitive) model that represents the computational processes of the underlying behavior. In this approach, the reliability of the parameter estimates is an important issue. The use of hierarchical (Bayesian) approaches, which place a prior on each model parameter of the individual participants, is thought to improve the reliability of the parameters. However, the characteristics of reliability in parameter estimates, especially when individual-level priors are assumed, as in hierarchical models, have not yet been fully discussed. Furthermore, the suitability of different reliability measures for assessing parameter reliability is not thoroughly understood. In this study, we conduct a systematic examination of these issues through theoretical analysis and numerical simulations, focusing specifically on reinforcement learning models. We note that the heterogeneity in the estimation precision of individual parameters, particularly with priors, can skew reliability measures toward individuals with higher precision. We further note that there are two factors that reduce reliability, namely estimation error and intersession variation in the true parameters, and we discuss how to evaluate these factors separately. Based on the considerations of this study, we present several recommendations and cautions for assessing the reliability of the model parameters.
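As a loose illustration of the two factors discussed above, the following simulation (not the authors' code; all values arbitrary) separates intersession variation in the true parameters from heterogeneous estimation error and shows how each lowers a Pearson test-retest correlation.

```python
# Illustrative simulation (not the authors' code): two factors that lower
# test-retest reliability of model parameters, intersession variation in the
# true parameters and heterogeneous estimation error.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                        # participants
trait = rng.normal(0.5, 0.15, n)               # stable component of a learning rate
sess1_true = trait + rng.normal(0, 0.05, n)    # intersession variation
sess2_true = trait + rng.normal(0, 0.05, n)

# Heterogeneous estimation precision: some participants are estimated noisily
est_sd = rng.uniform(0.02, 0.20, n)
sess1_hat = sess1_true + rng.normal(0, est_sd)
sess2_hat = sess2_true + rng.normal(0, est_sd)

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print("reliability of true parameters (intersession variation only):",
      round(r(sess1_true, sess2_true), 2))
print("reliability of estimates (plus estimation error):",
      round(r(sess1_hat, sess2_hat), 2))
```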
Affiliation(s)
- Kentaro Katahira
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Central 6, 1-1-1 Higashi, Tsukuba, 305-8566, Ibaraki, Japan.
- Takeyuki Oba
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Central 6, 1-1-1 Higashi, Tsukuba, 305-8566, Ibaraki, Japan
- Department of Cognitive and Psychological Sciences, Graduate School of Informatics, Nagoya University, Nagoya, Japan
- Asako Toyama
- Japan Society for the Promotion of Science, Tokyo, Japan
- Graduate School of the Humanities, Senshu University, Kawasaki, Japan
- Graduate School of Social Data Science, Hitotsubashi University, Tokyo, Japan
3. Markovitch B, Evans NJ, Birk MV. The value of error-correcting responses for cognitive assessment in games. Sci Rep 2024; 14:20657. [PMID: 39232080] [PMCID: PMC11374785] [DOI: 10.1038/s41598-024-71762-z]
Abstract
Traditional conflict-based cognitive assessment tools are highly behaviorally restrictive, which prevents them from capturing the dynamic nature of human cognition, such as the tendency to make error-correcting responses. The cognitive game Tunnel Runner measures interference control, response inhibition, and response-rule switching in a less restrictive manner than traditional cognitive assessment tools by giving players movement control after an initial response and encouraging error-correcting responses. Nevertheless, error-correcting responses remain unused due to a limited understanding of what they measure and how to use them. To facilitate the use of error-correcting responses to measure and understand human cognition, we developed theoretically-grounded measures of error-correcting responses in Tunnel Runner and assessed whether they reflected the same cognitive functions measured via initial responses. Furthermore, we evaluated the measurement potential of error-correcting responses. We found that initial and error-correcting responses similarly reflected players' response inhibition and interference control, but not their response-rule switching. Furthermore, combining the two response types increased the reliability of interference control and response inhibition measurements. Lastly, error-correcting responses showed the potential to measure response inhibition on their own. Our results pave the way toward understanding and using post-decision change of mind data for cognitive measurement and other research and application contexts.
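The reliability gain from combining the two response types follows the usual logic of aggregating parallel measures. The sketch below is a generic psychometric illustration with simulated data, not the study's analysis: two noisy measures of the same latent ability are averaged, and the observed reliability of the average is compared with the Spearman-Brown prediction.

```python
# Generic psychometric illustration (simulated data, not the study's analysis):
# averaging two parallel measures of one latent ability raises reliability,
# roughly as predicted by the Spearman-Brown formula 2r / (1 + r).
import numpy as np

rng = np.random.default_rng(2)
n = 500
ability = rng.normal(size=n)                   # latent inhibition/control ability

def measure():
    """One noisy behavioral measure of the latent ability."""
    return ability + rng.normal(0, 1.0, n)

initial_a, corrective_a = measure(), measure() # session A: two response types
initial_b, corrective_b = measure(), measure() # session B: replication

r = lambda x, y: np.corrcoef(x, y)[0, 1]
r_single = r(initial_a, initial_b)             # reliability of one response type
r_combined = r((initial_a + corrective_a) / 2,
               (initial_b + corrective_b) / 2) # reliability of the combined score

print("single-measure reliability:", round(r_single, 2))
print("combined-score reliability:", round(r_combined, 2))
print("Spearman-Brown prediction:", round(2 * r_single / (1 + r_single), 2))
```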
Affiliation(s)
- Benny Markovitch
- Human Technology Interaction, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands.
- Nathan J Evans
- Department of Psychology, Ludwig Maximilian University of Munich, 80799, Munich, Germany
- School of Psychology, University of Queensland, St Lucia, 4067, Australia
- Max V Birk
- Human Technology Interaction, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
4. Haynes JM, Haines N, Sullivan-Toole H, Olino TM. Test-retest reliability of the play-or-pass version of the Iowa Gambling Task. Cogn Affect Behav Neurosci 2024; 24:740-754. [PMID: 38849641] [PMCID: PMC11636993] [DOI: 10.3758/s13415-024-01197-6]
Abstract
The Iowa Gambling Task (IGT) is used to assess decision-making in clinical populations. The original IGT does not disambiguate reward and punishment learning; however, an adaptation of the task, the "play-or-pass" IGT, was developed to better distinguish between reward and punishment learning. We evaluated the test-retest reliability of measures of reward and punishment learning from the play-or-pass IGT and examined associations with self-reported measures of reward/punishment sensitivity and internalizing symptoms. Participants completed the task across two sessions, and we calculated mean-level differences and rank-order stability of behavioral measures across the two sessions using traditional scoring, involving session-wide choice proportions, and computational modeling, involving estimates of different aspects of trial-level learning. Measures using both approaches were reliable; however, computational modeling provided more insights regarding between-session changes in performance, and how performance related to self-reported measures of reward/punishment sensitivity and internalizing symptoms. Our results show promise in using the play-or-pass IGT to assess decision-making; however, further work is still necessary to validate the play-or-pass IGT.
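For the traditional scoring route, the two reliability summaries mentioned above are straightforward to compute. The sketch below uses hypothetical play proportions from two sessions (simulated, not the study's data) to compute a mean-level difference and a rank-order stability coefficient.

```python
# Illustrative sketch (hypothetical data, not the study's): mean-level change
# and rank-order stability of a session-wide play proportion across sessions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 150
session1 = np.clip(rng.normal(0.6, 0.15, n), 0, 1)    # proportion of "play" choices
session2 = np.clip(session1 + rng.normal(0, 0.10, n), 0, 1)

t, p = stats.ttest_rel(session1, session2)              # mean-level difference
stability_r = np.corrcoef(session1, session2)[0, 1]     # rank-order stability

print(f"mean-level change: t = {t:.2f}, p = {p:.3f}")
print(f"rank-order stability: r = {stability_r:.2f}")
```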
Affiliation(s)
- Jeremy M Haynes
- Department of Psychology and Neuroscience, Temple University, 1701 N. 13th Street, Philadelphia, PA, 19122, USA.
- Holly Sullivan-Toole
- Department of Psychology and Neuroscience, Temple University, 1701 N. 13th Street, Philadelphia, PA, 19122, USA
- Thomas M Olino
- Department of Psychology and Neuroscience, Temple University, 1701 N. 13th Street, Philadelphia, PA, 19122, USA
5. Pedraza F, Farkas BC, Vékony T, Haesebaert F, Phelipon R, Mihalecz I, Janacsek K, Anders R, Tillmann B, Plancher G, Németh D. Evidence for a competitive relationship between executive functions and statistical learning. NPJ Sci Learn 2024; 9:30. [PMID: 38609413] [PMCID: PMC11014972] [DOI: 10.1038/s41539-024-00243-9]
Abstract
The ability of the brain to extract patterns from the environment and predict future events, known as statistical learning, has been proposed to interact in a competitive manner with prefrontal lobe-related networks and their characteristic cognitive or executive functions. However, it remains unclear whether these cognitive functions also possess a competitive relationship with implicit statistical learning across individuals and at the level of latent executive function components. To address this question, we investigated, in two independent experiments (Study 1: N = 186; Study 2: N = 157), the relationship between implicit statistical learning, measured by the Alternating Serial Reaction Time task, and executive functions, measured by multiple neuropsychological tests. In both studies, a modest but consistent negative correlation between implicit statistical learning and most executive function measures was observed. Factor analysis further revealed that a factor representing verbal fluency and complex working memory seemed to drive these negative correlations. Thus, the antagonistic relationship between implicit statistical learning and executive functions might specifically be mediated by the updating component of executive functions and/or long-term memory access.
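A bare-bones version of the correlational analysis described above can be sketched with simulated data (not the study's): an executive-function composite is built from several hypothetical test scores and correlated with a statistical-learning score that carries a modest negative association by construction.

```python
# Illustrative sketch (simulated data, not the study's): correlating an
# ASRT-style statistical-learning score with an executive-function composite.
import numpy as np

rng = np.random.default_rng(4)
n = 186
ef_latent = rng.normal(size=n)                        # latent executive function
ef_tests = np.column_stack([ef_latent + rng.normal(0, 1.0, n) for _ in range(4)])
ef_composite = ef_tests.mean(axis=1)                  # simple unit-weighted composite
sl_score = -0.25 * ef_latent + rng.normal(0, 1.0, n)  # built-in modest negative link

print("r(statistical learning, EF composite):",
      round(np.corrcoef(sl_score, ef_composite)[0, 1], 2))
```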
Affiliation(s)
- Felipe Pedraza
- Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Bron, France
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France
- Bence C Farkas
- Institut du Psychotraumatisme de l'Enfant et de l'Adolescent, Conseil Départemental Yvelines et Hauts-de-Seine et Centre Hospitalier des Versailles, 78000, Versailles, France
- UVSQ, Inserm, Centre de Recherche en Epidémiologie et Santé des Populations, Université Paris-Saclay, 78000, Versailles, France
- LNC2, Département d'études Cognitives, École Normale Supérieure, INSERM, PSL Research University, 75005, Paris, France
- Teodóra Vékony
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France.
- Department of Education and Psychology, Faculty of Social Sciences, University of Atlántico Medio, Las Palmas de Gran Canaria, Spain.
- Frederic Haesebaert
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France
- Romane Phelipon
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France
- Imola Mihalecz
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France
- Karolina Janacsek
- Centre for Thinking and Learning, Institute for Lifecourse Development, School of Human Sciences, Faculty of Education, Health and Human Sciences, University of Greenwich, Old Royal Naval College, Park Row, 150 Dreadnought, London, SE10 9LS, UK
- Institute of Psychology, ELTE Eötvös Loránd University, Kazinczy u. 23-27, H-1075, Budapest, Hungary
- Royce Anders
- EPSYLON Laboratory, Department of Psychology, University Paul Valéry Montpellier 3, F34000, Montpellier, France
- Barbara Tillmann
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France
- Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université de Bourgogne, Dijon, France
- Gaën Plancher
- Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Bron, France
- Institut Universitaire de France (IUF), Paris, France
- Dezső Németh
- Centre de Recherche en Neurosciences de Lyon, INSERM, CNRS, Université Claude Bernard Lyon 1, CRNL U1028 UMR5292, 95 Boulevard Pinel, F-69500, Bron, France.
- Department of Education and Psychology, Faculty of Social Sciences, University of Atlántico Medio, Las Palmas de Gran Canaria, Spain.
- BML-NAP Research Group, ELTE Eötvös Loránd University & HUN-REN Research Centre for Natural Sciences, Damjanich utca 41, H-1072, Budapest, Hungary.
6. Colas JT, O’Doherty JP, Grafton ST. Active reinforcement learning versus action bias and hysteresis: control with a mixture of experts and nonexperts. PLoS Comput Biol 2024; 20:e1011950. [PMID: 38552190] [PMCID: PMC10980507] [DOI: 10.1371/journal.pcbi.1011950]
Abstract
Active reinforcement learning enables dynamic prediction and control, where one should not only maximize rewards but also minimize costs such as those of inference, decisions, actions, and time. For an embodied agent such as a human, decisions are also shaped by physical aspects of actions. Beyond the effects of reward outcomes on learning processes, to what extent can modeling of behavior in a reinforcement-learning task be complicated by other sources of variance in sequential action choices? What about the effects of action bias (for actions per se) and of action hysteresis determined by the history of previously chosen actions? The present study addressed these questions with incremental assembly of models for the sequential choice data from a task whose hierarchical structure added complexity to learning. With systematic comparison and falsification of computational models, human choices were tested for signatures of parallel modules representing not only an enhanced form of generalized reinforcement learning but also action bias and hysteresis. We found evidence for substantial differences in bias and hysteresis across participants, even comparable in magnitude to the individual differences in learning. Individuals who did not learn well revealed the greatest biases, but those who did learn accurately were also significantly biased. The direction of hysteresis varied among individuals as repetition or, more commonly, alternation biases persisting from multiple previous actions. Considering that these actions were button presses with trivial motor demands, the idiosyncratic forces biasing sequences of action choices were robust enough to suggest ubiquity across individuals and across tasks requiring various actions. In light of how bias and hysteresis function as a heuristic for efficient control that adapts to uncertainty or low motivation by minimizing the cost of effort, these phenomena broaden the consilient theory of a mixture of experts to encompass a mixture of expert and nonexpert controllers of behavior.
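A minimal sketch of the kind of model family examined here, under assumed forms and arbitrary parameter values rather than the authors' exact specification: a softmax choice rule that mixes a Q-learning value signal with a constant per-action bias and a hysteresis (perseveration) term tied to the previous choice.

```python
# Minimal sketch (assumed forms, not the authors' model): softmax policy
# combining Q-learning values with an action bias and a hysteresis term.
import numpy as np

def softmax_policy(q, bias, last_choice, beta=3.0, kappa=0.5):
    """Choice probabilities over actions given values q, a per-action bias,
    and a hysteresis bonus kappa for the previously chosen action."""
    logits = beta * q + bias
    if last_choice is not None:
        logits[last_choice] += kappa          # positive kappa = repetition bias
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(5)
q = np.zeros(2)                               # action values
bias = np.array([0.3, 0.0])                   # fixed preference for action 0
alpha, last = 0.2, None
for t in range(100):
    p = softmax_policy(q, bias, last)
    a = rng.choice(2, p=p)
    reward = rng.random() < (0.8 if a == 0 else 0.2)
    q[a] += alpha * (reward - q[a])           # Q-learning update
    last = a
print("final action values:", np.round(q, 2))
```

A negative kappa would instead produce the alternation bias that the abstract describes as the more common direction of hysteresis.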
Affiliation(s)
- Jaron T. Colas
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, California, United States of America
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, California, United States of America
- Computation and Neural Systems Program, California Institute of Technology, Pasadena, California, United States of America
- John P. O’Doherty
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, California, United States of America
- Computation and Neural Systems Program, California Institute of Technology, Pasadena, California, United States of America
- Scott T. Grafton
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, California, United States of America
7. Potthoff RD, Schagen SB, Agelink van Rentergem JA. Process models of verbal memory in cancer survivors: Bayesian process modeling approach to variation in test scores. J Clin Exp Neuropsychol 2023; 45:705-714. [PMID: 38324475] [DOI: 10.1080/13803395.2024.2313256]
Abstract
INTRODUCTION: Verbal memory is a complex and fundamental aspect of human cognition. However, traditional sum-score analyses of verbal learning tests oversimplify underlying verbal memory processes. We propose using process models to subdivide memory into multiple processes, which helps in localizing the most affected processes in impaired verbal memory. Additionally, the model can be used to address score and process variability. This study aims to investigate the effects of cancer and its treatment on verbal memory, as well as provide a demonstration of how process models can be used to investigate the uncertainty in neuropsychological test scores. METHOD: We present an investigation of memory process scores in non-CNS cancer survivors (n = 184) and no-cancer controls (n = 204). The participants completed the Amsterdam Cognition Scan (ACS), in which classical neuropsychological tests are digitally recreated for online at-home administration. We analyzed data from the ACS equivalent of a Verbal Learning Test using both traditional sum scores and a Bayesian process model. RESULTS: Analysis of the sum score indicated that patients scored lower than controls on immediate recall but found no difference for delayed recall. The process model analysis indicated a small difference between patients and controls in immediate retrieval from both the partially learned and learned states, with no differences in learning or delayed retrieval processes. Individual-level analysis shows considerable uncertainty for sum scores. Sum scores were more certain than single trials. Retrieval parameters also showed less uncertainty than learning parameters. CONCLUSION: The Bayesian process model increased the informativity of Verbal Learning Test data by showing the uncertainty of the traditional sum score measurements as well as how the underlying processes differed between populations. Additionally, the model grants insight into underlying memory processes for individuals and how these processes vary within and between them.
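A toy version of such a process model, with an assumed structure and arbitrary parameter values rather than the authors' specification: items move between unlearned, partially learned, and learned states across study trials, and immediate recall depends on the current state.

```python
# Toy process model of a verbal learning test (assumed structure, not the
# authors' model): items transition from unlearned to partially learned to
# learned states, and retrieval probability at recall depends on the state.
import numpy as np

rng = np.random.default_rng(6)
n_items, n_trials = 15, 5
p_learn, p_consolidate = 0.35, 0.30     # transition probabilities per trial
r_partial, r_learned = 0.45, 0.90       # retrieval probability by state

state = np.zeros(n_items, dtype=int)    # 0 = unlearned, 1 = partial, 2 = learned
for trial in range(n_trials):
    # state transitions driven by study (computed before updating, so an item
    # advances at most one state per trial)
    move1 = (state == 0) & (rng.random(n_items) < p_learn)
    move2 = (state == 1) & (rng.random(n_items) < p_consolidate)
    state[move1] = 1
    state[move2] = 2
    # immediate recall: retrieval depends on the current state
    retrieve_p = np.where(state == 2, r_learned, np.where(state == 1, r_partial, 0.0))
    recalled = rng.random(n_items) < retrieve_p
    print(f"trial {trial + 1}: recalled {recalled.sum()} of {n_items}")
```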
Affiliation(s)
- Ruben D Potthoff
- Department of Psychosocial Research and Epidemiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Sanne B Schagen
- Department of Psychosocial Research and Epidemiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands