Haile TM, Prat CS, Stocco A. One Size Does Not Fit All: Idiographic Computational Models Reveal Individual Differences in Learning and Meta-Learning Strategies. Top Cogn Sci. 2024. [PMID: 38569120] [DOI: 10.1111/tops.12730]
[Received: 07/20/2022] [Revised: 03/07/2024] [Accepted: 03/18/2024] [Indexed: 04/05/2024]
Abstract
Complex skill learning depends on the joint contribution of multiple interacting systems: working memory (WM), declarative long-term memory (LTM), and reinforcement learning (RL). The present study aims to understand individual differences in the relative contributions of these systems during learning. We built four idiographic ACT-R models of performance on a stimulus-response learning task, the Reinforcement Learning Working Memory task. The task consisted of short (3-image) and long (6-image) feedback-based learning blocks. A no-feedback test phase was administered after learning, with an interfering task inserted between learning and test. Our four models included two single-mechanism models (RL only and LTM only) and two integrated RL-LTM models: (a) an RL-based meta-learning model, which selects between RL and LTM based on recent success, and (b) a parameterized RL-LTM selection model that mixes the two mechanisms at fixed proportions, independent of learning success. Each model was the best fit for some proportion of our learners (LTM: 68.7%, RL: 4.8%, Meta-RL: 13.25%, bias-RL: 13.25% of participants), suggesting fundamental differences in the way individuals deploy basic learning mechanisms, even for a simple stimulus-response task. Finally, long-term declarative memory appears to be the preferred learning strategy for this task regardless of block length (3- vs. 6-image blocks), as indicated both by the large number of subjects whose learning characteristics were best captured by the LTM-only model and by a preference for LTM over RL in both of our integrated models; detecting such individual preferences is a strength of our idiographic approach.
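The abstract's RL-based meta-learning model selects between an RL and an LTM strategy according to recent success. The sketch below is an illustrative approximation only, not the authors' ACT-R implementation: it assumes each strategy's recent success is tracked with an exponential moving average and that selection is a softmax over those estimates (the update rate, temperature, and starting values are all assumed parameters).

```python
import math
import random

class MetaLearner:
    """Illustrative meta-learning sketch (not the paper's ACT-R model):
    choose between an 'RL' and an 'LTM' strategy based on each strategy's
    recent success, tracked as an exponential moving average."""

    def __init__(self, rate=0.2, temperature=0.1, seed=0):
        self.success = {"RL": 0.5, "LTM": 0.5}  # neutral starting estimates
        self.rate = rate                        # EMA update rate (assumed)
        self.temperature = temperature          # softmax temperature (assumed)
        self.rng = random.Random(seed)

    def choose(self):
        # Softmax over recent success: a strategy that has succeeded more
        # recently is proportionally more likely to be selected.
        weights = {s: math.exp(v / self.temperature)
                   for s, v in self.success.items()}
        total = sum(weights.values())
        r = self.rng.random() * total
        for strategy, w in weights.items():
            r -= w
            if r <= 0:
                return strategy
        return strategy  # numerical fallback

    def update(self, strategy, correct):
        # Move the chosen strategy's success estimate toward the outcome.
        self.success[strategy] += self.rate * (float(correct) - self.success[strategy])
```

The fixed-proportion ("bias-RL") variant described in the abstract would replace `choose` with a constant mixing probability that ignores `self.success` entirely.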