1. Pedersen MK, Díaz CMC, Wang QJ, Alba-Marrugo MA, Amidi A, Basaiawmoit RV, Bergenholtz C, Christiansen MH, Gajdacz M, Hertwig R, Ishkhanyan B, Klyver K, Ladegaard N, Mathiasen K, Parsons C, Rafner J, Villadsen AR, Wallentin M, Zana B, Sherson JF. Measuring Cognitive Abilities in the Wild: Validating a Population-Scale Game-Based Cognitive Assessment. Cogn Sci 2023;47:e13308. [PMID: 37354036] [DOI: 10.1111/cogs.13308]
Abstract
Rapid individual cognitive phenotyping holds the potential to revolutionize domains as wide-ranging as personalized learning, employment practices, and precision psychiatry. Going beyond limitations imposed by traditional lab-based experiments, new efforts have been underway toward greater ecological validity and participant diversity to capture the full range of individual differences in cognitive abilities and behaviors across the general population. Building on this, we developed Skill Lab, a novel game-based tool that simultaneously assesses a broad suite of cognitive abilities while providing an engaging narrative. Skill Lab consists of six mini-games as well as 14 established cognitive ability tasks. Using a popular citizen science platform (N = 10,725), we conducted a comprehensive validation in the wild of a game-based cognitive assessment suite. Based on the game and validation task data, we constructed reliable models to simultaneously predict eight cognitive abilities based on the users' in-game behavior. Follow-up validation tests revealed that the models can discriminate nuances contained within each separate cognitive ability as well as capture a shared main factor of generalized cognitive ability. Our game-based measures are five times faster to complete than the equivalent task-based measures and replicate previous findings on the decline of certain cognitive abilities with age in our large cross-sectional population sample (N = 6369). Taken together, our results demonstrate the feasibility of rapid in-the-wild systematic assessment of cognitive abilities as a promising first step toward population-scale benchmarking and individualized mental health diagnostics.
Affiliation(s)
- Mads Kock Pedersen
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
  - Department of Business Development and Technology, Aarhus University
- Qian Janice Wang
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
  - Department of Food Science, Aarhus University
- Ali Amidi
  - Department of Psychology and Behavioural Sciences, Aarhus University
- Morten H Christiansen
  - Department of Psychology, Cornell University
  - School of Communication and Culture, Aarhus University
  - Interacting Minds Centre, Aarhus University
- Miroslav Gajdacz
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
- Ralph Hertwig
  - Center for Adaptive Rationality, Max Planck Institute for Human Development
- Kim Klyver
  - Department of Entrepreneurship & Relationship Management, University of Southern Denmark
  - Entrepreneurship, Commercialization and Innovation Centre (ECIC), University of Adelaide
- Nicolai Ladegaard
  - Department of Clinical Medicine - Department of Affective Disorders, Aarhus University Hospital
- Kim Mathiasen
  - Department of Clinical Medicine - Department of Affective Disorders, Aarhus University Hospital
- Janet Rafner
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
- Mikkel Wallentin
  - School of Communication and Culture, Aarhus University
  - Interacting Minds Centre, Aarhus University
- Blanka Zana
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
- Jacob F Sherson
  - Center for Hybrid Intelligence, Department of Management, Aarhus University
  - School of Communication and Culture, Aarhus University
2. Walker R. The Opportunity Cost of Compulsory Research Participation: Why Psychology Departments Should Abolish Involuntary Participant Pools. Sci Eng Ethics 2020;26:2835-2847. [PMID: 32533447] [DOI: 10.1007/s11948-020-00232-2]
Abstract
Psychology departments often require undergraduates to participate in faculty and graduate research as part of their course or face a penalty. Involuntary participant pools (human subject pools) in which students are compulsorily enrolled are objectively coercive. Students have less autonomy than other research participants because they face a costly alternative task or the penalties that accompany failure to meet a course requirement if they choose not to participate. By contrast, other research participants are free to refuse consent without cost or penalty. Some researchers claim that the educational value of participation justifies the requirement. They treat coercion as a cost that can be outweighed by the benefits to students. This paper argues that such an approach is flawed because coercion is not like other costs and that educational value is inherently low relative to personal study or classroom time. The unethical nature of involuntary participation is best demonstrated with an opportunity cost analysis. This shows that students are forced to sacrifice higher value alternatives that they have paid to do and undertake a lower value activity that principally benefits others. Faculty have a conflict of interest as they are the beneficiaries of student coercion in their role as researchers and responsible for student achievement in their role as teachers. Voluntary participant pools can resolve this conflict but at the cost of reducing the supply of participants. A change in departmental research conduct is required to restore the autonomy of students who are competent adults and not legitimate subjects of paternalism when it comes to research participation.
Affiliation(s)
- Ruth Walker
  - Philosophy Programme, University of Waikato, Hamilton, New Zealand
3. Osgood JM, Kase SE, Zaroukian EG, Quartana PJ. Online Intervention Reduces Hostile Attribution Bias, Anger, Aggressive Driving, and Cyber-Aggression: Results of Two Randomized Trials. Cogn Ther Res 2021;45:310-321. [DOI: 10.1007/s10608-020-10147-8]
4.
Abstract
Once a fixture of research in the social and behavioral sciences, volunteer subjects are now only rarely used in human subjects research. Yet volunteers are a potentially valuable resource, especially for research conducted online. We argue that online volunteer laboratories are able to produce high-quality data comparable to that from other online pools. The scalability of volunteer labs means that they can produce large volumes of high-quality data for multiple researchers, while imposing little or no financial burden. Using a range of original tests, we show that volunteer and paid respondents have different motivations for participating in research, but have similar descriptive compositions. Furthermore, volunteer samples are able to replicate classic and contemporary social science findings, and produce high levels of overall response quality comparable to paid subjects. Our results suggest that online volunteer labs represent a potentially significant untapped source of human subjects data.
Affiliation(s)
- Austin M. Strange
  - Institute for Quantitative Social Science and Department of Government, Harvard University, Cambridge, MA, United States of America
- Ryan D. Enos
  - Institute for Quantitative Social Science and Department of Government, Harvard University, Cambridge, MA, United States of America
- Mark Hill
  - Institute for Quantitative Social Science and Department of Government, Harvard University, Cambridge, MA, United States of America
- Amy Lakeman
  - Institute for Quantitative Social Science and Department of Government, Harvard University, Cambridge, MA, United States of America
5. Vicens J, Perelló J, Duch J. Citizen Social Lab: A digital platform for human behavior experimentation within a citizen science framework. PLoS One 2018;13:e0207219. [PMID: 30521566] [PMCID: PMC6283465] [DOI: 10.1371/journal.pone.0207219]
Abstract
Cooperation is one of the behavioral traits that define human beings; however, we are still trying to understand why humans cooperate. Behavioral experiments have long been conducted to shed light on the mechanisms behind cooperation and other behavioral traits. However, most of these experiments have taken place in laboratories with highly controlled protocols but with limitations in terms of subject pool or decision context, which limits the reproducibility and generalizability of the results obtained. In an attempt to overcome these limitations, some experimental approaches have moved human behavior experimentation from laboratories to public spaces, where behaviors occur naturally, and have opened participation to the general public within the citizen science framework. Given the open nature of these environments, it is critical to establish appropriate data collection protocols to maintain the same data quality that one can obtain in the laboratory. In this article we introduce Citizen Social Lab, a software platform designed for use in the wild following citizen science practices. The platform allows researchers to collect data in a more realistic context while maintaining scientific rigor, and it is structured in a modular and scalable way so that it can easily be adapted for online or brick-and-mortar experimental laboratories. Following citizen science guidelines, the platform is designed to motivate a broader population to participate and to promote engagement with and learning about the scientific research process. We also review the main results of the experiments performed with the platform to date, and the set of games that each experiment includes. Finally, we evaluate several properties of the platform, such as the heterogeneity of the experimental samples, the satisfaction level of participants, and the technical parameters that demonstrate the robustness of the platform and the quality of the data collected.
Affiliation(s)
- Julián Vicens
  - Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, Tarragona, Spain
  - Universitat de Barcelona Institute of Complex Systems UBICS, Universitat de Barcelona, Barcelona, Spain
  - Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain
- Josep Perelló
  - Universitat de Barcelona Institute of Complex Systems UBICS, Universitat de Barcelona, Barcelona, Spain
  - Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain
- Jordi Duch
  - Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, Tarragona, Spain