1
Zimny L, Schroeders U, Wilhelm O. Ant colony optimization for parallel test assembly. Behav Res Methods 2024; 56:5834-5848. PMID: 38277085; PMCID: PMC11335849; DOI: 10.3758/s13428-023-02319-7.
Abstract
Ant colony optimization (ACO) algorithms have previously been used to compile single short scales of psychological constructs. In the present article, we showcase the versatility of the ACO to construct multiple parallel short scales that adhere to several competing and interacting criteria simultaneously. Based on an initial pool of 120 knowledge items, we assembled three 12-item tests that (a) adequately cover the construct at the domain level, (b) follow a unidimensional measurement model, (c) allow reliable and (d) precise measurement of factual knowledge, and (e) are gender-fair. Moreover, we aligned the test characteristic and test information functions of the three tests to establish the equivalence of the tests. We cross-validated the assembled short scales and investigated their association with the full scale and covariates that were not included in the optimization procedure. Finally, we discuss potential extensions to metaheuristic test assembly and the equivalence of parallel knowledge tests in general.
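To illustrate the kind of algorithm this abstract refers to, here is a minimal Python sketch of ACO-style item selection (pheromone-guided sampling of fixed-length item subsets, with evaporation and reinforcement of good solutions). It is not the authors' implementation: the item statistics, the composite objective, and all parameter values below are hypothetical placeholders chosen only to make the sketch runnable.

```python
# Illustrative ACO sketch for selecting a fixed-size item subset from a pool.
# All item properties, weights, and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_items, test_len = 120, 12      # pool size and target scale length (as in the abstract)
n_ants, n_iter = 50, 150         # colony size and iterations (arbitrary choices)
evaporation, boost = 0.1, 1.0    # pheromone decay rate and reward strength

# Hypothetical item properties standing in for quality and content-coverage criteria.
item_quality = rng.uniform(0.2, 0.8, n_items)   # e.g., item discrimination
item_domain = rng.integers(0, 4, n_items)       # e.g., content-domain labels

def objective(subset):
    """Composite score: high average item quality plus balanced domain coverage."""
    quality = item_quality[subset].mean()
    counts = np.bincount(item_domain[subset], minlength=4)
    balance = -counts.std()                     # penalize uneven domain coverage
    return quality + 0.1 * balance

pheromone = np.ones(n_items)
best_subset, best_score = None, -np.inf

for _ in range(n_iter):
    for _ in range(n_ants):
        # Each ant samples a candidate test, with selection probabilities
        # proportional to the current pheromone levels.
        probs = pheromone / pheromone.sum()
        subset = rng.choice(n_items, size=test_len, replace=False, p=probs)
        score = objective(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    # Evaporate pheromone everywhere, then reinforce the best solution found so far.
    pheromone *= (1 - evaporation)
    pheromone[best_subset] += boost * evaporation

print("selected items:", sorted(best_subset.tolist()), "score:", round(best_score, 3))
```

In a real parallel-assembly application, the objective would instead score criteria such as measurement-model fit, reliability, alignment of test characteristic and information functions, and gender fairness across all scales simultaneously, as described in the abstract.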
Affiliation(s)
- Luc Zimny
- Institute of Psychology and Education, Ulm University, Albert-Einstein-Allee 47, 89081, Ulm, Germany.
- Oliver Wilhelm
- Institute of Psychology and Education, Ulm University, Albert-Einstein-Allee 47, 89081, Ulm, Germany
2
Gnambs T, Lenhard W. Remote Testing of Reading Comprehension in 8-Year-Old Children: Mode and Setting Effects. Assessment 2024; 31:248-262. PMID: 36890734; PMCID: PMC10822056; DOI: 10.1177/10731911231159369.
Abstract
Proctored remote testing of cognitive abilities in the private homes of test-takers is becoming an increasingly popular alternative to standard psychological assessments in test centers or classrooms. Because these tests are administered under less standardized conditions, differences in computer devices or situational contexts might contribute to measurement biases that impede fair comparisons between test-takers. Since it is unclear whether remote cognitive testing is a feasible assessment approach for young children, the present study (N = 1,590) evaluated a test of reading comprehension administered to children at the age of 8 years. To disentangle mode from setting effects, the children completed the test either in the classroom (on paper or on a computer) or remotely (on a tablet or a laptop). Analyses of differential response functioning found notable differences between assessment conditions for selected items. However, biases in test scores were largely negligible. Small setting effects between on-site and remote testing were observed only for children with below-average reading comprehension. Moreover, response effort was higher in the three computerized test versions, among which reading on tablets most strongly resembled the paper condition. Overall, these results suggest that, on average, remote testing introduces little measurement bias even for young children.
Affiliation(s)
- Timo Gnambs
- Leibniz Institute for Educational Trajectories, Bamberg, Germany
3
Brysbaert M. Designing and evaluating tasks to measure individual differences in experimental psychology: a tutorial. Cogn Res Princ Implic 2024; 9:11. PMID: 38411837; PMCID: PMC10899130; DOI: 10.1186/s41235-024-00540-2.
Abstract
Experimental psychology is witnessing an increase in research on individual differences, which requires the development of new tasks that can reliably assess variations among participants. To do this, cognitive researchers need statistical methods that many of them have not learned during their training. This lack of expertise can pose challenges not only in designing good, new tasks but also in evaluating tasks developed by others. To bridge this gap, this article provides an overview of test psychology applied to performance tasks, covering fundamental concepts such as standardization, reliability, norming, and validity. It provides practical guidelines for developing and evaluating experimental tasks, as well as for combining tasks to better understand individual differences. To further address common misconceptions, the article lists 11 prevailing myths. The purpose of this guide is to provide experimental psychologists with the knowledge and tools needed to conduct rigorous and insightful studies of individual differences.
Affiliation(s)
- Marc Brysbaert
- Department of Experimental Psychology, Ghent University, 9000, Ghent, Belgium.
4
Franca M, Bolognini N, Brysbaert M. Seeing emotions in the eyes: a validated test to study individual differences in the perception of basic emotions. Cogn Res Princ Implic 2023; 8:67. PMID: 37919608; PMCID: PMC10622392; DOI: 10.1186/s41235-023-00521-x.
Abstract
People are able to perceive emotions in the eyes of others and can therefore see emotions when individuals wear face masks. Research has been hampered by the lack of a good test to measure the perception of basic emotions in the eyes. In two studies with 358 and 200 participants, respectively, we developed a test of the ability to see anger, disgust, fear, happiness, sadness, and surprise in images of eyes. Each emotion is measured with 8 stimuli (4 male actors and 4 female actors), matched in terms of difficulty and item discrimination. Participants reliably differed in their performance on the Seeing Emotions in the Eyes test (SEE-48). The test correlated well not only with the Reading the Mind in the Eyes Test (RMET) but also with the Situational Test of Emotion Understanding (STEU), indicating that the SEE-48 measures not only low-level perceptual skills but also broader skills of emotion perception and emotional intelligence. The test is freely available for research and clinical purposes.
Affiliation(s)
- Maria Franca
- Ph.D. Program in Neuroscience, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Nadia Bolognini
- Department of Psychology and NeuroMI - Milan Centre for Neuroscience, University of Milano-Bicocca, Milan, Italy.
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Via Mercalli 32, 20122, Milan, Italy.
- Marc Brysbaert
- Department of Experimental Psychology, Ghent University, H. Dunantlaan 2, 9000, Ghent, Belgium.
5
Hanfstingl B, Gnambs T, Fazekas C, Gölly KI, Matzer F, Tikvić M. The Dimensionality of the Brief COPE Before and During the COVID-19 Pandemic. Assessment 2023; 30:287-301. PMID: 34654329; PMCID: PMC9902999; DOI: 10.1177/10731911211052483.
Abstract
The Brief COPE (Coping Orientation to Problems Experienced) is a frequently used questionnaire assessing 14 theoretically derived coping mechanisms, but psychometric research has suggested inconsistent results concerning its factor structure. The aim of this study was to investigate primary and secondary order factor structures of the Brief COPE during the COVID-19 pandemic by testing 11 different models with confirmatory factor analyses and to assess differences across sex, age groups, and relationship status. Altogether, 529 respondents from Austria and Germany participated in a web-based survey. Results supported the originally hypothesized 14-factor structure but did not support previously described higher-order structures. However, bass-ackwards analyses suggested systematic overlap between different factors, which might have contributed to different factor solutions in previous research. Measurement invariance across sex, age groups, and relationship status was confirmed. Findings suggest that cultural and situational aspects as well as the functional level should be considered in research on the theoretical framing of coping behavior.
Affiliation(s)
- Barbara Hanfstingl
- University of Klagenfurt, Klagenfurt am Wörthersee, Austria
- Institute of Instructional and School Development, University of Klagenfurt, Sterneckstrasse 15, 9020 Klagenfurt am Wörthersee, Austria
- Timo Gnambs
- Leibniz Institute for Educational Trajectories, Bamberg, Germany
- Matias Tikvić
- University of Klagenfurt, Klagenfurt am Wörthersee, Austria
6
Gc at its boundaries: A cross-national investigation of declarative knowledge. Learning and Individual Differences 2023. DOI: 10.1016/j.lindif.2023.102267.
7
Kretzschmar A, Wagner L, Gander F, Hofmann J, Proyer RT, Ruch W. Character strengths and fluid intelligence. J Pers 2022; 90:1057-1069. PMID: 35303763; PMCID: PMC9790612; DOI: 10.1111/jopy.12715.
Abstract
OBJECTIVE: Research on the associations between cognitive and noncognitive personality traits has widely neglected character strengths, that is, positively and morally valued personality traits that constitute good character. METHOD: The present study aimed to bridge this gap by studying the associations between character strengths and fluid intelligence using different operationalizations of character strengths (including self- and informant-reports) and fluid intelligence in children, adolescents, and adults. RESULTS: The results, based on four samples (N = 193/290/330/324), suggested that morally valued personality traits are independent of fluid intelligence, with the exception of love of learning, which showed small but robust positive relationships with fluid intelligence across all samples. CONCLUSIONS: Nonetheless, we argue for further research on the associations with other cognitive abilities and on interactions between character strengths and intelligence when examining their relationships with external criteria.
Affiliation(s)
- Lisa Wagner
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Fabian Gander
- Department of Psychology, University of Basel, Basel, Switzerland
- Jennifer Hofmann
- Department of Applied Psychology, University of Applied Sciences Zurich, Zurich, Switzerland
- René T. Proyer
- Department of Psychology, Martin-Luther University Halle-Wittenberg, Halle, Germany
- Willibald Ruch
- Department of Psychology, University of Zurich, Zurich, Switzerland
8
Goecke B, Staab M, Schittenhelm C, Wilhelm O. Stop Worrying about Multiple-Choice: Fact Knowledge Does Not Change with Response Format. J Intell 2022; 10:102. PMID: 36412782; PMCID: PMC9680349; DOI: 10.3390/jintelligence10040102.
Abstract
Declarative fact knowledge is a key component of crystallized intelligence. It is typically measured with multiple-choice (MC) items. Other response formats, such as open-ended formats, are less frequently used, although they might be superior for measuring crystallized intelligence. Whereas MC formats presumably only require recognizing the correct response to a question, open-ended formats supposedly require cognitive processes such as searching for, retrieving, and actively deciding on a response from long-term memory. If the methods of inquiry alter the cognitive processes involved, mean changes between methods for assessing declarative knowledge should be accompanied by changes in the covariance structure. We tested these assumptions in two online studies administering declarative knowledge items in different response formats (MC, open-ended, and open-ended with cues). Item difficulty clearly increases in the open-ended formats, although effects in logistic regression models vary slightly across items. Importantly, latent variable analyses suggest that the method of inquiry does not affect what is measured with different response formats. These findings clearly endorse the position that crystallized intelligence does not change as a function of the response format.
Affiliation(s)
- Benjamin Goecke
- Institute for Psychology and Pedagogy, Ulm University, Albert-Einstein-Allee 47, 89081 Ulm, Germany
9
Steger D, Jankowsky K, Schroeders U, Wilhelm O. The Road to Hell Is Paved With Good Intentions: How Common Practices in Scale Construction Hurt Validity. Assessment 2022:10731911221124846. PMID: 36176178; PMCID: PMC10363927; DOI: 10.1177/10731911221124846.
Abstract
Sound scale construction is pivotal to the measurement of psychological constructs. Common item sampling procedures emphasize aspects of reliability to the disadvantage of aspects of validity, which are less tangible. We use a health knowledge test as an example to demonstrate how item sampling strategies that focus on either factor saturation or construct coverage influence scale composition, and we show how to find a trade-off between these two opposing needs. More specifically, we compile three 75-item health knowledge scales using Ant Colony Optimization, a metaheuristic algorithm that is inspired by the foraging behavior of ants, to optimize factor saturation, construct coverage, or a compromise of both. We demonstrate that our approach is well suited to balance out construct coverage and factor saturation when constructing a health knowledge test. Finally, we discuss conceptual problems with the modeling of declarative knowledge and provide recommendations for the assessment of health knowledge.
10
Validated tests for language research with university students whose native language is English: Tests of vocabulary, general knowledge, author recognition, and reading comprehension. Behav Res Methods 2022; 55:1036-1068. PMID: 35578105; DOI: 10.3758/s13428-022-01856-x.
Abstract
We present five studies aimed at developing an L1 vocabulary test for English-speaking university students. Such a test is useful as an indicator of crystallized intelligence and because vocabulary size correlates well with reading comprehension. In the first study, we tested 100 written words with four answer alternatives, based on Nation's Vocabulary Size Test. Analysis suggested two factors, which we interpreted as the possible existence of two types of difficult words: unknown words for general knowledge and unknown words for specialized knowledge. In Study 2, we attempted to develop a vocabulary test for each type of word, and these tests were then validated in Study 3. Since the test for general words proved too easy for the target population, we improved it in a fourth study by creating and testing more difficult items. Finally, a fifth study was conducted to validate the new test. Unexpectedly, Study 5 found a high correlation (r = .82) between the general knowledge vocabulary test and the specialized knowledge vocabulary test, suggesting that they measure the same latent factor, contrary to our initial assumption. Both tests have high reliability (r > .85) and correlate well (r > .4) with general knowledge, author recognition, and reading comprehension. In addition, a collection of other language tests was used and improved to verify the validity of the vocabulary tests. An exploratory factor analysis of all tests identified three factors (text comprehension, crystallized intelligence, and reading speed), with the vocabulary tests loading on the factor crystallized intelligence, which in turn correlates with reading comprehension. Structural equation modeling confirmed the interpretation.
11
Rozgonjuk D, Schmitz F, Kannen C, Montag C. Cognitive ability and personality: Testing broad to nuanced associations with a smartphone app. Intelligence 2021. DOI: 10.1016/j.intell.2021.101578.
12
Geiger M, Bärwaldt R, Wilhelm O. The Good, the Bad, and the Clever: Faking Ability as a Socio-Emotional Ability? J Intell 2021; 9:13. PMID: 33806368; PMCID: PMC8006246; DOI: 10.3390/jintelligence9010013.
Abstract
Socio-emotional abilities have been proposed as an extension to models of intelligence, but earlier measurement approaches have either not fulfilled criteria of ability measurement or have covered only predominantly receptive abilities. We argue that faking ability (the ability to adjust responses on questionnaires to present oneself in a desired manner) is a socio-emotional ability that can broaden our understanding of these abilities and intelligence in general. To test this theory, we developed new instruments to measure the ability to fake bad (malingering) and administered them jointly with established tests of the ability to fake good in a general sample of n = 134. Participants also completed multiple tests of emotion perception along with tests of emotion expression posing, pain expression regulation, and working memory capacity. We found that individual differences in faking ability tests are best explained by a general factor that had a large correlation with receptive socio-emotional abilities and zero to medium-sized correlations with different productive socio-emotional abilities. All correlations remained small after controlling for shared variance with general mental ability, as indicated by tests of working memory capacity. We conclude that faking ability is indeed correlated meaningfully with other socio-emotional abilities and discuss the implications for intelligence research and applied ability assessment.
Affiliation(s)
- Mattis Geiger
- Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany
- Romy Bärwaldt
- Department of Psychology, University of Münster, D-48149 Münster, Germany
- Oliver Wilhelm
- Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany
13
Sindermann C, Schmitt HS, Rozgonjuk D, Elhai JD, Montag C. The evaluation of fake and true news: on the role of intelligence, personality, interpersonal trust, ideological attitudes, and news consumption. Heliyon 2021; 7:e06503. PMID: 33869829; PMCID: PMC8035512; DOI: 10.1016/j.heliyon.2021.e06503.
Abstract
Individual differences in cognitive abilities and personality help to understand individual differences in various human behaviors. Previous work investigated individual characteristics in light of believing (i.e., misclassifying) fake news. However, only little is known about the misclassification of true news as fake, although it appears equally important to correctly identify fake and true news for unbiased belief formation. An online study with N = 530 (n = 396 men) participants was conducted to investigate performance in a Fake and True News Test in association with i) performance in fluid and crystallized intelligence tests and the Big Five Inventory, and ii) news consumption as a mediating variable between individual characteristics and performance in the Fake and True News Test. Results showed that fluid intelligence was negatively correlated with believing fake news (the association did not remain significant in a regression model); crystallized intelligence was negatively linked to misclassifying true news. Extraversion was negatively and crystallized intelligence was positively associated with fake and true news discernment. The number of different news sources consumed correlated negatively with misclassifying true news and positively with fake and true news discernment. However, no meaningful mediation effect of news consumption was observed. Only interpersonal trust was negatively related to misclassifying both fake and true news as well as positively related to news discernment. The present findings reveal that underlying factors of believing fake news and misclassifying true news are mostly different. Strategies that might help to improve the abilities to identify both fake and true news based on the present findings are discussed.
Affiliation(s)
- Cornelia Sindermann
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, 89081 Ulm, Germany
- Helena Sophia Schmitt
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, 89081 Ulm, Germany
- Dmitri Rozgonjuk
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, 89081 Ulm, Germany
- Institute of Mathematics and Statistics, University of Tartu, Tartu, Estonia
- Jon D. Elhai
- Department of Psychology, and Department of Psychiatry, University of Toledo, Toledo, OH, USA
- Christian Montag
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, 89081 Ulm, Germany
14
15
Goecke B, Weiss S, Steger D, Schroeders U, Wilhelm O. Testing competing claims about overclaiming. Intelligence 2020. DOI: 10.1016/j.intell.2020.101470.