1
Li M, Zhang B, Mou Y. Though Forced, Still Valid: Examining the Psychometric Performance of Forced-Choice Measurement of Personality in Children and Adolescents. Assessment 2025; 32:521-543. PMID: 38867477. DOI: 10.1177/10731911241255841.
Abstract
Unveiling the roles personality plays during childhood and adolescence necessitates its accurate measurement, commonly using traditional Likert-type (LK) scales. However, this format is susceptible to various response biases, which can be particularly prevalent in children and adolescents, thus likely undermining measurement accuracy. Forced-choice (FC) scales appear to be a promising alternative because they are largely free from these biases by design. However, some argue that the FC format may not perform satisfactorily in children and adolescents due to its complexity, and little empirical evidence exists regarding its suitability for these age groups. As such, the current study examined the psychometric performance of an FC measure of the Big Five personality factors in three child and adolescent samples: 5th to 6th graders (N = 428), 7th to 8th graders (N = 449), and 10th to 11th graders (N = 555). Across the three age groups, the FC scale demonstrated a better fit to the Big Five model and better discriminant validity than its LK counterpart. Personality scores from the FC scale also converged well with those from the LK scale and demonstrated high reliability as well as sizable criterion-related validity. Furthermore, the FC scale had more invariant statements than its LK counterpart across age groups. Overall, we found good evidence that FC measurement of personality is suitable for children and adolescents.
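As a concrete illustration of the response format studied here, the following minimal Python sketch simulates how one forced-choice pair might be answered under a Thurstonian-style utility model. The statements, loadings, intercepts, and trait values are invented for illustration; this is not the authors' instrument or scoring code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical latent trait scores (z-scores) for one respondent.
theta = {"E": 0.8, "C": -0.3}

# One FC block pairs an Extraversion statement with a Conscientiousness
# statement; loadings and intercepts are illustrative, not estimated.
items = [("I start conversations.", "E", 1.1, 0.2),
         ("I follow a schedule.",   "C", 0.9, 0.1)]

# Thurstonian view: each statement has a latent utility
#   u = intercept + loading * trait + noise,
# and the respondent endorses the statement with the higher utility.
utilities = [b + a * theta[trait] + rng.normal()
             for _, trait, a, b in items]
print("Chosen:", items[int(np.argmax(utilities))][0])
```

Because only the comparison between statements is recorded, a bias that shifts both utilities equally (e.g., uniform acquiescence) cancels out, which is the design property behind the abstract's claim that FC scales resist response biases.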
Affiliation(s)
- Mengtong Li
- University of Illinois Urbana-Champaign, IL, USA
- Bo Zhang
- University of Illinois Urbana-Champaign, IL, USA
- Yi Mou
- Sun Yat-Sen University, Guangzhou, China
2
Graña DF, S Kreitchmann R, Abad FJ, Sorrel MA. Equally vs. unequally keyed blocks in forced-choice questionnaires: Implications on validity and reliability. J Pers Assess 2025; 107:392-405. PMID: 39526652. DOI: 10.1080/00223891.2024.2420869.
Abstract
Forced-choice (FC) questionnaires have gained scientific interest over the last decades. However, the inclusion of unequally keyed item pairs in FC questionnaires remains a subject of debate, as there is evidence supporting both their use and their avoidance. Designing unequally keyed pairs may be more difficult when social desirability is considered, as such pairs might allow respondents to identify the ideal response. Nevertheless, they may enhance the reliability of scores and their potential for normative interpretation. To empirically investigate this topic, data were collected from 1,125 undergraduate psychology students who completed a personality item pool measuring the Big Five personality traits in Likert-type format and two FC questionnaires (with and without unequally keyed pairs). These questionnaires were compared in terms of reliability, convergent and criterion validity, and ipsativity of the scores, along with insights into the construction process. While constructing questionnaires with unequally keyed blocks presented challenges in matching items on their social desirability, the differences observed in reliability, validity, and ipsativity were sporadic and lacked systematic patterns, suggesting that neither format was clearly superior. Given these results, using only equally keyed blocks is recommended to minimize potential validity issues associated with response biases.
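To make the ipsativity issue concrete, here is a minimal Python sketch (toy data, not the study's materials) showing why classical scoring of fully equally keyed blocks yields ipsative scores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy design: 10 blocks, each pairing a positively keyed statement from
# trait A with a positively keyed statement from trait B. Classical
# scoring awards one point to the trait whose statement is chosen.
n_persons, n_blocks = 5, 10
choices = rng.integers(0, 2, size=(n_persons, n_blocks))  # 0 -> A, 1 -> B

score_a = (choices == 0).sum(axis=1)
score_b = (choices == 1).sum(axis=1)

# With only equally keyed blocks, every person's trait scores sum to the
# number of blocks: scores are ipsative, so only within-person (not
# normative, between-person) comparisons are strictly meaningful.
print(score_a + score_b)  # -> [10 10 10 10 10]
```

Unequally keyed blocks break this constant-sum constraint, which is the normative-interpretation argument the study weighs against their social-desirability costs.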
Affiliation(s)
- Diego F Graña
- Department of Social Psychology and Methodology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Rodrigo S Kreitchmann
- Department of Methodology of Behavioral Sciences, Faculty of Psychology, Universidad Nacional de Educación a Distancia, Madrid, Spain
- IMIENS: Joint Research Institute, UNED-Health Institute Carlos III, Madrid, Spain
- Francisco J Abad
- Department of Social Psychology and Methodology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Miguel A Sorrel
- Department of Social Psychology and Methodology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
3
Padgett C, Nguyen H, Cook PS, Hannon O, Doherty K, Ziebell J, Eccleston C. Revisiting the Common Misconceptions About Traumatic Brain Injury Scale (CM-TBI); What Does It Really Measure? J Head Trauma Rehabil 2025:00001199-990000000-00260. PMID: 40203053. DOI: 10.1097/htr.0000000000001059.
Abstract
OBJECTIVE: To examine the factor structure and validity of the 40-item Common Misconceptions about Traumatic Brain Injury (CM-TBI) scale, and to develop and evaluate additional concussion-focussed items to broaden the instrument's scope. METHOD: A purposive sample of 988 participants from across all inhabited continents (mean age 43 years, range 16-90; 84% female) completed the CM-TBI and 5 additional concussion items at the commencement of an online course on TBI. RESULTS: Item analysis resulted in the removal of 19 items due to ambiguous wording and poor conceptual integrity, and/or low discrimination and low inter-item correlations. An exploratory factor analysis on the remaining 26 items revealed that a 3-factor model had the best fit; an additional 8 items were removed due to low or cross-loadings, low communalities, and/or low conceptual relevance, resulting in an 18-item revised scale. CONCLUSION: There is no psychometric support for the current structure of the CM-TBI. This is likely due to changes in the understanding of TBI since the scale's inception and to issues of conceptual ambiguity. We also propose that a distinction must be made between knowledge and misconceptions, as these are 2 related but different constructs that are not clearly delineated in the current CM-TBI. The revised scale offers researchers a more modern, focussed, and valid measure, but a new scale to measure knowledge of, and misconceptions about, TBI is urgently needed.
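The item-screening step can be illustrated generically. The Python sketch below uses simulated data; the 0.20 cutoff and the screening routine are assumptions for illustration, not the authors' exact procedure. It flags items with low corrected item-total correlations of the kind the study removed:

```python
import numpy as np

def corrected_item_total(responses: np.ndarray) -> np.ndarray:
    """Correlate each item with the sum of the *other* items."""
    rest = responses.sum(axis=1, keepdims=True) - responses
    return np.array([np.corrcoef(responses[:, j], rest[:, j])[0, 1]
                     for j in range(responses.shape[1])])

# Simulated true/false misconception responses: 200 people, 6 items;
# item 6 is pure noise and should show near-zero discrimination.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
X = (ability + rng.normal(size=(200, 6)) > 0).astype(int)
X[:, 5] = rng.integers(0, 2, size=200)

r = corrected_item_total(X)
print(np.round(r, 2))
print("flagged for removal:", np.flatnonzero(r < 0.20))  # cutoff assumed
```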
Affiliation(s)
- Christine Padgett
- School of Psychological Sciences, College of Health and Medicine, University of Tasmania, Nipaluna/Hobart, Lutruwita/Tasmania, Australia (Dr Padgett and Ms Hannon); Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Nipaluna/Hobart, Lutruwita/Tasmania, Australia (Drs Nguyen, Doherty, Ziebell, and Eccleston); and School of Social Sciences, College of Arts, Law and Education, University of Tasmania, Nipaluna/Hobart, Lutruwita/Tasmania, Australia (A/Prof Cook)
4
Li Z, Li L, Zhang B, Cao M, Tay L. Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models. Multivariate Behavioral Research 2025; 60:161-183. PMID: 39215711. DOI: 10.1080/00273171.2024.2394607.
Abstract
Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model against three other models: Samejima's Graded Response Model, the Generalized Graded Unfolding Model, and the Dominance Item Response Tree (DIRTree) model. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying the item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. We then used datasets from empirical studies to show that the UIRTree model fits personality datasets well and produces more reasonable parameter estimates than competing models; a strong presence of RSs was also revealed by the UIRTree model. Finally, we provide examples with R code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.
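For readers unfamiliar with IRTree models, the Python sketch below shows one standard way a 5-point Likert response is decomposed into binary pseudo-items via a midpoint/direction/extremity tree. This is a generic mapping from the IRTree literature, not the UIRTree specification itself (which further replaces the dominance content node with an unfolding one):

```python
import numpy as np

# One standard IRTree decomposition of a 5-point Likert response into
# three binary pseudo-items: did the person pick the midpoint (M), which
# direction did they go (D), and did they pick the extreme option (E)?
# None means the node is not reached (treated as missing when fitting).
MAPPING = {1: (0, 0, 1),
           2: (0, 0, 0),
           3: (1, None, None),
           4: (0, 1, 0),
           5: (0, 1, 1)}   # response -> (M, D, E)

responses = [3, 5, 2, 4, 1]
pseudo = np.array([[np.nan if v is None else float(v) for v in MAPPING[r]]
                   for r in responses])
print(pseudo)  # each column is then modeled with its own binary IRT model
```

The M and E nodes capture midpoint and extreme response styles, while the D node carries the trait content; this separation is what lets tree models parse RSs from substantive responding.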
Affiliation(s)
- Zhaojun Li
- Department of Psychology, The Ohio State University, Columbus, OH, USA
- Lingyue Li
- Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Bo Zhang
- Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- School of Labor and Employment Relations, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Louis Tay
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
5
Zheng C, Liu J, Li Y, Xu P, Zhang B, Wei R, Zhang W, Liu B, Huang J. A 2PLM-RANK multidimensional forced-choice model and its fast estimation algorithm. Behav Res Methods 2024; 56:6363-6388. PMID: 38409459. DOI: 10.3758/s13428-023-02315-x.
Abstract
High-stakes non-cognitive tests frequently employ forced-choice (FC) scales to deter faking. Many scoring models have been devised to mitigate the resulting issue of score ipsativity. Among them, the multi-unidimensional pairwise preference (MUPP) framework is highly flexible and commonly used. However, the original MUPP model was developed for an unfolding response process and can handle only paired comparisons. The present study proposes the 2PLM-RANK, a generalization of the MUPP model that accommodates dominance responses in the RANK format. In addition, an improved stochastic EM (iStEM) algorithm is devised for more stable and efficient parameter estimation. Simulation results generally supported the efficiency and utility of the new algorithm in estimating the 2PLM-RANK when applied to both triplets and tetrads across various conditions. An empirical illustration with responses to a 24-dimensional personality test further supported the practicality of the proposed model. To further aid application of the new model, a user-friendly R package is also provided.
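The paper's exact likelihood is not reproduced here, but a Plackett-Luce-style sequential choice over 2PL endorsement probabilities is one plausible way to think about a dominance RANK model. The Python sketch below uses invented parameters and should be read as an illustration of the idea, not as the 2PLM-RANK itself:

```python
import numpy as np
from itertools import permutations

def p_endorse(theta, a, b):
    """2PL probability of endorsing each statement on its own."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def rank_probability(order, p):
    """Plackett-Luce: pick the next-ranked statement from those remaining
    with probability proportional to its endorsement probability."""
    remaining, prob = dict(enumerate(p)), 1.0
    for i in order:
        prob *= remaining[i] / sum(remaining.values())
        del remaining[i]
    return prob

# Illustrative triplet: three statements measuring three different traits.
theta = np.array([0.5, -1.0, 1.2])          # respondent's trait levels
a = np.array([1.2, 0.8, 1.5])               # discriminations (assumed)
b = np.array([0.0, -0.5, 0.3])              # locations (assumed)
p = p_endorse(theta, a, b)

probs = {order: rank_probability(order, p)
         for order in permutations(range(3))}
print(max(probs, key=probs.get))            # most likely full ranking
print(round(sum(probs.values()), 10))       # sanity check: sums to 1.0
```

Estimating such a model by marginal maximum likelihood is expensive because the latent traits are multidimensional, which is the bottleneck the paper's iStEM algorithm targets.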
Affiliation(s)
- Chanjin Zheng
- Department of Educational Psychology, Faculty of Education, East China Normal University, Shanghai, China
- Juan Liu
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Yaling Li
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Peiyi Xu
- Department of Educational Psychology, Faculty of Education, East China Normal University, Shanghai, China
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Bo Zhang
- School of Labor and Employment Relations and Department of Psychology, University of Illinois Urbana-Champaign, Champaign, USA
- Ran Wei
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Wenqing Zhang
- Department of Educational Psychology, Faculty of Education, East China Normal University, Shanghai, China
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Boyang Liu
- Beijing Insight Online Management Consulting Co., Ltd., Beijing, China
- Jing Huang
- Educational Psychology and Research Methodology, Purdue University, West Lafayette, IN, USA