1. Barron LG, Aguilar ID, Rose MR, Carretta TR. Development of a situational judgment test to supplement current US Air Force measures of officership. Military Psychology 2024;36:33-48. PMID: 38193873; PMCID: PMC10790796; DOI: 10.1080/08995605.2021.1997500.
Abstract
Aptitude requirements for US Air Force officer commissioning include completion of a college degree and minimum scores on the Air Force Officer Qualifying Test (AFOQT) Verbal and Quantitative composites. Although the AFOQT has demonstrated predictive validity for officer training, the Air Force has striven to improve predictive validity and diversity. To this end, a situational judgment test (SJT) was added to the AFOQT in 2015. SJT development was consistent with recommendations to broaden the competencies assessed by the AFOQT, with the goal of providing incremental validity while reducing adverse impact for historically underrepresented groups. To ensure content validity and realism, SJT development was based on competencies identified in a large-scale analysis of officership and on input from junior officers in scenario and response generation and scoring. Psychometric evaluations have affirmed its potential benefits for inclusion on the AFOQT. An initial study showed the SJT to be perceived as highly face valid regardless of whether it was presented as a paper-and-pencil test (with narrative or scripted scenarios) or in a video-based format. Preliminary studies demonstrated criterion-related validity within small USAF samples and a larger Army cadet sample. Additionally, operational administration of the SJT since 2015 has demonstrated its potential for improving diversity (i.e., reduced adverse impact relative to the AFOQT Verbal and Quantitative composites). Predictive validation studies with larger Air Force officer accession samples are ongoing to assess the incremental validity of the SJT beyond current AFOQT composites for predicting important outcomes across accession sources.
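The adverse impact comparison described above is conventionally screened with the four-fifths (80%) rule: a selection procedure is flagged when a subgroup's selection rate falls below 80% of the highest group's rate. A minimal sketch in Python; the selection rates are purely illustrative and not figures from the study:

```python
def adverse_impact_ratio(rate_focal: float, rate_reference: float) -> float:
    """Selection-rate ratio used in the four-fifths (80%) rule:
    values below 0.8 flag potential adverse impact."""
    return rate_focal / rate_reference

# Illustrative (hypothetical) selection rates for two subgroups:
ratio = adverse_impact_ratio(rate_focal=0.30, rate_reference=0.50)
flagged = ratio < 0.8  # potential adverse impact under the rule
```

A predictor with smaller subgroup score differences, such as the SJT relative to the Verbal and Quantitative composites, yields ratios closer to 1.0 at any given cut score.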
Affiliation(s)
- Mark R. Rose
- Air Education and Training Command, San Antonio, Texas, USA
2. Allen M, Russell T, Ford L, Carretta T, Lee A, Kirkendall C. Identification and evaluation of criterion measurement methods. Military Psychology 2023;35:308-320. PMID: 37352453; PMCID: PMC10291913; DOI: 10.1080/08995605.2022.2050165.
Abstract
Criterion measures vary greatly in terms of their psychometric quality and ease of use. This paper serves two purposes. First, it provides a general summary of different approaches to criterion measurement in a military context. Second, it provides an extensive review of 16 specific types of criterion measurement methods (e.g., job performance rating scales, self-report questionnaires, job knowledge tests) on nine psychometric and ease-of-use evaluation factors. Eight criterion measurement experts read a summary of extant research and made ratings to evaluate each measurement method on the evaluation factors. Rater intra-class correlations (ICCs) were high, ranging from .75 to .95 across the evaluation dimensions with a median of .91. Data showed a quality-feasibility tradeoff, where criterion data that are easy to obtain often have technical flaws. Recommendations for military services and future directions in criterion measurement (e.g., applications of machine learning) are discussed.
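The inter-rater agreement statistic reported above, the intraclass correlation, can be computed from a two-way mean-squares decomposition. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the ratings are hypothetical, not the paper's data:

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows: one row of k rater scores per rated target."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # targets
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores from 3 raters evaluating 4 measurement methods:
ratings = [[4.0, 4.1, 3.9],
           [2.0, 2.2, 1.8],
           [5.0, 4.8, 5.2],
           [3.0, 3.1, 2.9]]
icc = icc2_1(ratings)  # near 1: raters order the methods almost identically
```

Values in the paper's reported .75-.95 range indicate that most rating variance is attributable to real differences among the rated methods rather than to rater disagreement.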
Affiliation(s)
- Matthew Allen
- Human Resources Research Organization, Alexandria, Virginia, USA
- Laura Ford
- Human Resources Research Organization, Alexandria, Virginia, USA
- Angela Lee
- Human Resources Research Organization, Alexandria, Virginia, USA
- Cristina Kirkendall
- U.S. Army Research Institute for the Behavioral and Social Sciences, Fort Belvoir, Virginia, USA
3. Gebhardt DL, Baker TA. Designing criterion measures for physically demanding jobs. Military Psychology 2023;35:335-350. PMID: 37352446; PMCID: PMC10291931; DOI: 10.1080/08995605.2022.2063008.
Abstract
The Bureau of Labor Statistics reported that 39.1% of the civilian workforce in the United States performs physically demanding jobs that require lifting, carrying, pushing/pulling, kneeling, stooping, crawling, and climbing in varied environmental conditions. Many United States military occupations are similar to civilian-sector jobs involving equipment installation, emergency rescue, and maintenance, in addition to combat arms occupations. This article provides an overview of the types of criterion measures used to assess the physical domain and of approaches for designing and evaluating those criteria. It also includes a method for generating criterion measures that are applicable across multiple jobs.
Affiliation(s)
- Todd A. Baker
- Human Resources Research Organization, Alexandria, VA, USA
4. Fikrat‐Wevers S, Stegers‐Jager K, Groenier M, Koster A, Ravesloot JH, Van Gestel R, Wouters A, van den Broek W, Woltman A. Applicant perceptions of selection methods for health professions education: Rationales and subgroup differences. Medical Education 2023;57:170-185. PMID: 36215062; PMCID: PMC10092456; DOI: 10.1111/medu.14949.
Abstract
CONTEXT Applicant perceptions of selection methods can affect motivation, performance and withdrawal and may therefore be of relevance in the context of widening access. However, it is unknown how applicant subgroups perceive different selection methods. OBJECTIVES Using organisational justice theory, the present multi-site study examined applicant perceptions of various selection methods, the rationales behind those perceptions and subgroup differences. METHODS Applicants to five Dutch undergraduate health professions programmes (N = 704) completed an online survey including demographics and a questionnaire on applicant perceptions applied to 11 commonly used selection methods. Applicants rated general favourability and justice dimensions (7-point Likert scale) and could add comments for each method. RESULTS Descriptive statistics revealed a preference for selection methods on which applicants feel more 'in control': general favourability ratings were highest for curriculum-sampling tests (mean [M] = 5.32) and skills tests (M = 5.13), while weighted lottery (M = 3.05) and unweighted lottery (M = 2.97) were perceived least favourably. Additionally, applicants preferred to distinguish themselves on methods that assess attributes beyond cognitive abilities. Qualitative content analysis of comments revealed several conflicting preferences, including a desire for multiple selection methods versus concerns about experiencing too much stress. Results from a linear mixed model of general favourability indicated some small subgroup differences in perceptions (based on gender, migration background, prior education and parental education), but the practical meaning of these differences was negligible. Nevertheless, concerns were expressed that certain selection methods can hinder equitable admission owing to unequal access to resources.
CONCLUSIONS Our findings illustrate that applicants desire to demonstrate a variety of attributes on a combination of selection tools, but also that this desire can entail drawbacks. The present study can help programmes decide which selection methods to include, which more negatively perceived methods should be better justified to applicants, and how to adapt methods to meet applicants' needs.
Affiliation(s)
- Suzanne Fikrat‐Wevers
- Institute of Medical Education Research Rotterdam, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Karen Stegers‐Jager
- Institute of Medical Education Research Rotterdam, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marleen Groenier
- Technical Medical Centre, Technical Medicine, University of Twente, Enschede, The Netherlands
- Andries Koster
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
- Jan Hindrik Ravesloot
- Department of Medical Biology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Renske Van Gestel
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands
- Anouk Wouters
- Faculty of Medicine VU, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- LEARN! Research Institute for Learning and Education, Faculty of Psychology and Education, VU University Amsterdam, Amsterdam, The Netherlands
- Walter van den Broek
- Institute of Medical Education Research Rotterdam, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Andrea Woltman
- Institute of Medical Education Research Rotterdam, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
5. Leutner F, Codreanu SC, Brink S, Bitsakis T. Game based assessments of cognitive ability in recruitment: Validity, fairness and test-taking experience. Frontiers in Psychology 2023;13:942662. PMID: 36743642; PMCID: PMC9891208; DOI: 10.3389/fpsyg.2022.942662.
Abstract
Gamification and machine learning are emergent technologies in recruitment, promising to improve the user experience and fairness of assessments. We test this by validating a game based assessment of cognitive ability with a machine learning based scoring algorithm optimised for validity and fairness. We use applied data from 11,574 assessment completions. The assessment has convergent validity (r = 0.5) and test-retest reliability (r = 0.68). It maintains fairness in a separate sample of 3,107 job applicants, showing that fairness-optimised machine learning can improve outcome parity issues with cognitive ability tests in recruitment settings. We show that there are no significant gender differences in test-taking anxiety resulting from the games, and that anxiety does not directly predict game performance, supporting the notion that game based assessments help with test-taking anxiety. Interactions between anxiety, gender and performance are explored. Feedback from 4,778 job applicants reveals a Net Promoter Score of 58, indicating that more applicants support than dislike the assessment and that games deliver a positive applicant experience in practice. Satisfaction with the format is high, but applicants raise face validity concerns over the abstract games. We encourage the use of gamification and machine learning to improve the fairness and user experience of psychometric tests.
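The Net Promoter Score cited above is a simple arithmetic summary of 0-10 recommendation ratings: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A sketch with made-up ratings (the study's raw feedback data are not reproduced here):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6),
    computed from 0-10 'would you recommend?' ratings."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return round(100 * (promoters - detractors) / len(ratings))

# Made-up applicant feedback ratings (illustrative only):
nps = net_promoter_score([10, 9, 9, 8, 7, 10, 6, 9, 10, 3])  # -> 40
```

A score of 58, as reported, means promoters outnumber detractors by 58 percentage points; passives (7-8) dilute the score without entering the numerator.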
Affiliation(s)
- Franziska Leutner
- Institute of Management Studies, Goldsmiths, University of London, London, United Kingdom; HireVue, Inc., Salt Lake City, UT, United States
6. Baguley T, Dunham G, Steer O. Statistical modelling of vignette data in psychology. British Journal of Psychology 2022;113:1143-1163. PMID: 35735658; PMCID: PMC9796090; DOI: 10.1111/bjop.12577.
Abstract
Vignette methods are widely used in psychology and the social sciences to obtain responses to multi-dimensional scenarios or situations. Where quantitative data are collected, this presents challenges for the selection of an appropriate statistical model, which depends on subtle details of the design and of the allocation of vignettes to participants. A key distinction is between factorial survey experiments, where each participant receives a different allocation of vignettes from the full universe of possible vignettes, and experimental vignette studies, where this restriction is relaxed. The former leads to nested designs with a single random factor and the latter to designs with two crossed random factors. In addition, the allocation of vignettes to participants may lead to fractional or unbalanced designs and a consequent loss of efficiency or aliasing of the effects of interest. Many vignette studies (including some factorial survey experiments) include unmodelled heterogeneity between vignettes, leading to potentially serious problems if traditional regression approaches are adopted. These issues are reviewed, and recommendations are made for the efficient design of vignette studies, including the allocation of vignettes to participants. Multilevel models are proposed as a general approach to handling nested and crossed designs, including unbalanced and fractional designs. This is illustrated with a small vignette data set looking at judgements of online and offline bullying and harassment.
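The allocation scheme described above (each participant drawing a deck of vignettes from the full universe of dimension-level combinations) can be sketched as follows; the dimensions and levels are invented for illustration and are not those of the bullying/harassment data set:

```python
import itertools
import random

# Hypothetical vignette dimensions (invented for illustration):
dimensions = {
    "setting":  ["online", "offline"],
    "severity": ["mild", "moderate", "severe"],
    "actor":    ["peer", "stranger"],
}
# The vignette universe is every combination of dimension levels: 2*3*2 = 12.
universe = list(itertools.product(*dimensions.values()))

def allocate_decks(universe, n_participants, deck_size, seed=0):
    """Fractional factorial-survey allocation: each participant receives a
    random deck of distinct vignettes rather than the full universe."""
    rng = random.Random(seed)
    return [rng.sample(universe, deck_size) for _ in range(n_participants)]

decks = allocate_decks(universe, n_participants=5, deck_size=4)
```

Because decks differ across participants, vignette effects are confounded with participant effects unless both are modelled, which is the motivation for the crossed random factors (participant and vignette) discussed in the paper.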
7. Cullen MJ, Zhang C, Sackett PR, Thakker K, Young JQ. Can a situational judgment test identify trainees at risk of professionalism issues? A multi-institutional, prospective cohort study. Academic Medicine 2022;97:1494-1503. PMID: 35612909; DOI: 10.1097/acm.0000000000004756.
Abstract
PURPOSE To determine whether overall situational judgment test (SJT) scores are associated with programs' clinical competency committee (CCC) ratings of trainee professionalism, any concerning behavior, and concerning behavior requiring active remediation at 2 time periods. METHOD In fall 2019, trainees from 17 U.S. programs (16 residency, 1 fellowship) took an online 15-scenario SJT developed to measure 7 dimensions of professionalism. CCC midyear and year-end (6 months and 1 year following SJT completion, respectively) professionalism scores and concern ratings were gathered for academic year 2019-2020. Analyses were conducted to determine whether overall SJT scores related to overall professionalism ratings, trainees displaying any concerns, and trainees requiring active remediation at both time periods. RESULTS Overall SJT scores correlated positively with midyear and year-end overall professionalism ratings (r = .21 and .14, P < .001 and P = .03, respectively). Holding gender and race/ethnicity constant, a 1 standard deviation (SD) increase in overall SJT score was associated with a .20 SD increase in overall professionalism ratings at midyear (P = .005) and a .22 SD increase at year-end (P = .001). Holding gender and race/ethnicity constant, a 1 SD increase in overall SJT score decreased the odds of a trainee displaying any concerns by 37% (odds ratio [OR] 95% confidence interval [CI]: [.44, .87], P = .006) at midyear and by 34% (OR 95% CI: [.46, .95], P = .025) at year-end, and decreased the odds of a trainee requiring active remediation by 51% (OR 95% CI: [.25, .90], P = .02) at midyear. CONCLUSIONS Overall SJT scores correlated positively with midyear and year-end overall professionalism ratings and were associated with whether trainees exhibited any concerning behavior at midyear and year-end and whether trainees needed active remediation at midyear. Future research should investigate whether other potential professionalism measures are associated with concerning trainee behavior.
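The percent reductions in odds reported above follow directly from the odds ratios: a one-SD increase in SJT score multiplies the odds of a concern by OR, so the percent change is (OR - 1) x 100. A small helper; the OR value below is illustrative, consistent with but not taken from the paper's tables:

```python
def pct_change_in_odds(odds_ratio: float) -> float:
    """Percent change in the odds of an outcome per 1-SD increase in the
    predictor, given the odds ratio (OR) from a logistic regression."""
    return (odds_ratio - 1.0) * 100.0

# An OR of 0.63 (hypothetical point estimate inside the reported
# CI of [.44, .87]) corresponds to about a 37% reduction in odds:
change = pct_change_in_odds(0.63)
```

Note that a change in odds is not a change in probability; the two diverge as the baseline rate of concerning behavior grows.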
Affiliation(s)
- Michael J Cullen
- M.J. Cullen is senior director of assessment, evaluation, and research for graduate medical education, University of Minnesota Medical School, Minneapolis, Minnesota; ORCID: https://orcid.org/0000-0002-4755-4276
- Charlene Zhang
- C. Zhang was a graduate student, Industrial/Organizational Psychology Program, University of Minnesota-Twin Cities, Minneapolis, Minnesota, at the time of the study. The author is now a research scientist, Amazon, Alexandria, Virginia; ORCID: http://orcid.org/0000-0001-6975-5653
- Paul R Sackett
- P.R. Sackett is professor of psychology, Industrial/Organizational Psychology Program, University of Minnesota-Twin Cities, Minneapolis, Minnesota; ORCID: http://orcid.org/0000-0001-7633-4160
- Krima Thakker
- K. Thakker is research coordinator, Zucker Hillside Hospital, Northwell Health, Glen Oaks, New York; ORCID: https://orcid.org/0000-0002-1737-2113
- John Q Young
- J.Q. Young is professor and chair, Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, and senior vice president, behavioral health, Northwell Health, Lake Success, New York; ORCID: https://orcid.org/0000-0003-2219-5657
8. Golubovich J, Ryan AM. Implications of diversity cues in recruitment and assessment materials: Reactions and performance. International Journal of Selection and Assessment 2022. DOI: 10.1111/ijsa.12401.
9. Reiser S, Schacht L, Thomm E, Figalist C, Janssen L, Schick K, Dörfler E, Berberat PO, Gartmeier M, Bauer J. A video-based situational judgement test of medical students' communication competence in patient encounters: Development and first evaluation. Patient Education and Counseling 2022;105:1283-1289. PMID: 34481676; DOI: 10.1016/j.pec.2021.08.020.
Abstract
OBJECTIVE We developed and evaluated the Video-Based Assessment of Medical Communication Competence (VA-MeCo), a construct-driven situational judgement test measuring medical students' communication competence in patient encounters. METHODS In the construction phase, we conducted two expert studies (panel 1: n = 6; panel 2: n = 13) to ensure curricular and content validity and sufficient expert agreement on the answer key. In the evaluation phase, we conducted a cognitive pre-test (n = 12) and a pilot study (n = 117) with medical students to evaluate test usability and acceptance, item statistics, and test reliability depending on the applied scoring method (raw consensus vs. pairwise comparison scoring). RESULTS The results of the expert interviews indicated good curricular and content validity. Expert agreement on the answer key was high (ICCs > .86). The pilot study showed favourable usability and acceptance by students. Irrespective of the scoring method, reliability was high for the complete test (Cronbach's α > .93) and its subscales (α > .83). CONCLUSION There is promising evidence that medical communication competence can be validly and reliably measured using a construct-driven and video-based situational judgement test. PRACTICE IMPLICATIONS Video-based SJTs allow efficient online assessment of medical communication competence and are well accepted by students and educators.
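Raw consensus scoring, one of the two scoring methods compared above, typically credits a respondent in proportion to the share of the expert panel that endorsed the chosen option. A minimal sketch under that interpretation, with a hypothetical expert answer key (not VA-MeCo content):

```python
from collections import Counter

def raw_consensus_score(expert_choices, respondent_choice):
    """Raw consensus scoring: credit equals the proportion of the expert
    panel that endorsed the option the respondent chose."""
    counts = Counter(expert_choices)
    return counts[respondent_choice] / len(expert_choices)

# Hypothetical expert answer key for one video scenario (options A-D):
experts = ["B", "B", "B", "A", "B", "C"]
score = raw_consensus_score(experts, "B")  # 4 of 6 experts chose B
```

Pairwise comparison scoring instead derives option weights from experts' judgements of options relative to one another; the paper found reliability high under both approaches.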
Affiliation(s)
- Sabine Reiser
- University of Erfurt, Educational Research and Methodology, Erfurt, Germany
- Laura Schacht
- University of Erfurt, Educational Research and Methodology, Erfurt, Germany
- Eva Thomm
- University of Erfurt, Educational Research and Methodology, Erfurt, Germany
- Christina Figalist
- Technical University of Munich, TUM School of Medicine, TUM Medical Education Center, Munich, Germany
- Laura Janssen
- Technical University of Munich, TUM School of Medicine, TUM Medical Education Center, Munich, Germany
- Kristina Schick
- Technical University of Munich, TUM School of Medicine, TUM Medical Education Center, Munich, Germany
- Eva Dörfler
- Technical University of Munich, ProLehre | Media and Didactics, Munich, Germany
- Pascal O Berberat
- Technical University of Munich, TUM School of Medicine, TUM Medical Education Center, Munich, Germany
- Martin Gartmeier
- Technical University of Munich, TUM School of Medicine, TUM Medical Education Center, Munich, Germany
- Johannes Bauer
- University of Erfurt, Educational Research and Methodology, Erfurt, Germany
10. Brown MI, Speer AB, Tenbrink AP, Chabris CF. Using game-like animations of geometric shapes to simulate social interactions: An evaluation of group score differences. International Journal of Selection and Assessment 2022;30:167-181. PMID: 35935096; PMCID: PMC9355331; DOI: 10.1111/ijsa.12375.
Abstract
This study introduces a novel, game-like method for measuring social intelligence: the Social Shapes Test. Unlike other existing video- or game-based tests, the Shapes Test uses animations of abstract shapes to represent social interactions. We explore demographic differences in Shapes Test scores compared with a written situational judgment test (SJT). Gender and race/ethnicity had meaningful effects only on written SJT scores, while no effects were found for Shapes Test scores. This pattern of results remained after controlling for general mental ability and English language exposure. We also found metric invariance between demographic groups for both tests. Our results demonstrate the potential for using animated shape tasks as an alternative to written SJTs when designing future game-based assessments.
Affiliation(s)
- Matt I. Brown
- Geisinger Health System, Autism and Developmental Medicine Institute, Lewisburg, PA
- Andrew B. Speer
- Wayne State University, Department of Psychology, Detroit, MI
11. Wu FY, Mulfinger E, Alexander L, Sinclair AL, McCloy RA, Oswald FL. Individual differences at play: An investigation into measuring Big Five personality facets with game-based assessments. International Journal of Selection and Assessment 2021. DOI: 10.1111/ijsa.12360.
Affiliation(s)
- Felix Y. Wu
- Department of Psychological Sciences, Rice University, Houston, Texas, USA
- Evan Mulfinger
- Department of Psychological Sciences, Rice University, Houston, Texas, USA
- Leo Alexander
- Department of Psychological Sciences, Rice University, Houston, Texas, USA
- Rodney A. McCloy
- Human Resources Research Organization (HumRRO), Alexandria, Virginia, USA
12. Georgiou K. Can explanations improve applicant reactions towards gamified assessment methods? International Journal of Selection and Assessment 2021. DOI: 10.1111/ijsa.12329.
Affiliation(s)
- Konstantina Georgiou
- Department of Management Science and Technology, School of Business, Athens University of Economics and Business, Athens, Greece
13. Bardach L, Rushby JV, Klassen RM. The selection gap in teacher education: Adverse effects of ethnicity, gender, and socio-economic status on situational judgement test performance. British Journal of Educational Psychology 2021;91:1015-1034. PMID: 33501677; PMCID: PMC8451885; DOI: 10.1111/bjep.12405.
Abstract
Background: Situational judgement tests (SJTs) measure non-cognitive attributes and have recently drawn attention as a selection method for initial teacher education programmes. To date, very little is known about adverse impact in teacher selection SJT performance. Aims: This study aimed to shed light on adverse effects of gender, ethnicity, and socio-economic status (SES) on SJT scores, by exploring both main effects and interactions, and considering both overall SJT performance and separate SJT domain scores (mindset, emotion regulation, and conscientiousness). Sample: A total of 2,808 prospective teachers from the United Kingdom completed the SJTs as part of the initial stage of selection into a teacher education programme. Methods: In addition to SJT scores, the variables gender (female vs. male), ethnicity (majority group vs. minority group), and home SES background (higher vs. lower SES) were used in the analyses. Regression models and moderated regression models were employed. Results and conclusions: Results from the regression models revealed that gender effects (females scoring higher than males) were restricted to emotion regulation, while ethnicity effects (ethnic majority group members scoring higher than ethnic minority group members) emerged for SJT overall scores and all three domains. Moderated regression modelling results furthermore showed significant gender-by-ethnicity interactions for SJT overall scores and two domains. Considering the importance of reducing subgroup differences in selection test scores to ensure equal access to teacher education, this study's findings are a critical contribution. The partially differentiated results for overall vs. domain-specific scores point towards the promise of applying a domain-level perspective in research on teacher selection SJTs.
Affiliation(s)
- Lisa Bardach
- University of Tübingen, Hector Research Institute of Education Sciences and Psychology, Germany
14. Liu Y, Hau KT. Measuring motivation to take low-stakes large-scale test: New model based on analyses of "participant-own-defined" missingness. Educational and Psychological Measurement 2020;80:1115-1144. PMID: 33116329; PMCID: PMC7565118; DOI: 10.1177/0013164420911972.
Abstract
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models, and traditional approaches based on questionnaires and item response theory have their limitations. In the present research, we proposed a new approach that directly uses "participant-own-defined" missing item information (user missingness) in a zero-inflated Poisson model. An empirical study using the PISA 2015 data (eight representative economies in two cultures) and a simulation study were conducted to validate the new approach. Results indicated that our model could successfully capture test-taking motivation. We also found that the Confucian-culture students had lower user missingness, irrespective of item positions, as compared with their Western counterparts.
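The zero-inflated Poisson model named above mixes a point mass at zero with an ordinary Poisson count process. A sketch of its probability mass function with hypothetical parameter values (not estimates from the paper):

```python
import math

def zip_pmf(k: int, lam: float, pi: float) -> float:
    """P(count = k) under a zero-inflated Poisson: with probability `pi`
    the count is a structural zero, otherwise it is Poisson(`lam`)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson

# Hypothetical parameters for per-student skipped-item counts:
# pi = 0.2 share of structural zeros, mean of 1.5 skips otherwise.
p_zero = zip_pmf(0, lam=1.5, pi=0.2)  # inflated relative to plain Poisson
```

The zero-inflation component absorbs the excess of students who skip nothing, so the Poisson rate can cleanly index the skipping behaviour that the authors treat as a motivation signal.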
Affiliation(s)
- Yuan Liu
- Southwest University, Chongqing, People’s Republic of China
- Kit-Tai Hau
- The Chinese University of Hong Kong, Hong Kong, People’s Republic of China
15. Lievens F, Sackett PR, Zhang C. Personnel selection: a longstanding story of impact at the individual, firm, and societal level. European Journal of Work and Organizational Psychology 2020. DOI: 10.1080/1359432x.2020.1849386.
Affiliation(s)
- Filip Lievens
- Lee Kong Chian School of Business, Singapore Management University, Singapore, Singapore
- Paul R. Sackett
- Department of Psychology, University of Minnesota–Twin Cities, Minneapolis, MN, USA
- Charlene Zhang
- Department of Psychology, University of Minnesota–Twin Cities, Minneapolis, MN, USA
16. Arthur W, Keiser NL, Atoba OA, Cho I, Edwards BD. Does the use of alternative predictor methods reduce subgroup differences? It depends on the construct. Human Resource Management 2020. DOI: 10.1002/hrm.22027.
Affiliation(s)
- Winfred Arthur
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Nathanael L. Keiser
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Olabisi A. Atoba
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Inchul Cho
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA
- Bryan D. Edwards
- Department of Management, Spears School of Business, Oklahoma State University, Stillwater, Oklahoma, USA
17. Weiner EJ, Sanchez DR. Cognitive ability in virtual reality: Validity evidence for VR game-based assessments. International Journal of Selection and Assessment 2020. DOI: 10.1111/ijsa.12295.
Affiliation(s)
- Diana R. Sanchez
- Psychology Department, San Francisco State University, San Francisco, CA, USA
18. Gkorezis P, Georgiou K, Nikolaou I, Kyriazati A. Gamified or traditional situational judgement test? A moderated mediation model of recommendation intentions via organizational attractiveness. European Journal of Work and Organizational Psychology 2020. DOI: 10.1080/1359432x.2020.1746827.
Affiliation(s)
- Panagiotis Gkorezis
- Assistant Professor of Management, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Konstantina Georgiou
- Post-doctoral Fellow, Athens University of Economics and Business, Athens, Greece
- Ioannis Nikolaou
- Associate Professor of Organizational Psychology, Athens University of Economics and Business, Athens, Greece
- Anna Kyriazati
- Post-graduate Student, Athens University of Economics and Business, Athens, Greece
19. Bardach L, Rushby JV, Kim LE, Klassen RM. Using video- and text-based situational judgement tests for teacher selection: a quasi-experiment exploring the relations between test format, subgroup differences, and applicant reactions. European Journal of Work and Organizational Psychology 2020. DOI: 10.1080/1359432x.2020.1736619.
Affiliation(s)
- Lisa Bardach, Department of Education, University of York, York, UK
- Lisa E. Kim, Department of Education, University of York, York, UK
20
Landers RN, Auer EM, Abraham JD. Gamifying a situational judgment test with immersion and control game elements. JOURNAL OF MANAGERIAL PSYCHOLOGY 2020. [DOI: 10.1108/jmp-10-2018-0446] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
Assessment gamification, which refers to the addition of game elements to existing assessments, is commonly implemented to improve applicant reactions to existing psychometric measures. This study aims to understand the effects of gamification on applicant reactions to, and the measurement quality of, situational judgment tests.
Design/methodology/approach
In a 2 × 4 between-subjects experiment, this study randomly assigned 315 people to experience different versions of a gamified situational judgment test, crossing immersive game elements (text, audio, still pictures, video) with control game elements (high and low), measuring applicant reactions and assessing differences in convergent validity between conditions.
Findings
The use of immersive game elements improved perceptions of organizational technological sophistication, but no other reaction outcomes (test attitudes, procedural justice, organizational attractiveness). Convergent validity with cognitive ability was not affected by gamification.
Originality/value
This is the first study to experimentally examine applicant reactions to, and the measurement quality of, SJTs based upon the implementation of specific game elements. It demonstrates that small-scale efforts to gamify assessments are likely to lead to only small-scale gains. However, it also demonstrates that such modifications can be made without harming the measurement qualities of the test, making gamification a potentially useful marketing tool for assessment specialists. Thus, this study concludes that utility should be considered carefully and explicitly for any attempt to gamify assessment.
21
Noethen D, Alcazar R. Experimental research in expatriation and its challenges: A literature review and recommendations. GERMAN JOURNAL OF HUMAN RESOURCE MANAGEMENT-ZEITSCHRIFT FUR PERSONALFORSCHUNG 2020. [DOI: 10.1177/2397002220908424] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Via a systematic literature review, this article draws attention to the alarming scarcity of experimental studies and the ensuing shortage of evidence for causality in the field of expatriate management. Only 17 articles, published over more than 20 years, could be identified that utilize randomized experiments or quasi-experiments on topics of expatriation. Moreover, these articles show specific patterns, such as dealing exclusively with pre-departure and on-assignment issues, or, in their majority, sampling individuals who interact with expatriates rather than expatriates themselves. This lack of experimental studies is problematic, as it is difficult to establish causality between different variables without conducting experimental studies. Yet many critical issues in expatriation are precisely questions of causality. Hence, in this article, we provide resources to help move the expatriation field toward a more balanced use of different research methodologies and, thus, a greater understanding of the many relationships uncovered in past research. First, we identify four main challenges unique to conducting experimental research in the context of expatriation: challenging data access, global sample dispersion, restricted manipulability of variables, and cultural boundedness of constructs and interpretations. Second, we provide strategies to overcome these challenges, based on studies included in the review as well as on ideas from neighboring fields such as cross-cultural psychology. The article concludes with a discussion of how experimental research can take the field of expatriation forward and improve the decision-making process of practitioners managing international assignees.
22
Schäpers P, Lievens F, Freudenstein J, Hüffmeier J, König CJ, Krumm S. Removing situation descriptions from situational judgment test items: Does the impact differ for video‐based versus text‐based formats? JOURNAL OF OCCUPATIONAL AND ORGANIZATIONAL PSYCHOLOGY 2019. [DOI: 10.1111/joop.12297] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Philipp Schäpers, Lee Kong Chian School of Business, Singapore Management University, Singapore
- Filip Lievens, Lee Kong Chian School of Business, Singapore Management University, Singapore
- Stefan Krumm, Institute of Psychology, Freie Universität Berlin, Germany
23
Herde CN, Lievens F, Jackson DJR, Shalfrooshan A, Roth PL. Subgroup differences in situational judgment test scores: Evidence from large applicant samples. INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT 2019. [DOI: 10.1111/ijsa.12269] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Christoph N. Herde, Department of Personnel Management, Work and Organizational Psychology, Ghent University, Ghent, Belgium
- Filip Lievens, Lee Kong Chian School of Business, Singapore Management University, Singapore
- Duncan J. R. Jackson, King’s Business School, King’s College London, London, UK; Department of Industrial Psychology, University of the Western Cape, Bellville, South Africa
- Philip L. Roth, Department of Management, College of Business, Clemson University, Clemson, South Carolina
24
Simmering VR, Ou L, Bolsinova M. What Technology Can and Cannot Do to Support Assessment of Non-cognitive Skills. Front Psychol 2019; 10:2168. [PMID: 31607992 PMCID: PMC6774043 DOI: 10.3389/fpsyg.2019.02168] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 09/09/2019] [Indexed: 12/03/2022] Open
Abstract
Advances in technology hold great promise for expanding what assessments may achieve across domains. We focus on non-cognitive skills as our domain, but lessons can be extended to other domains for both the advantages and drawbacks of new technological approaches for different types of assessments. We first briefly review the limitations of traditional assessments of non-cognitive skills. Next, we discuss specific examples of technological advances, considering whether and how they can address such limitations, followed by remaining and new challenges introduced by incorporating technology into non-cognitive assessments. We conclude by noting that technology will not always improve assessments over traditional methods and that careful consideration must be given to the advantages and limitations of each type of assessment relative to the goals and needs of the assessor. The domain of non-cognitive assessments in particular remains limited by a lack of agreement and clarity on some constructs and their relations to observable behavior (e.g., self-control versus -regulation versus -discipline), and these theoretical limitations must be overcome to realize the full benefit of incorporating technology into assessments.
Affiliation(s)
- Lu Ou, ACTNext by ACT, Inc., Iowa City, IA, United States
25
Ryan AM, Derous E. The Unrealized Potential of Technology in Selection Assessment. REVISTA DE PSICOLOGÍA DEL TRABAJO Y DE LAS ORGANIZACIONES 2019. [DOI: 10.5093/jwop2019a10] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
26
Hiemstra AMF, Oostrom JK, Derous E, Serlie AW, Born MP. Applicant Perceptions of Initial Job Candidate Screening With Asynchronous Job Interviews. JOURNAL OF PERSONNEL PSYCHOLOGY 2019. [DOI: 10.1027/1866-5888/a000230] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Applicant fairness perceptions of asynchronous job interviews were assessed among panelists (Study 1, N = 160) and highly educated actual applicants (Study 2, N = 103). Furthermore, we examined whether personality explained applicants’ perceptions. Participants, particularly actual applicants, had negative perceptions of the fairness and procedural justice of asynchronous job interviews. Extraverted applicants perceived more opportunity to perform with the asynchronous job interview than introverts. A trait interaction between Neuroticism and Extraversion was tested, but no significant results were found. Although the first selection stage is increasingly digitized, this study shows that applicant perceptions of asynchronous job interviews are relatively negative. The influence of personality on these perceptions appears to be limited.
Affiliation(s)
- Annemarie M. F. Hiemstra, Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, The Netherlands
- Janneke K. Oostrom, Department of Management and Organization, Vrije Universiteit Amsterdam, The Netherlands
- Eva Derous, Department of Personnel Management, Work and Organizational Psychology, Ghent University, Ghent, Belgium
- Alec W. Serlie, Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, The Netherlands; GITP, Utrecht, The Netherlands
- Marise Ph. Born, Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, The Netherlands
27
Kaminski K, Felfe J, Schäpers P, Krumm S. A closer look at response options: Is judgment in situational judgment tests a function of the desirability of response options? INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT 2019. [DOI: 10.1111/ijsa.12233] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Katarina Kaminski, Work & Organizational Psychology, Helmut‐Schmidt‐Universität Hamburg, Hamburg, Germany
- Jörg Felfe, Work & Organizational Psychology, Helmut‐Schmidt‐Universität Hamburg, Hamburg, Germany
- Philipp Schäpers, Lee Kong Chian School of Business, Singapore Management University, Singapore
- Stefan Krumm, Institute of Psychology, Freie Universität Berlin, Berlin, Germany
28
Olaru G, Burrus J, MacCann C, Zaromb FM, Wilhelm O, Roberts RD. Situational Judgment Tests as a method for measuring personality: Development and validity evidence for a test of Dependability. PLoS One 2019; 14:e0211884. [PMID: 30811463 PMCID: PMC6392235 DOI: 10.1371/journal.pone.0211884] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Accepted: 01/23/2019] [Indexed: 11/18/2022] Open
Abstract
Situational Judgment Tests (SJTs) are criterion-valid, low-fidelity measures that have gained much popularity as predictors of job performance. A broad variety of SJTs have been studied, but SJTs measuring personality are still rare. Personality traits such as Conscientiousness are valid predictors of many educational, work- and life-related outcomes, and SJTs are less prone to faking than classical self-report measures. We developed an SJT measure of Dependability, a core facet of Conscientiousness, by gathering critical incidents in semi-structured interviews using the construct definition of Dependability as a prompt. We examined the psychometric properties of the newly developed SJTs across two studies (N = 546 general population; N = 440 sales professionals). The internal validity of the SJTs was examined by correlating the SJT scores with related self-report measures of Dependability and Conscientiousness, as well as by testing the unidimensionality of the measure with CFA. Additionally, we specified a bi-factor model of SJT, self-report and behavioral checklist measures of Dependability, accounting for common and specific measurement variance. External validity was examined by correlating the SJT scale and specific factor with work-related outcomes. The results show that the Dependability SJTs with an expert-based scoring procedure were psychometrically sound and correlated moderately to highly with traditional self-report measures of Dependability and Conscientiousness. However, a large proportion of SJT variance cannot be accounted for by personality alone. This supports the notion that SJTs measure general domain knowledge about the effectiveness of personality-related behaviors. We conclude that SJT measures of personality can be a promising addition to classical self-report assessments and can be used in a wide variety of applications beyond measurement and selection, for instance as formative assessments of personality.
Affiliation(s)
- Jeremy Burrus, American College Testing, Iowa City, Iowa, United States of America
- Franklin M. Zaromb, National Authority for Measurement and Evaluation in Education, Ramat Gan, Israel
- Richard D. Roberts, Rad Science Solution, Philadelphia, Pennsylvania, United States of America
29
Herde CN, Lievens F, Solberg EG, Harbaugh JL, Strong MH, Burkholder GJ. Situational Judgment Tests as Measures of 21st Century Skills: Evidence across Europe and Latin America. REVISTA DE PSICOLOGÍA DEL TRABAJO Y DE LAS ORGANIZACIONES 2019. [DOI: 10.5093/jwop2019a8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
30
De Leng WE, Stegers-Jager KM, Born MP, Themmen APN. Influence of response instructions and response format on applicant perceptions of a situational judgement test for medical school selection. BMC MEDICAL EDUCATION 2018; 18:282. [PMID: 30477494 PMCID: PMC6258459 DOI: 10.1186/s12909-018-1390-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Accepted: 11/14/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND This study examined the influence of two Situational Judgement Test (SJT) design features (response instructions and response format) on applicant perceptions. Additionally, we investigated demographic subgroup differences in applicant perceptions of an SJT. METHODS Medical school applicants (N = 372) responded to an online survey on applicant perceptions, including a description and two example items of an SJT. Respondents randomly received one of four SJT versions (should do-rating, should do-pick-one, would do-rating, would do-pick-one). They rated overall favourability and items on four procedural justice factors (face validity, applicant differentiation, study relatedness and chance to perform) and ease of cheating. Additionally, applicant perceptions were compared for subgroups based on gender, ethnic background and first-generation university status. RESULTS Applicants rated would-do instructions as easier to cheat than should-do instructions. Rating formats received more favourable judgements than pick-one formats on applicant differentiation, study relatedness, chance to perform and ease of cheating. No significant main effect of demographic subgroup on applicant perceptions was found, but significant interaction effects showed that certain subgroups might have more pronounced preferences for certain SJT design features. Specifically, ethnic minority applicants - but not ethnic majority applicants - showed a greater preference for should-do than would-do instructions. Additionally, first-generation university students - but not non-first-generation university students - were more favourable toward rating formats than pick-one formats. CONCLUSIONS Findings indicate that changing SJT design features may positively affect applicant perceptions by promoting procedural justice factors and reducing perceived ease of cheating, and that response instructions and response format can increase the attractiveness of SJTs for minority applicants.
Affiliation(s)
- Wendy E. De Leng, Institute of Medical Education Research Rotterdam, Erasmus MC, IMERR, Room AE-227, PO Box 2040, 3000CA Rotterdam, the Netherlands
- Karen M. Stegers-Jager, Institute of Medical Education Research Rotterdam, Erasmus MC, IMERR, Room AE-227, PO Box 2040, 3000CA Rotterdam, the Netherlands
- Marise Ph. Born, Institute of Psychology, Erasmus University Rotterdam, PO Box 1738, 3000DR Rotterdam, the Netherlands
- Axel P. N. Themmen, Institute of Medical Education Research Rotterdam, Erasmus MC, IMERR, Room AE-227, PO Box 2040, 3000CA Rotterdam, the Netherlands
31
de Leng WE, Stegers‐Jager KM, Born MP, Themmen APN. Integrity situational judgement test for medical school selection: judging 'what to do' versus 'what not to do'. MEDICAL EDUCATION 2018; 52:427-437. [PMID: 29349804 PMCID: PMC5901405 DOI: 10.1111/medu.13498] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Revised: 05/10/2017] [Accepted: 10/23/2017] [Indexed: 06/07/2023]
Abstract
CONTEXT Despite their widespread use in medical school selection, there remains a lack of clarity on exactly what situational judgement tests (SJTs) measure. OBJECTIVES We aimed to develop an SJT that measures integrity by combining critical incident interviews (inductive approach) with an innovative deductive approach. The deductive approach guided the development of the SJT according to two established theoretical models, of which one was positively related to integrity (honesty-humility [HH]) and one was negatively related to integrity (cognitive distortions [CD]). The Integrity SJT covered desirable (HH-based) and undesirable (CD-based) response options. We examined the convergent and discriminant validity of the Integrity SJT and compared the validity of the HH-based and CD-based subscores. METHODS The Integrity SJT was administered to 402 prospective applicants at a Dutch medical school. The Integrity SJT consisted of 57 scenarios, each followed by four response options, of which two represented HH facets and two represented CD categories. Three SJT scores were computed, including a total, an HH-based and a CD-based score. The validity of these scores was examined according to their relationships with external integrity-related measures (convergent validity) and self-efficacy (discriminant validity). RESULTS The three SJT scores correlated significantly with all integrity-related measures and not with self-efficacy, indicating convergent and discriminant validity. In addition, the CD-based SJT score correlated significantly more strongly than the HH-based SJT score with two of the four integrity-related measures. CONCLUSIONS An SJT that assesses the ability to correctly recognise CD-based response options as inappropriate (i.e. what one should not do) seems to have stronger convergent validity than an SJT that assesses the ability to correctly recognise HH-based response options as appropriate (i.e. what one should do). 
This finding might be explained by the larger consensus on what is considered inappropriate than on what is considered appropriate in a challenging situation. It may be promising to focus an SJT on the ability to recognise what one should not do.
Affiliation(s)
- Wendy E de Leng, Institute of Medical Education Research Rotterdam, Erasmus Medical Centre, Rotterdam, The Netherlands
- Karen M Stegers‐Jager, Institute of Medical Education Research Rotterdam, Erasmus Medical Centre, Rotterdam, The Netherlands
- Marise Ph Born, Department of Psychology, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Axel P N Themmen, Institute of Medical Education Research Rotterdam, Erasmus Medical Centre, Rotterdam, The Netherlands; Department of Internal Medicine, Erasmus Medical Centre, Rotterdam, The Netherlands
32
Weng Q(D), Yang H, Lievens F, McDaniel MA. Optimizing the validity of situational judgment tests: The importance of scoring methods. JOURNAL OF VOCATIONAL BEHAVIOR 2018. [DOI: 10.1016/j.jvb.2017.11.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
33
Arthur W, Keiser NL, Doverspike D. An Information-Processing-Based Conceptual Framework of the Effects of Unproctored Internet-Based Testing Devices on Scores on Employment-Related Assessments and Tests. HUMAN PERFORMANCE 2017. [DOI: 10.1080/08959285.2017.1403441] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
34
Trejo BC, Richard EM, van Driel M, McDonald DP. Cross-Cultural Competence: The Role of Emotion Regulation Ability and Optimism. MILITARY PSYCHOLOGY 2017. [DOI: 10.1037/mil0000081] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Marinus van Driel, Defense Equal Opportunity Management Institute, Patrick Air Force Base, Florida
- Daniel P. McDonald, Defense Equal Opportunity Management Institute, Patrick Air Force Base, Florida
35
Matošková J, Kovářík M. Development of a Situational Judgment Test as a Predictor of College Student Performance. JOURNAL OF PSYCHOEDUCATIONAL ASSESSMENT 2017. [DOI: 10.1177/0734282916661663] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
It has been suggested that tacit knowledge may be a good predictor of performance in college. The purpose of this study was to investigate the extent to which a situational judgment test developed to measure tacit knowledge correlates with predictors and indicators of college performance. This situational judgment test includes eight situations relevant to the life of college (undergraduate) students and comprises 211 behavioral strategies. Four hundred forty-eight college students participated in the study. The results of this study suggest that tacit knowledge has small, statistically nonsignificant correlations with cumulative grade point average (GPA), the percentage of the academic requirements passed on the first attempt, cognitive abilities, achievement motivation, and attention. However, tacit knowledge was found to correlate moderately with the personality factor of agreeableness. The findings do not support claims about the importance of tacit knowledge in academic settings and call into question what tacit knowledge really is and whether it is a useful construct for performance prediction.
36
Oostrom JK, Köbis NC, Ronay R, Cremers M. False consensus in situational judgment tests: What would others do? JOURNAL OF RESEARCH IN PERSONALITY 2017. [DOI: 10.1016/j.jrp.2017.09.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
37
A Call for Conceptual Models of Technology in I-O Psychology: An Example From Technology-Based Talent Assessment. INDUSTRIAL AND ORGANIZATIONAL PSYCHOLOGY-PERSPECTIVES ON SCIENCE AND PRACTICE 2017. [DOI: 10.1017/iop.2017.70] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The rate of technological change is quickly outpacing today's methods for understanding how new advancements are applied within industrial-organizational (I-O) psychology. To further complicate matters, specific attempts to explain observed differences or measurement equivalence across devices are often atheoretical or fail to explain why a technology should (or should not) affect the measured construct. As a typical example, understanding how technology influences construct measurement in personnel testing and assessment is critical for explaining or predicting other practical issues such as accessibility, security, and scoring. Therefore, theory development is needed to guide research hypotheses, manage expectations, and address these issues at this intersection of technology and I-O psychology. This article is an extension of a Society for Industrial and Organizational Psychology (SIOP) 2016 panel session, which (re)introduces conceptual frameworks that can help explain how and why measurement equivalence or nonequivalence is observed in the context of selection and assessment. We outline three potential conceptual frameworks as candidates for further research, evaluation, and application, and argue for a similar conceptual approach for explaining how technology may influence other psychological phenomena.
38
Tang RW. Watch his deed or examine his words? Exploring the potential of the behavioral experiment method for collecting data to measure culture. CROSS CULTURAL & STRATEGIC MANAGEMENT 2017. [DOI: 10.1108/ccsm-10-2016-0175] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
To address three issues of survey-based methods (i.e. the absence of behaviors, the reference inequivalence, and the lack of cross-cultural interaction), the purpose of this paper is to explore the potential of using the behavioral experiment method to collect cross-cultural data as well as the possibility of measuring culture with the experimental data. Moreover, challenges to this method and possible solutions are elaborated to stimulate further discussion on the use of behavioral experiments in international business/international management (IB/IM) research.
Design/methodology/approach
This paper illustrates the merits and downsides of the proposed method with an ultimatum-game experiment conducted in a behavioral laboratory. The procedure of designing, implementing, and analyzing the behavioral experiment is delineated in detail.
Findings
The exploratory findings show that the ultimatum-game experiment may observe participants’ behaviors with comparable references and allow for cross-cultural interaction. The findings also suggest that the fairness-related cultural value may be calibrated with the horizontal and vertical convergence of cross-cultural behaviors (i.e. people’s deeds), and this calibration may be strengthened by incorporating complementary methods such as a background survey to include people’s words.
Originality/value
The behavioral experiment method illustrated and discussed in this study contributes to the IB/IM literature by addressing three methodological issues that are not widely recognized in the IB/IM literature.
39
De Leng WE, Stegers-Jager KM, Husbands A, Dowell JS, Born MP, Themmen APN. Scoring method of a Situational Judgment Test: influence on internal consistency reliability, adverse impact and correlation with personality? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:243-265. [PMID: 27757558 DOI: 10.1007/s10459-016-9720-7] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Accepted: 10/06/2016] [Indexed: 05/16/2023]
Abstract
Situational Judgment Tests (SJTs) are increasingly used for medical school selection. Scoring an SJT is more complicated than scoring a knowledge test, because there are no objectively correct answers. The scoring method of an SJT may influence the construct and concurrent validity and the adverse impact with respect to non-traditional students. Previous research has compared only a small number of scoring methods and has not studied the effect of scoring method on internal consistency reliability. This study compared 28 different scoring methods for a rating SJT on internal consistency reliability, adverse impact and correlation with personality. The scoring methods varied on four aspects: the way of controlling for systematic error, and the type of reference group, distance statistic and central tendency statistic. All scoring methods were applied to a previously validated integrity-based SJT, administered to 931 medical school applicants. Internal consistency reliability varied between .33 and .73, which is likely explained by the dependence of coefficient alpha on the total score variance. All scoring methods led to significantly higher scores for the ethnic majority than for the non-Western minorities, with effect sizes ranging from 0.48 to 0.66. Eighteen scoring methods showed a significant small positive correlation with agreeableness. Four scoring methods showed a significant small positive correlation with conscientiousness. The way of controlling for systematic error was the most influential scoring-method aspect. These results suggest that the increased use of SJTs for selection into medical school must be accompanied by a thorough examination of the scoring method to be used.
Affiliation(s)
- W E De Leng, Institute of Medical Education Research Rotterdam (iMERR), Erasmus MC, Room AE-239, PO Box 2040, 3000 CA, Rotterdam, The Netherlands
- K M Stegers-Jager, Institute of Medical Education Research Rotterdam (iMERR), Erasmus MC, Room AE-239, PO Box 2040, 3000 CA, Rotterdam, The Netherlands
- A Husbands, Medical School, University of Buckingham, Buckingham, UK
- J S Dowell, School of Medicine, University of Dundee, Dundee, UK
- M Ph Born, Department of Psychology, Erasmus University Rotterdam, Rotterdam, The Netherlands
- A P N Themmen, Institute of Medical Education Research Rotterdam (iMERR), Erasmus MC, Room AE-239, PO Box 2040, 3000 CA, Rotterdam, The Netherlands; Department of Internal Medicine, Erasmus MC, Rotterdam, The Netherlands
40
Skolarus LE, Mazor KM, Sánchez BN, Dome M, Biller J, Morgenstern LB. Development and Validation of a Bilingual Stroke Preparedness Assessment Instrument. Stroke 2017; 48:1020-1025. [PMID: 28250199 DOI: 10.1161/strokeaha.116.015107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2016] [Revised: 12/19/2016] [Accepted: 01/09/2017] [Indexed: 11/16/2022]
Abstract
BACKGROUND AND PURPOSE Stroke preparedness interventions are limited by the lack of psychometrically sound intermediate end points. We sought to develop and assess the reliability and validity of the video-Stroke Action Test (video-STAT), an English and a Spanish video-based test to assess people's ability to recognize and react to stroke signs. METHODS Video-STAT development and testing were divided into 4 phases: (1) video development and community-generated response options, (2) pilot testing in community health centers, (3) administration in a national sample, bilingual sample, and neurologist sample, and (4) administration before and after a stroke preparedness intervention. RESULTS The final version of the video-STAT included 8 videos: 4 acute stroke/emergency, 2 prior stroke/nonemergency, 1 nonstroke/emergency, and 1 nonstroke/nonemergency. Acute stroke recognition and action response were queried after each vignette. Video-STAT scoring was based on the acute stroke vignettes only (score range 0-12, 12 best). The national sample consisted of 598 participants, 438 who took the video-STAT in English and 160 who took the video-STAT in Spanish. There was adequate internal consistency (Cronbach α=0.72). The average video-STAT score was 5.6 (SD=3.6), whereas the average neurologist score was 11.4 (SD=1.3). There was no difference in video-STAT scores between the 116 bilingual video-STAT participants who took the video-STAT in English or Spanish. Compared with baseline scores, the video-STAT scores increased after a stroke preparedness intervention (6.2 versus 8.9, P<0.01) among a sample of 101 black adults and youth. CONCLUSIONS The video-STAT yields reliable scores that seem to be valid measures of stroke preparedness.
Affiliation(s)
- Lesli E Skolarus
- Kathleen M Mazor
- Brisa N Sánchez
- Mackenzie Dome
- José Biller
- Lewis B Morgenstern
- From the Stroke Program (L.E.S., M.D., L.B.M.) and Department of Biostatistics (B.N.S.), University of Michigan, Ann Arbor; Meyers Primary Care Institute, University of Massachusetts Medical School, Worcester (K.M.M.); and Department of Neurology, Loyola University Chicago, Stritch School of Medicine, Maywood, IL (J.B.).
41
Fröhlich M, Kahmann J, Kadmon M. Development and psychometric examination of a German video-based situational judgment test for social competencies in medical school applicants. INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT 2017. [DOI: 10.1111/ijsa.12163] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Affiliation(s)
- Janine Kahmann
- Ruprecht-Karls-University of Heidelberg, Medical Faculty; Im Neuenheimer Feld 155 69120 Heidelberg Germany
| | - Martina Kadmon
- Faculty of Medicine and Health Science; Carl-von-Ossietzky-University of Oldenburg; Oldenburg Germany
42
Social emotional information processing in adults: Development and psychometrics of a computerized video assessment in healthy controls and aggressive individuals. Psychiatry Res 2017; 248:40-47. [PMID: 28012305 PMCID: PMC6178797 DOI: 10.1016/j.psychres.2016.11.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/23/2016] [Revised: 10/25/2016] [Accepted: 11/03/2016] [Indexed: 11/21/2022]
Abstract
A computerized version of an assessment of Social-Emotional Information Processing (SEIP) using audio-video film stimuli instead of written narrative vignettes was developed for use with adult participants. This task allows for an assessment of encoding of relevant/irrelevant social-emotional information, attribution bias, and endorsement of appropriate, physically aggressive, and relationally aggressive responses to aversive social-emotional stimuli. The psychometric properties of this Video-SEIP (V-SEIP) assessment were examined in 75 healthy controls (HC) and 75 individuals with DSM-5 Intermittent Explosive Disorder (IED) and were also compared with the original questionnaire (SEIP-Q) version of the task (HC=26; IED=26). Internal consistency, inter-rater reliability, and test-retest properties of the V-SEIP were good to excellent. In addition, IED participants displayed reduced encoding of relevant information from the film clips, elevated hostile attribution bias, elevated negative emotional responses, and elevated endorsement of physically aggressive and relationally aggressive responses to the ambiguous social-emotional stimuli presented in the V-SEIP. These data indicate that the V-SEIP represents a valid and comprehensive alternative to the paper-and-pencil assessment of social-emotional information processing biases in adults.
43
Anderson R, Thier M, Pitts C. Interpersonal and intrapersonal skill assessment alternatives: Self-reports, situational-judgment tests, and discrete-choice experiments. LEARNING AND INDIVIDUAL DIFFERENCES 2017. [DOI: 10.1016/j.lindif.2016.10.017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
44
MacKenzie RK, Cleland JA, Ayansina D, Nicholson S. Does the UKCAT predict performance on exit from medical school? A national cohort study. BMJ Open 2016; 6:e011313. [PMID: 27855088 PMCID: PMC5073508 DOI: 10.1136/bmjopen-2016-011313] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
OBJECTIVES Most UK medical programmes use aptitude tests during student selection, but large-scale studies of predictive validity are rare. This study assesses the UK Clinical Aptitude Test (UKCAT: http://www.ukcat.ac.uk) and 4 of its subscales, along with individual and contextual socioeconomic background factors, as predictors of performance during, and on exit from, medical school. METHODS This was an observational study of 6294 medical students from 30 UK medical programmes who took the UKCAT from 2006 to 2008, for whom selection data from the UK Foundation Programme (UKFPO), the next stage of UK medical education training, were available in 2013. We included candidate demographics, UKCAT scores (cognitive domains; total scores), the UKFPO Educational Performance Measure (EPM) and the national exit situational judgement test (SJT). Multilevel modelling was used to assess relationships between variables, adjusting for confounders. RESULTS The UKCAT, both as a total score and in terms of the subtest scores, has significant predictive validity for performance on the UKFPO EPM and SJT. UKFPO performance was also positively associated with female gender, maturity, white ethnicity and coming from a higher social class area at the time of application to medical school. An inverse pattern was seen for a contextual measure of school: those attending fee-paying schools performed significantly more weakly on the EPM decile, the EPM total and the total UKFPO score, but not the SJT, than those attending other types of school. CONCLUSIONS This large-scale study, the first to link 2 national databases (UKCAT and UKFPO), has shown that the UKCAT is a predictor of medical school outcome. The data provide modest supportive evidence for the UKCAT's role in student selection. The conflicting relationships of socioeconomic contextual measures (area and school) with outcome add to wider debates about the limitations of these measures and indicate the need for further research.
Affiliation(s)
- R K MacKenzie
- Institute of Education for Medical and Dental Sciences, School of Medicine and Dentistry, University of Aberdeen, Aberdeen, UK
- J A Cleland
- Institute of Education for Medical and Dental Sciences, School of Medicine and Dentistry, University of Aberdeen, Aberdeen, UK
- D Ayansina
- Department of Medical Statistics, College of Life Sciences and Medicine, University of Aberdeen, Aberdeen, UK
- S Nicholson
- Centre for Medical Education, Institute of Health Sciences Education, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, UK
45
Chan D. The Conceptualization and Analysis of Change Over Time: An Integrative Approach Incorporating Longitudinal Mean and Covariance Structures Analysis (LMACS) and Multiple Indicator Latent Growth Modeling (MLGM). ORGANIZATIONAL RESEARCH METHODS 2016. [DOI: 10.1177/109442819814004] [Citation(s) in RCA: 299] [Impact Index Per Article: 37.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The concept of change over time is fundamental to many phenomena investigated in organizational research. This didactically oriented article proposes an integrative approach incorporating longitudinal mean and covariance structures analysis and multiple indicator latent growth modeling to aid organizational researchers in directly addressing fundamental questions concerning the conceptualization and analysis of change over time. The approach is illustrated using a numerical example involving several organizationally relevant variables. Advantages, limitations, and extensions of the approach are discussed.
Affiliation(s)
- David Chan
- Michigan State University; National University of Singapore
46
Ployhart RE, Oswald FL. Applications of Mean and Covariance Structure Analysis: Integrating Correlational and Experimental Approaches. ORGANIZATIONAL RESEARCH METHODS 2016. [DOI: 10.1177/1094428103259554] [Citation(s) in RCA: 121] [Impact Index Per Article: 15.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This article discusses mean and covariance structure (MACS) analysis as a mechanism for testing individual differences and group mean differences within a single integrated framework. The article is most appropriate for those with a basic understanding of confirmatory factor analysis and structural equation modeling but who have not been introduced to mean structures in either model. The article is both a tutorial and an extension of previous research. It is a tutorial because it introduces the MACS framework using more familiar regression terms, proposes a model-testing framework to be followed, provides numerous illustrations and applications, and discusses both conceptual and programming issues (as well as providing the code for the programming). It is also an extension of previous research because it illustrates several novel applications of MACS to organizational research questions, including how to (a) model mean differences across three independent groups, (b) conduct latent pairwise comparisons, (c) conduct latent contrasts, and (d) analyze repeated measures data. As these examples illustrate, applying MACS more frequently in those organizational research designs that call for it could perhaps improve theory testing by helping to integrate experimental and correlational research methods.
47
Vandenberg RJ, Lance CE. A Review and Synthesis of the Measurement Invariance Literature: Suggestions, Practices, and Recommendations for Organizational Research. ORGANIZATIONAL RESEARCH METHODS 2016. [DOI: 10.1177/109442810031002] [Citation(s) in RCA: 4222] [Impact Index Per Article: 527.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The establishment of measurement invariance across groups is a logical prerequisite to conducting substantive cross-group comparisons (e.g., tests of group mean differences, invariance of structural parameter estimates), but measurement invariance is rarely tested in organizational research. In this article, the authors (a) elaborate the importance of conducting tests of measurement invariance across groups, (b) review recommended practices for conducting tests of measurement invariance, (c) review applications of measurement invariance tests in substantive applications, (d) discuss issues involved in tests of various aspects of measurement invariance, (e) present an empirical example of the analysis of longitudinal measurement invariance, and (f) propose an integrative paradigm for conducting sequences of measurement invariance tests.
48
Meade AW, Kroustalis CM. Problems With Item Parceling for Confirmatory Factor Analytic Tests of Measurement Invariance. ORGANIZATIONAL RESEARCH METHODS 2016. [DOI: 10.1177/1094428105283384] [Citation(s) in RCA: 100] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Combining items into parcels in confirmatory factor analysis (CFA) can improve model estimation and fit. Because adequate model fit is imperative for CFA tests of measurement invariance, parcels have frequently been used. However, the use of parcels as indicators in a CFA model can have serious detrimental effects on tests of measurement invariance. Using simulated data with a known lack of invariance, the authors illustrate how models using parcels as indicator variables erroneously indicate that measurement invariance exists much more often than do models using items as indicators. Moreover, item-by-item tests of measurement invariance were often more informative than were tests of the entire parameter matrices.
49
Jackson DJR, LoPilato AC, Hughes D, Guenole N, Shalfrooshan A. The internal structure of situational judgement tests reflects candidate main effects: Not dimensions or situations. JOURNAL OF OCCUPATIONAL AND ORGANIZATIONAL PSYCHOLOGY 2016. [DOI: 10.1111/joop.12151] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Duncan J. R. Jackson
- Department of Organizational Psychology; Birkbeck, University of London; UK
- Faculty of Management; University of Johannesburg; South Africa
- Nigel Guenole
- Department of Psychology; Goldsmiths, University of London; UK
50
Abstract
For many decades, the focus of personnel selection research was on developing selection tests that maximized prediction of job performance; the approach was typically lacking in theoretical bases. The past two decades saw significant shifts in research to a focus on the nature of constructs and their interrelationships, characterized by an approach that emphasizes theoretical understanding of the phenomena under investigation. This article provides an overview of how a construct-oriented approach underlies major current directions in scientific research on personnel selection. Emerging trends that are likely to constitute issues of enduring importance are discussed.