1
Applicant faking warnings: Are they really effective? Personality and Individual Differences 2023. [DOI: 10.1016/j.paid.2022.111899]
2
Bill B, Melchers KG. Thou shalt not lie! Exploring and testing countermeasures against faking intentions and faking in selection interviews. International Journal of Selection and Assessment 2022. [DOI: 10.1111/ijsa.12402]
Affiliation(s)
- Benedikt Bill
- Abteilung Arbeits‐ und Organisationspsychologie, Institut für Psychologie und Pädagogik, Universität Ulm, Ulm, Germany
- Klaus G. Melchers
- Abteilung Arbeits‐ und Organisationspsychologie, Institut für Psychologie und Pädagogik, Universität Ulm, Ulm, Germany
3
Röhner J, Holden RR. Challenging response latencies in faking detection: The case of few items and no warnings. Behav Res Methods 2022; 54:324-333. [PMID: 34173217] [PMCID: PMC8863730] [DOI: 10.3758/s13428-021-01636-z]
Abstract
Faking detection is an ongoing challenge in psychological assessment. A notable approach for detecting fakers involves the inspection of response latencies and is based on the congruence model of faking. According to this model, respondents who fake good will provide favorable responses (i.e., congruent answers) faster than they provide unfavorable (i.e., incongruent) responses. Although the model has been validated in various experimental faking studies, to date, research supporting the congruence model has focused on scales with large numbers of items. Furthermore, in this previous research, fakers have usually been warned that faking could be detected. In view of the trend to use increasingly shorter scales in assessment, it becomes important to investigate whether the congruence model also applies to self-report measures with small numbers of items. In addition, it is unclear whether warning participants about faking detection is necessary for a successful application of the congruence model. To address these issues, we reanalyzed data sets of two studies that investigated faking good and faking bad on extraversion (n = 255) and need for cognition (n = 146) scales. Reanalyses demonstrated that having only a few items per scale and not warning participants represent a challenge for the congruence model. The congruence model of faking was only partly confirmed under such conditions. Although faking good on extraversion was associated with the expected longer latencies for incongruent answers, all other conditions remained nonsignificant. Thus, properties of the measurement and properties of the procedure affect the successful application of the congruence model.
Affiliation(s)
- Jessica Röhner
- Department of Psychology, Otto-Friedrich-Universität Bamberg, D-96045, Bamberg, Germany
- Ronald R Holden
- Department of Psychology, Queen's University, Kingston, Canada
4
Impact of a mid-test warning on the personality-cognitive ability relationship in a field setting. Personality and Individual Differences 2021. [DOI: 10.1016/j.paid.2020.110452]
5
Arthur W, Hagen E, George F. The lazy or dishonest respondent: Detection and prevention. Annual Review of Organizational Psychology and Organizational Behavior 2021. [DOI: 10.1146/annurev-orgpsych-012420-055324]
Abstract
Self-report measures are characterized as being susceptible to threats associated with deliberate dissimulation or response distortion (i.e., social desirability responding) and careless responding. Careless responding typically arises in low-stakes settings (e.g., participating in a study for course credit) where some respondents are not motivated to respond in a conscientious manner to the items. In contrast, in high-stakes assessments (e.g., prehire assessments), because of the outcomes associated with their responses, respondents are motivated to present themselves in as favorable a light as possible and, thus, may respond dishonestly in an effort to accomplish this objective. In this article, we draw a distinction between the lazy respondent, which we associate with careless responding, and the dishonest respondent, which we associate with response distortion. We then seek to answer the following questions for both careless responding and response distortion: (a) What is it? (b) Why is it a problem or concern? (c) Why do people engage in it? (d) How pervasive is it? (e) How can it be prevented or mitigated? (f) How is it detected? (g) What does one do when one detects it? We conclude with a discussion of suggested future research directions and some practical guidelines for practitioners and researchers.
Affiliation(s)
- Winfred Arthur
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas 77843-4235, USA
- Ellen Hagen
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas 77843-4235, USA
- Felix George
- Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas 77843-4235, USA
6
Zheng L, Shen Y, Liu J, Zhao G, Hua J, Fan J. Examining the effect of mid-test warnings on factor structure of personality scores: A field experiment. Personality and Individual Differences 2021. [DOI: 10.1016/j.paid.2020.110323]
7
Abstract
Many investigators have noted “reverse-coding” method factors when exploring response pattern structure with psychological inventory data. The current article probes for the existence of a confound in these investigations, whereby an item’s level of saturation with socially desirable content tends to covary with the item’s substantive scale keying. We first investigate its existence, demonstrating that 15 of 16 measures that have been previously implicated as exhibiting a reverse-scoring method effect can also be reasonably characterized as exhibiting a scoring key/social desirability confound. A second set of analyses targets the extent to which the confounding variable may confuse interpretation of factor analytic results and documents strong social desirability associations. The results suggest that assessment developers should consider the social desirability scale value of indicators when constructing scale aggregates (and possibly scales when investigating inter-construct associations). Future investigations would ideally disentangle the confound via experimental manipulation.
Affiliation(s)
- John T. Kulas
- Department of Psychology, Saint Cloud State University, MN, USA
- Rachael Klahr
- Department of Psychology, Saint Cloud State University, MN, USA
- Lindsey Knights
- Department of Psychology, Saint Cloud State University, MN, USA
8
Lopez FJ, Hou N, Fan J. Reducing faking on personality tests: Testing a new faking‐mitigation procedure in a U.S. job applicant sample. International Journal of Selection and Assessment 2019. [DOI: 10.1111/ijsa.12265]
Affiliation(s)
- Ning Hou
- Department of Management and Entrepreneurship, St. Cloud State University, St. Cloud, Minnesota
- Jinyan Fan
- Department of Psychology, Auburn University, Auburn, Alabama
9
Niessen ASM, Meijer RR. On the use of broadened admission criteria in higher education. Perspectives on Psychological Science 2018; 12:436-448. [PMID: 28544866] [DOI: 10.1177/1745691616683050]
Abstract
There is an increasing interest in the use of broadened criteria for admission to higher education, often assessed through noncognitive instruments. We argue that there are several reasons why, despite some significant progress, the use of noncognitive predictors to select students is problematic in high-stakes educational selection and why the incremental validity will often be modest, even when studied in low-stakes contexts. Furthermore, we comment on the use of broadened admission criteria in relation to reducing adverse impact of testing on some groups, and we extend the literature by discussing an approach based on behavioral sampling, which showed promising results in Europe. Finally, we provide some suggestions for future research.
Affiliation(s)
- A Susan M Niessen
- Department of Psychometrics and Statistics, University of Groningen, The Netherlands
- Rob R Meijer
- Department of Psychometrics and Statistics, University of Groningen, The Netherlands
10
Visser R, Schaap P. Job applicants’ attitudes towards cognitive ability and personality testing. South African Journal of Human Resource Management 2017. [DOI: 10.4102/sajhrm.v15i0.877]
Abstract
Orientation: Growing research has shown that not only test validity considerations but also the test-taking attitudes of job applicants are important in the choice of selection instruments, as these can contribute to test performance and the perceived fairness of the selection process.
Research purpose: The main purpose of this study was to determine the test-taking attitudes of a diverse group of job applicants towards personality and cognitive ability tests administered conjointly online as part of employee selection in a financial services company in South Africa.
Motivation for the study: If users understand how job applicants view specific test types, they will know which assessments are perceived more negatively and how this situation can potentially be rectified.
Research design, approach and method: A non-experimental and cross-sectional survey design was used. An adapted version of the Test Attitude Survey was used to determine job applicants’ attitudes towards tests administered online as part of an employee selection process. The sample consisted of a group of job applicants (N = 160) who were diverse in terms of ethnicity, age, and the educational level applicable for sales and supervisory positions.
Main findings: On average, the job applicants responded equally positively to the cognitive ability and personality tests. The African job applicants had a significantly more positive attitude towards the tests than the other groups, and candidates applying for the sales position viewed the cognitive ability tests significantly less positively than the personality test.
Practical and managerial implications: The choice of selection tests used in combination, as well as the applicable testing conditions, should be considered carefully, as these factors can potentially influence the test-taking motivation and general test-taking attitudes of job applicants.
Contribution: This study consolidated the research findings on the determinants of attitudinal responses to cognitive ability and personality testing and produced valuable empirical findings on job applicants’ attitudes towards both test types when administered conjointly.
11
König CJ, Steiner Thommen LA, Wittwer AM, Kleinmann M. Are observer ratings of applicants’ personality also faked? Yes, but less than self-reports. International Journal of Selection and Assessment 2017. [DOI: 10.1111/ijsa.12171]
Affiliation(s)
- Cornelius J. König
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Martin Kleinmann
- Psychologisches Institut, Universität Zürich, Zürich, Switzerland
12
Niessen ASM, Meijer RR, Tendeiro JN. Measuring non-cognitive predictors in high-stakes contexts: The effect of self-presentation on self-report instruments used in admission to higher education. Personality and Individual Differences 2017. [DOI: 10.1016/j.paid.2016.11.014]
13
König CJ, Mura M, Schmidt J. Applicants’ strategic use of extreme or midpoint responses when faking personality tests. Psychol Rep 2015; 117:429-36. [PMID: 26444843] [DOI: 10.2466/03.02.pr0.117c21z2]
Abstract
Faking, the intentional distortion of answers to personality tests, is likely a complex process. In particular, participants in previous research have mentioned that they used different kinds of strategies to appear more hirable, including systematically more extreme or more midpoint responses. However, quantitative evidence is still lacking. An experiment was conducted in which 327 students (173 women, 153 men, 1 not indicated; M age = 22.1 yr., SD = 2.8) were randomly assigned to two groups. Hypothetical job advertisements primed the participants into believing that the hiring company preferred a person with either a "strong" (Strong Character group) or a "well-balanced" character (Well-balanced Character group). The participants filled out 40 items, chosen from four established questionnaires, that were neither socially desirable nor undesirable. The responses to these items were used to calculate two extreme response measures and one midpoint response measure. The Strong Character group used extreme scores more often than the Well-balanced Character group (and midpoint scores less often), independently of mean differences. This suggests that fakers use more sophisticated strategies than is often assumed.