1.
Schroeder RW, Spector J, Snodgrass M, Bieu RK. Validation of PCL-5 symptom validity indices in a cross-cultural forensic sample. J Clin Exp Neuropsychol 2025;47:117-127. [PMID: 40122055] [DOI: 10.1080/13803395.2025.2482650]
Abstract
INTRODUCTION Three symptom validity indices have recently been developed for the PTSD Checklist for DSM-5 (PCL-5). To date, these validity indices have been examined in North American research and clinical samples, generally with promising results. The current study aimed to cross-validate the symptom validity indices in a cross-cultural forensic sample. METHOD Examinees (N = 79) were Balkan (Macedonian, Kosovar, and Serbian) contractors previously employed at United States military bases in Afghanistan and Iraq. Examinees claimed posttraumatic stress disorder (PTSD) secondary to alleged adverse experiences, and they were pursuing Federal Workers' Compensation claims for PTSD under the auspices of the Defense Base Act. In this study, validity status was determined via outcome on the Inventory of Problems-29. RESULTS Most demographic and background variables did not differ significantly between the groups defined by validity status. In contrast, scores on all validity tests differed significantly between examinees who were likely presenting credibly and those who were likely responding non-credibly, with medium to large effect sizes. Area under the curve statistics ranged from .73 to .77. Sensitivity rates ranged from .33 to .47 when specificity was held at .90 or higher. CONCLUSIONS The findings converge well with prior research results, extending the use of PCL-5 symptom validity indices to a cross-cultural forensic sample.
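Several abstracts in this list report sensitivity "when specificity was held at .90 or higher." For readers unfamiliar with that procedure, the sketch below illustrates it with fabricated scores: it selects the most sensitive cutoff whose specificity stays at or above a floor. The function names are mine, and the sketch assumes a validity index on which higher scores indicate non-credible responding; it is not code or data from any study above.

```python
def sens_spec(noncredible, credible, cutoff):
    # A score at or above the cutoff is flagged as non-credible.
    sens = sum(s >= cutoff for s in noncredible) / len(noncredible)
    spec = sum(s < cutoff for s in credible) / len(credible)
    return sens, spec

def best_cutoff(noncredible, credible, min_spec=0.90):
    # Among observed score values, return (sensitivity, cutoff) for the
    # most sensitive cutoff that keeps specificity >= min_spec.
    usable = []
    for c in sorted(set(noncredible) | set(credible)):
        sens, spec = sens_spec(noncredible, credible, c)
        if spec >= min_spec:
            usable.append((sens, c))
    return max(usable) if usable else None

# Fabricated validity-index scores, for illustration only:
noncredible = [14, 15, 17, 18, 20, 22]
credible = [3, 4, 5, 6, 7, 9, 11, 15]
print(best_cutoff(noncredible, credible))
```

With real data, the study's index scores and criterion grouping would be substituted for the fabricated lists.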
Affiliation(s)
- Ryan W Schroeder
- Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
- Makenna Snodgrass
- Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Rachel K Bieu
- Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
2.
Finley JCA, Erdodi LA, Parks TN, Block C, Loring DW, Goldstein FC. Incorrect encoding responses improve the classification accuracy of the Word Choice Test. Appl Neuropsychol Adult 2025:1-8. [PMID: 40084559] [DOI: 10.1080/23279095.2025.2479850]
Abstract
This study investigated whether responses from the Word Choice Test (WCT) encoding trial could provide a supplemental index of performance validity in addition to the traditional Summary score. Participants were 196 adult outpatients who underwent neuropsychological evaluations for various referral reasons, including, but not limited to, epilepsy, stroke, and age-related cognitive decline. Participants were classified into valid or invalid performance groups using a criterion-grouping approach based on multiple independent performance validity tests. We derived a supplemental validity indicator, termed the "Encoding" score, based on the number of correct responses to 43 items on the initial WCT trial, identified via critical item analysis. Using cutoffs of ≤40 for the Encoding score and ≤42 for the Summary score together enhanced classification accuracy, yielding an area under the curve of .83. Compared to using the WCT Summary score alone, the combined use of the Encoding and Summary scores increased the sensitivity by .10 to yield a total sensitivity of .58, while maintaining high (.92) specificity. Findings suggest the WCT Encoding score may provide a useful index of performance validity alongside the Summary score. Employing these indicators together can optimize the WCT without adding cost or much time to the evaluation.
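The abstract describes using the two cutoffs "together" without spelling out the combination rule; a plausible reading, consistent with the reported sensitivity gain at maintained specificity, is that failing either cutoff flags the administration. A minimal sketch of that assumed fail-either rule follows (the function name is mine, not the study's):

```python
def wct_invalid(encoding, summary, encoding_cut=40, summary_cut=42):
    """Assumed fail-either rule: flag the WCT administration as invalid
    when either score falls at or below its cutoff."""
    return encoding <= encoding_cut or summary <= summary_cut

print(wct_invalid(encoding=44, summary=41))  # Summary fails its cutoff
print(wct_invalid(encoding=45, summary=48))  # both scores pass
```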
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Taylor N Parks
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Cady Block
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- David W Loring
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, USA
- Felicia C Goldstein
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
3.
Tyson BT, Pyne SR, Crisan I, Calamia M, Holcomb M, Giromini L, Erdodi LA. Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury. Appl Neuropsychol Adult 2025;32:450-459. [PMID: 36881969] [DOI: 10.1080/23279095.2023.2179400]
Abstract
OBJECTIVE This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale-Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). METHOD The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). RESULTS The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33-.87) and specificity (.92-.98). An age-corrected scaled score of ≤5 on either of the free recall trials on the VPA was specific (.91-.92) and relatively sensitive (.48-.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity, but lower sensitivity (.25-.42). There was no difference in failure rate as a function of TBI severity. CONCLUSIONS In addition to LM, VR and VPA can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.
Affiliation(s)
- Brad T Tyson
- Evergreen Neuroscience Institute, Evergreen Health Medical Center, Kirkland, WA, USA
- Iulia Crisan
- Department of Psychology, West University of Timisoara, Timisoara, Romania
- Matthew Calamia
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Laszlo A Erdodi
- Jefferson Neurobehavioral Group, New Orleans, LA, USA
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
4.
Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A, Giromini L. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Appl Neuropsychol Adult 2024;31:1234-1246. [PMID: 36027614] [DOI: 10.1080/23279095.2022.2115910]
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked to either (a) respond honestly or (b) pretend to suffer from mTBI or (c) pretend to suffer from depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned, and the other half received both a description of the symptoms of the disorder to be feigned and a warning not to over-exaggerate their responses or their presentation would not be credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Affiliation(s)
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian
- Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales
- Department of Psychology, University of Turin, Turin, Italy
- Christina Winters
- Tilburg Institute for Law, Technology, and Society (TLS), Tilburg University, Tilburg, The Netherlands
5.
Boskovic I, Akca AYE. Presenting the consequences of feigning: Does it diminish symptom overendorsement? An analog study. Appl Neuropsychol Adult 2024;31:575-584. [PMID: 35287519] [DOI: 10.1080/23279095.2022.2044329]
Abstract
Feigning has personal and societal consequences in both civil and criminal contexts. We investigated whether presenting the consequences of feigning can diminish symptom endorsement in feigned posttraumatic stress disorder (PTSD). We randomly allocated non-native English-speaking undergraduates (N = 145) to five conditions: 1) Truth tellers (n = 31), 2) Civil context feigners (n = 27), 3) Civil context warned feigners (n = 26), 4) Criminal context feigners (n = 29), and 5) Criminal context warned feigners (n = 32). All feigning groups received a vignette depicting a situation in which claiming PTSD would be beneficial: one vignette concerned a personal injury claim, the other aggravated assault charges. Additionally, one feigning group from each context received information about the consequences of feigning (i.e., warned feigners). After receiving the instructions, all participants were administered the Self-Report Symptom Inventory (SRSI), a measure of symptom endorsement. Truth tellers endorsed fewer symptoms than all feigning groups, which mostly did not differ from one another. However, criminal context warned feigners were detected as overreporters on the SRSI significantly less frequently (59%) than the other feigning groups (86.2%-89%). Hence, emphasizing the negative consequences of overreporting may diminish symptom endorsement, but only in high-stakes situations. The implications and limitations (e.g., online measure administration) of this work are discussed.
Affiliation(s)
- Irena Boskovic
- Erasmus University Rotterdam, Rotterdam, the Netherlands
- Maastricht University, Maastricht, the Netherlands
6.
Boskovic I, Akca AYE, Giromini L. Symptom coaching and symptom validity tests: An analog study using the Structured Inventory of Malingered Symptomatology, Self-Report Symptom Inventory, and Inventory of Problems-29. Appl Neuropsychol Adult 2024;31:626-638. [PMID: 35414324] [DOI: 10.1080/23279095.2022.2057856]
Abstract
In this pilot and exploratory study, we tested the robustness of three self-report symptom validity tests (SVTs) to symptom coaching for depression, with and without additional information available on the Internet. Specifically, we divided our sample (N = 193) so that each subject received either the Structured Inventory of Malingered Symptomatology (SIMS; n = 64), the Self-Report Symptom Inventory (SRSI; n = 66), or the Inventory of Problems-29 (IOP-29; n = 63). Within each of the three subgroups, approximately one third of participants were instructed to respond honestly (Genuine Condition, nSIMS = 21; nSRSI = 24; nIOP-29 = 26) and approximately two-thirds were instructed to feign depression. One half of the feigners were presented with a vignette to increase their compliance with instructions and were given information about symptoms of depression (Coached Feigning, nSIMS = 25; nSRSI = 18; nIOP-29 = 21), and the other half were given the same vignette and information about symptoms of depression, plus two Internet links to review before completing the test (Internet-Coached Feigning, nSIMS = 18; nSRSI = 24; nIOP-29 = 16). Overall, the results showed that the genuine conditions yielded the lowest total scores on all three measures, while the two feigning conditions did not significantly differ from each other. Looking at the detection rates for all feigning participants, all three measures showed satisfactory results, with IOP-29 performing slightly better than SIMS and SIMS performing slightly better than SRSI. Internet-Coached Feigners scored slightly lower on all three measures than feigners who were coached without the Internet links. Taken together, the results of this preliminary and exploratory study suggest that all three SVTs examined are sensitive to feigned depression even in the presence of symptom coaching, both with and without additional Internet-based information.
Affiliation(s)
- Irena Boskovic
- Forensic Psychology Section, Clinical Psychology Department, Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Forensic Psychology Section, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
7.
Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a meta-analysis: The search for the optimal Word Choice Test cutoff continues. Assessment 2023;30:2476-2490. [PMID: 36752050] [DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
8.
Pignolo C, Giromini L, Ales F, Zennaro A. Detection of feigning of different symptom presentations with the PAI and IOP-29. Assessment 2023;30:565-579. [PMID: 34872384] [DOI: 10.1177/10731911211061282]
Abstract
This study examined the effectiveness of the negative distortion measures from the Personality Assessment Inventory (PAI) and Inventory of Problems-29 (IOP-29) by investigating data from a community and a forensic sample, across three different symptom presentations (i.e., feigned depression, posttraumatic stress disorder [PTSD], and schizophrenia). The final sample consisted of 513 community-based individuals and 288 inmates (total N = 801); all were administered the PAI and the IOP-29 under either an honest or a feigning condition. Statistical analyses compared the average scores of each measure by symptom presentation and data source (i.e., community vs. forensic sample) and evaluated diagnostic efficiency statistics. Results suggest that the PAI Negative Impression Management scale and the IOP-29 are the most effective measures across all symptom presentations, whereas the PAI Malingering Index and Rogers Discriminant Function generated less optimal results, especially when considering feigned PTSD. Practical implications are discussed.
9.
Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023;37:617-649. [PMID: 35946813] [DOI: 10.1080/13854046.2022.2073914]
Abstract
OBJECTIVE The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the invalid-before-impaired paradox.
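The multivariate rule reported in the results (≥3 failures at ACSS ≤6, or ≥2 failures at ACSS ≤5, across the four subtests) can be written out directly. A minimal sketch, with a function name of my own choosing (the D-KEFS Stroop Index itself is not reproduced here):

```python
def multivariate_invalid(acss):
    """Apply the abstract's multivariate rule to the four D-KEFS Stroop
    age-corrected scaled scores: invalid if >=3 subtests score <=6,
    or >=2 subtests score <=5."""
    if len(acss) != 4:
        raise ValueError("expected four subtest scores")
    return sum(s <= 6 for s in acss) >= 3 or sum(s <= 5 for s in acss) >= 2

print(multivariate_invalid([6, 6, 6, 9]))    # three failures at <=6
print(multivariate_invalid([5, 5, 10, 10]))  # two failures at <=5
print(multivariate_invalid([6, 6, 9, 10]))   # neither criterion met
```

Counting failures across subtests, rather than relying on any single cutoff, is what buffers the rule against the sample-specific fluctuations the conclusion describes.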
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
10.
Edwards MJ, Yogarajah M, Stone J. Why functional neurological disorder is not feigning or malingering. Nat Rev Neurol 2023;19:246-256. [PMID: 36797425] [DOI: 10.1038/s41582-022-00765-z]
Abstract
Functional neurological disorder (FND) is one of the commonest reasons that people seek help from a neurologist and is for many people a lifelong cause of disability and impaired quality of life. Although the evidence base regarding FND pathophysiology, treatment and service development has grown substantially in recent years, a persistent ambivalence remains amongst health professionals and others as to the veracity of symptom reporting in those with FND and whether the symptoms are not, in the end, just the same as feigned symptoms or malingering. Here, we provide our perspective on the range of evidence available, which in our view provides a clear separation between FND and feigning and malingering. We hope this will provide a further important step forward in the clinical and academic approach to people with FND, leading to improved attitudes, knowledge, treatments, care pathways and outcomes.
Affiliation(s)
- Mark J Edwards
- Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Mahinda Yogarajah
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
- National Hospital for Neurology and Neurosurgery, University College London Hospitals, London, UK
- Epilepsy Society, London, UK
- Jon Stone
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
11.
Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take their word for it: The Inventory of Problems provides valuable information on both symptom and performance validity. J Pers Assess 2022:1-11. [PMID: 36041087] [DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIMPAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIMPAI (rIOP-29 = .34; rNIMPAI = .06), generating similar overall correct classification values (OCCIOP-29: 79-81%; OCCNIMPAI: 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (rIOP-M = .79; rTOMM-1 = .59), generating similar overall correct classification values (OCCIOP-M: 89-91%; OCCTOMM-1: 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and domain specificity should be considered while calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
12.
Uiterwijk D, Stargatt R, Crowe SF. Objective cognitive outcomes and subjective emotional sequelae in litigating adults with a traumatic brain injury: The impact of performance and symptom validity measures. Arch Clin Neuropsychol 2022;37:1662-1687. [PMID: 35704852] [DOI: 10.1093/arclin/acac039]
Abstract
OBJECTIVE This study examined the relative contribution of performance and symptom validity in litigating adults with traumatic brain injury (TBI), as a function of TBI severity, and examined the relationship between self-reported emotional symptoms and cognitive tests scores while controlling for validity test performance. METHOD Participants underwent neuropsychological assessment between January 2012 and June 2021 in the context of compensation-seeking claims related to a TBI. All participants completed a cognitive test battery, the Personality Assessment Inventory (including symptom validity tests; SVTs), and multiple performance validity tests (PVTs). Data analyses included independent t-tests, one-way ANOVAs, correlation analyses, and hierarchical multiple regression. RESULTS A total of 370 participants were included. Atypical PVT and SVT performance were associated with poorer cognitive test performance and higher emotional symptom report, irrespective of TBI severity. PVTs and SVTs had an additive effect on cognitive test performance for uncomplicated mTBI, but less so for more severe TBI. The relationship between emotional symptoms and cognitive test performance diminished substantially when validity test performance was controlled, and validity test performance had a substantially larger impact than emotional symptoms on cognitive test performance. CONCLUSION Validity test performance has a significant impact on the neuropsychological profiles of people with TBI, irrespective of TBI severity, and plays a significant role in the relationship between emotional symptoms and cognitive test performance. Adequate validity testing should be incorporated into every neuropsychological assessment, and associations between emotional symptoms and cognitive outcomes that do not consider validity testing should be interpreted with extreme caution.
Affiliation(s)
- Daniel Uiterwijk
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F Crowe
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
13.
Giromini L, Pasqualini S, Corgiat Loia A, Pignolo C, Di Girolamo M, Zennaro A. A survey of practices and beliefs of Italian psychologists regarding malingering and symptom validity assessment. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09452-2]
Abstract
A few years ago, an article describing the current status of Symptom Validity Assessment (SVA) practices and beliefs in European countries reported that there was little research activity in Italy (Merten et al., 2013). The same article also highlighted that Italian practitioners were less inclined to use Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in their assessments compared with their colleagues from other major European countries. Considering that several articles on malingering and SVA have been published by Italian authors in recent years, we concluded that an update on the practices and beliefs of Italian professionals regarding malingering and SVA would be beneficial. Accordingly, from a larger survey that examined the general psychological assessment practices and beliefs of Italian professionals, we extracted a subset of items specifically related to malingering and SVA and analyzed the responses of a sample of Italian psychologists with some experience of malingering-related assessments. Taken together, the results of our analyses indicated that even though our respondents tend to use SVTs and PVTs relatively often in their evaluations, at this time they likely place more trust in their own personal observations, impressions, and overall clinical judgment in their SVA practice. Additionally, our results indicated that Italian practitioners with some familiarity with malingering-related evaluations consider malingering to occur in about one-third of psychological assessments in which the evaluee might have an interest in overreporting.
14.
Abstract
Are personality traits related to symptom overreporting and/or symptom underreporting? With this question in mind, we evaluated studies from 1979 to 2020 (k = 55) in which personality traits were linked to scores on stand-alone validity tests, including symptom validity tests (SVTs) and measures of socially desirable responding (SDR) and/or supernormality. As to symptom overreporting (k = 14), associations with depression, alexithymia, apathy, dissociation, and fantasy proneness varied widely from weak to strong (rs = .27 to .79). For underreporting (k = 41), inconsistent links (rs = −.43 to .63) were found with narcissism, whereas alexithymia and dissociation were often associated with lower SDR tendencies, although effect sizes were small. Taken together, the extant literature mainly consists of cross-sectional studies on single traits and contexts, mostly offering weak correlations that do not necessarily reflect causation. What this field lacks is an overarching theory relating traits to symptom reporting. Longitudinal studies involving a broad range of traits, samples, and incentives would be informative. Until such studies have been done, traits are best viewed as modest concomitants of symptom distortion.
15.
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing performance validity cutoffs on the Boston Naming Test (BNT) is specific, but insensitive to non-credible responding. Dev Neuropsychol 2022;47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
16
Giromini L, Viglione DJ. Assessing Negative Response Bias with the Inventory of Problems-29 (IOP-29): a Quantitative Literature Review. PSYCHOLOGICAL INJURY & LAW 2022. [DOI: 10.1007/s12207-021-09437-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
17
Examining Base Rates of Symptom Endorsement and the Roles of Sex and Depressive Symptoms on the Structured Inventory of Malingered Symptomology (SIMS) in a Non-clinical Population. PSYCHOLOGICAL INJURY & LAW 2022. [DOI: 10.1007/s12207-021-09439-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
18
Monaro M, Bertomeu CB, Zecchinato F, Fietta V, Sartori G, De Rosario Martínez H. The detection of malingering in whiplash-related injuries: a targeted literature review of the available strategies. Int J Legal Med 2021; 135:2017-2032. [PMID: 33829284 PMCID: PMC8354940 DOI: 10.1007/s00414-021-02589-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 03/26/2021] [Indexed: 11/29/2022]
Abstract
OBJECTIVE The present review is intended to provide an up-to-date overview of the strategies available to detect malingered symptoms following whiplash. Whiplash-associated disorders (WADs) represent the most common traffic injuries, having a major impact on economic and healthcare systems worldwide. Heterogeneous symptoms that may arise following whiplash injuries are difficult to objectify and are normally determined based on self-reported complaints. These elements, together with the litigation context, make fraudulent claims particularly likely. Crucially, at present, there is no clear evidence of the instruments available to detect malingered WADs. METHODS We conducted a targeted literature review of the methodologies adopted to detect malingered WADs. Relevant studies were identified via Medline (PubMed) and Scopus databases published up to September 2020. RESULTS Twenty-two methodologies are included in the review, grouped into biomechanical techniques, clinical tools applied to forensic settings, and cognitive-based lie detection techniques. Strengths and weaknesses of each methodology are presented, and future directions are discussed. CONCLUSIONS Despite the variety of techniques that have been developed to identify malingering in forensic contexts, the present work highlights the current lack of rigorous methodologies for the assessment of WADs that take into account both the heterogeneous nature of the syndrome and the possibility of malingering. We conclude that it is pivotal to promote awareness about the presence of malingering in whiplash cases and highlight the need for novel, high-quality research in this field, with the potential to contribute to the development of standardised procedures for the evaluation of WADs and the detection of malingering.
Affiliation(s)
- Merylin Monaro
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Chema Baydal Bertomeu
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- Francesca Zecchinato
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Valentina Fietta
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Helios De Rosario Martínez
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
19
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. [PMID: 34467368 DOI: 10.1093/arclin/acab066] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/21/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, with recommended batteries for neuropsychological evaluations in this population lacking suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine if a logistic regression-derived PVT formula can be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. Six-, five-, and three-predictor equations yielded acceptable discriminability (AUC = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
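The logistic-regression approach described above (combining several embedded PVT scores into one composite, then reading sensitivity off the ROC curve while holding specificity at or above .90) can be sketched as follows. The data are synthetic, scikit-learn is assumed to be available, and all variable names are illustrative rather than taken from the study.

```python
# Sketch of a logistic-regression-derived multivariate PVT composite.
# Synthetic data only; group sizes mirror the abstract (146 credible, 38 noncredible).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n_cred, n_noncred = 146, 38
X_cred = rng.normal(50, 10, size=(n_cred, 3))     # 3 embedded PVT scores per case
X_non  = rng.normal(42, 10, size=(n_noncred, 3))  # somewhat lower when noncredible
X = np.vstack([X_cred, X_non])
y = np.concatenate([np.zeros(n_cred), np.ones(n_noncred)])  # 1 = noncredible

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]   # composite probability of noncredible performance
auc = roc_auc_score(y, p)

# Sensitivity at the most liberal threshold that keeps specificity >= .90
fpr, tpr, thresholds = roc_curve(y, p)
ok = fpr <= 0.10                   # specificity = 1 - false positive rate
sens_at_spec90 = tpr[ok].max()
```

In practice such a formula would be derived on one sample and cross-validated on another; fitting and evaluating on the same cases, as in this sketch, overstates discriminability.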
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
20
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986 DOI: 10.3233/nre-218020] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; MAge = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
21
The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021; 11:brainsci11081039. [PMID: 34439658 PMCID: PMC8393330 DOI: 10.3390/brainsci11081039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 07/30/2021] [Accepted: 08/01/2021] [Indexed: 11/16/2022] Open
Abstract
Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants that were instructed to simulate cognitive impairment performed less accurately than honest controls in the MPMT (n = 67). Importantly, the MPMT has shown an adequate discrimination capacity, though somewhat lower than an established PVT (i.e., Test of Memory Malingering-TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may, therefore, increase the range of available PVTs at the disposal of clinicians, though further validation in clinical settings is mandated. The fact that it is an open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
22
The Call for Aid (Cry for Help) in Psychological Injury and Law: Reinterpretation, Mechanisms, and a Call for Research. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09414-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
23
Rogers R, Otal TK, Velsor SF, Pan M. How good are inpatients at feigning Miranda abilities?: An investigation of the Miranda Quiz, Inventory of Legal Knowledge, and Structured Inventory of Malingered Symptomatology. BEHAVIORAL SCIENCES & THE LAW 2021; 39:245-261. [PMID: 33851430 DOI: 10.1002/bsl.2506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Revised: 01/06/2021] [Accepted: 02/06/2021] [Indexed: 06/12/2023]
Abstract
The current study represents the first investigation into feigned Miranda abilities using an inpatient population. We investigated the use of a very generic measure (i.e., the Structured Inventory of Malingered Symptomatology, or SIMS) as well as two specialized forensic feigning measures: the Miranda Quiz (MQ) and Inventory of Legal Knowledge (ILK). With a quasi-random assignment, 82 acute inpatients were evenly distributed to "feigning" and "genuine" groups. The recommended SIMS cut score > 14 performed poorly, misclassifying three-quarters of the genuine group as feigning. In general, sensitivities on the specialized scales were constrained by the general lack of severe decrements for the feigning group. However, specificities were strong to outstanding. In particular, the MQ floor effect showed some promise but was limited by its small number of items. The strongest potential was observed for the revised ILK scales, especially the Revised Clinical ILK (RC-ILK). When using single-point cut scores on two prior correctional samples, the RC-ILK produced excellent sensitivities (0.94 and 0.96) and outstanding specificities (0.98 and 0.99). Methodological issues and professional implications were discussed in the context of feigned Miranda abilities.
Affiliation(s)
- Richard Rogers
- Department of Psychology, University of North Texas, Denton, Texas, USA
- Tanveer K Otal
- Department of Psychology, University of North Texas, Denton, Texas, USA
- Sarah F Velsor
- Department of Psychology, University of North Texas, Denton, Texas, USA
- Minqi Pan
- Department of Psychology, University of North Texas, Denton, Texas, USA
24
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205 PMCID: PMC8267081 DOI: 10.1177/00315125211019704] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFTFCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFTFCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFTFCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFTFCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
25
Šömen MM, Lesjak S, Majaron T, Lavopa L, Giromini L, Viglione D, Podlesek A. Using the Inventory of Problems-29 (IOP-29) with the Inventory of Problems Memory (IOP-M) in Malingering-Related Assessments: a Study with a Slovenian Sample of Experimental Feigners. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09412-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
26
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09402-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
27
Boskovic I, Zwaan L, Baillie V, Merckelbach H. Consistency does not aid detection of feigned symptoms, overreporting does: Two explorative studies on symptom stability among truth tellers and feigners. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1458-1466. [PMID: 33761304 DOI: 10.1080/23279095.2021.1888728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Practitioners always want to exclude the possibility that a patient is feigning symptoms. Some experts have suggested that an inconsistent symptom presentation across time (i.e., intraindividual variability) is indicative of feigning. We investigated how individuals with genuine pain-related symptoms (truth tellers; Study 1 n = 32; Study 2 n = 48) and people feigning such complaints (feigners; Study 1 n = 32; Study 2 n = 28) rated the intensity of their symptoms across a 5-day period. In both studies, feigners reported on all 5 days significantly higher symptom intensities than people with genuine complaints, but the two groups did not differ with regard to symptom (in)consistency. Thus, persistently inflated, rather than inconsistent, reports of symptom intensity over time are suggestive of feigning. The implications and limitations of our work are discussed.
Affiliation(s)
- Irena Boskovic
- Faculty of Psychology and Neuroscience, Forensic Psychology Section, Maastricht University, Maastricht, The Netherlands
- Erasmus School of Social and Behavioural Sciences, Clinical Psychology Department, Forensic Psychology Section, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Lisette Zwaan
- Faculty of Psychology and Neuroscience, Forensic Psychology Section, Maastricht University, Maastricht, The Netherlands
- Erasmus School of Social and Behavioural Sciences, Clinical Psychology Department, Forensic Psychology Section, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Victoria Baillie
- Faculty of Psychology and Neuroscience, Forensic Psychology Section, Maastricht University, Maastricht, The Netherlands
- Harald Merckelbach
- Faculty of Psychology and Neuroscience, Forensic Psychology Section, Maastricht University, Maastricht, The Netherlands
28
Goldenson J, Josefowitz N. Remote Forensic Psychological Assessment in Civil Cases: Considerations for Experts Assessing Harms from Early Life Abuse. PSYCHOLOGICAL INJURY & LAW 2021; 14:89-103. [PMID: 33758640 PMCID: PMC7970781 DOI: 10.1007/s12207-021-09404-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 02/21/2021] [Indexed: 01/12/2023]
Abstract
The COVID-19 pandemic has brought to the fore the question of whether psycho-legal assessments can be executed remotely in a manner that adheres to the rigorous standards applied during in-person assessments. General guidelines have evolved, but to date, there are no explicit directives about whether and how to proceed. This paper reviews professional, ethical, and legal challenges that experts should consider before conducting such an evaluation remotely. Although the discussion is more widely applicable, remote forensic psychological assessment of adults alleging childhood abuse is used as an example throughout, due to the complexity of these cases, the ethical dilemmas they can present, and the need to carefully assess non-verbal trauma-related symptoms. The use of videoconferencing technology is considered in terms of potential benefits of this medium, as well as challenges this method could pose to aspects of interviewing and psychometric testing. The global pandemic is also considered with respect to its effects on functioning and mental health and the confounding impact such a crisis has on assessing the relationship between childhood abuse and current psychological functioning. Finally, for those evaluators who want to engage in remote assessment, practice considerations are discussed.
Affiliation(s)
- Julie Goldenson
- Department of Applied Psychology and Human Behaviour, Institute for Studies in Education, University of Toronto, 252 Bloor Street West, Toronto, ON M5S 1V6, Canada
- Nina Josefowitz
- Department of Applied Psychology and Human Behaviour, Institute for Studies in Education, University of Toronto, 252 Bloor Street West, Toronto, ON M5S 1V6, Canada
29
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 s was specific (0.86-0.93) to invalid performance. Among patients, an FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 s produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
30
Abstract
Cognitive and psychoemotional impairment after traumatic brain injury is often underrated, especially after mild injury. Even subtle problems can considerably interfere with routine functioning. They require precise psychotherapeutic diagnostics and adequate neuropsychological treatment. Early detection and documentation of the initial symptoms, and initiation of further steps, are mandatory, particularly during first-line surgical management.
Affiliation(s)
- Ludwig Linsl
- Psychotraumatologie und Neuropsychologie, BG Unfallklinik Murnau, Prof.-Küntscher-Str. 8, 82418, Murnau, Germany
32
The Eggshell and Crumbling Skull Plaintiff: Psychological and Legal Considerations for Assessment. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09392-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
33
Foote WE, Goodman-Delahunty J, Young G. Civil Forensic Evaluation in Psychological Injury and Law: Legal, Professional, and Ethical Considerations. PSYCHOLOGICAL INJURY & LAW 2020; 13:327-353. [PMID: 33250954 PMCID: PMC7683260 DOI: 10.1007/s12207-020-09398-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2020] [Accepted: 11/04/2020] [Indexed: 11/24/2022]
Abstract
Psychologists who work as therapists or administrators, or who engage in forensic practice in criminal justice settings, find it daunting to transition into practice in civil cases involving personal injury, namely psychological injury from the psychological perspective. In civil cases, psychological injury arises from allegedly deliberate or negligent acts of the defendant(s) that the plaintiff contends caused psychological conditions to appear. These alleged acts are disputed in courts and other tribunals. Conditions considered in psychological injury cases include posttraumatic stress disorder, depression, chronic pain conditions, and sequelae of traumatic brain injury. This article outlines a detailed case sequence from referral through the end of expert testimony to guide the practitioner to work effectively in this field of practice. It addresses the rules and regulations that govern admissibility of expert evidence in court. The article provides ethical and professional guidance throughout, including best practices in assessment and testing, and emphasizes evidence-based forensic practice.
Affiliation(s)
- Gerald Young
- Glendon College, York University, Toronto, Canada
34
Young G. Thirty Complexities and Controversies in Mild Traumatic Brain Injury and Persistent Post-concussion Syndrome: a Roadmap for Research and Practice. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09395-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
35
Fokas KF, Brovko JM. Assessing Symptom Validity in Psychological Injury Evaluations Using the MMPI-2-RF and the PAI: an Updated Review. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09393-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
36
Paix EL, Tweedy SM, Connick MJ, Beckman EM. Differentiating maximal and submaximal voluntary strength measures for the purposes of medico-legal assessments and para sport classification: A systematic review. Eur J Sport Sci 2020; 21:1518-1550. [PMID: 33028160 DOI: 10.1080/17461391.2020.1834623] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Measurement of maximum voluntary muscle contractions is effort-dependent: valid measurement requires maximal voluntary effort (MVE) from participants. Submaximal efforts (SMEs) yield invalid and potentially misleading results. This is particularly problematic in medico-legal and Para sport assessments where low strength scores may confer a personal advantage. Therefore, objective methods for accurately differentiating MVE and SME are required. This systematic review aimed to identify, appraise and synthesise evidence from scientific studies evaluating the validity of objective methods for differentiating MVE from SME during maximal voluntary contractions. Four electronic databases were searched for original research articles published in English and secondary references appraised for relevance, yielding 25 studies for review. Methods were categorised based on eight distinct underlying theories. For isokinetic strength assessment, methods based on two theories - Strength-measure Ratios and Inter-Trial Strength Consistency - correctly classified 100% MVE and > 92% SME. Consequently, research evaluating the relative suitability of these methods for translation into practice is warranted. During isometric strength assessments, methods based on Deceptive Visual Feedback and Force-length properties warrant further investigation. Both methods yielded statistically significant differences between MVE and SME, with minimal overlap in values, but their sensitivity and specificity have not been evaluated.
Affiliation(s)
- Emily L Paix
- School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Australia
- Sean M Tweedy
- School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Australia
- Mark J Connick
- School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Australia
- Emma M Beckman
- School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Australia
37
Dandachi-FitzGerald B, Merckelbach H, Bošković I, Jelicic M. Do You Know People Who Feign? Proxy Respondents About Feigned Symptoms. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09387-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
We asked students, clinicians, and people from the general population attending a public university lecture (n = 401) whether they knew others who (had) feigned symptoms. We also asked about the type of symptoms and the motives involved. A slight majority of proxy respondents (59%) reported that they knew a person who (had) feigned symptoms, and 34% knew a person who had admitted to them having feigned symptoms. According to our respondents, the most often feigned symptoms were headache/migraine, common cold/fever, and stomachache/nausea, and the most important reasons for doing so were sick leave from work, excusing a failure, and seeking attention from others. We conclude that feigning is part of the normal behavioral repertoire of people and has little to do with deviant personality traits and/or criminal motives. Also, the current emphasis in the neuropsychological literature on malingering, i.e., feigning motivated by external incentives, might be one-sided given that psychological motives, notably seeking attention from others and excuse making, seem to be important determinants of everyday feigning.
|
38
|
van Minnen A, van Dalen B, Voorendonk EM, Wagenmans A, de Jongh A. The effects of symptom overreporting on PTSD treatment outcome. Eur J Psychotraumatol 2020; 11:1794729. [PMID: 33029329 PMCID: PMC7473171 DOI: 10.1080/20008198.2020.1794729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
BACKGROUND It is often assumed that individuals with posttraumatic stress disorder (PTSD) who overreport their symptoms should be excluded from trauma-focused treatments. OBJECTIVE To investigate the effects of a brief, intensive trauma-focused treatment programme for individuals with PTSD who overreport symptoms. METHODS Individuals (n = 205) with PTSD participated in an intensive trauma-focused treatment programme consisting of EMDR, prolonged exposure (PE) therapy, physical activity, and psycho-education. Assessments took place at pre- and post-treatment (Structured Inventory of Malingered Symptomatology, SIMS; Clinician-Administered PTSD Scale for DSM-5, CAPS-5). RESULTS Using a high SIMS cut-off of 24 or above, 14.1% (n = 29) had elevated SIMS scores (i.e. 'overreporters'). The overreporters showed significant decreases in PTSD symptoms, and their treatment results did not differ significantly from those of the other patients. Although some of these patients (35.5%) remained overreporters at post-treatment, SIMS scores decreased significantly during treatment. CONCLUSION The results suggest that intensive trauma-focused treatment is feasible and safe not only for PTSD patients in general but also for individuals who overreport their symptoms.
Affiliation(s)
- Agnes van Minnen
- Behavioural Science Institute (BSI), Radboud University Nijmegen, Nijmegen, The Netherlands; Research Department, PSYTREC, Bilthoven, The Netherlands
- Birgit van Dalen
- Research Department, PSYTREC, Bilthoven, The Netherlands
- Eline M Voorendonk
- Behavioural Science Institute (BSI), Radboud University Nijmegen, Nijmegen, The Netherlands; Research Department, PSYTREC, Bilthoven, The Netherlands
- Anouk Wagenmans
- Research Department, PSYTREC, Bilthoven, The Netherlands
- Ad de Jongh
- Research Department, PSYTREC, Bilthoven, The Netherlands; Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam and VU University Amsterdam, Amsterdam, The Netherlands; School of Health Sciences, Salford University, Manchester, UK; Institute of Health and Society, University of Worcester, Worcester, UK; School of Psychology, Queen's University, Belfast, Northern Ireland
|
39
|
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
|
40
|
Identifying Novel Embedded Performance Validity Test Formulas Within the Repeatable Battery for the Assessment of Neuropsychological Status: a Simulation Study. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09382-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
41
|
Winters CL, Giromini L, Crawford TJ, Ales F, Viglione DJ, Warmelink L. An Inventory of Problems-29 (IOP-29) study investigating feigned schizophrenia and random responding in a British community sample. Psychiatry Psychol Law 2020; 28:235-254. [PMID: 34712094 PMCID: PMC8547855 DOI: 10.1080/13218719.2020.1767720] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Compared to other Western countries, malingering research is still relatively scarce in the United Kingdom, partly because only a few brief and easy-to-use symptom validity tests (SVTs) have been validated for use with British test-takers. This online study examined the validity of the Inventory of Problems-29 (IOP-29) in detecting feigned schizophrenia and random responding in 151 British volunteers. Each participant completed three IOP-29 administrations: (a) responding honestly; (b) pretending to suffer from schizophrenia; and (c) responding at random. They also completed a schizotypy measure (O-LIFE) under standard instructions. The IOP-29's feigning scale (FDS) showed excellent validity in discriminating honest responding from feigned schizophrenia (AUC = .99), and its classification accuracy was not significantly affected by the presence of schizotypal traits. Additionally, a recently introduced IOP-29 scale aimed at detecting random responding (RRS) demonstrated very promising results.
Affiliation(s)
- Francesca Ales
- Department of Psychology, University of Turin, Torino, Italy
- Donald J Viglione
- California School of Professional Psychology, Alliant International University, San Diego, CA, USA
- Lara Warmelink
- Department of Psychology, Lancaster University, Lancaster, UK
|
42
|
Verifiability and Symptom Endorsement in Genuine, Exaggerated, and Malingered Pain. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09375-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
The current study investigated whether pure malingering, in which reported symptoms are nonexistent, partial malingering, in which existent symptoms are exaggerated, and genuine symptoms could be differentiated by applying the verifiability approach (VA) and the Self-Report Symptom Inventory (SRSI). The logic behind the VA is that deceivers' statements contain more non-verifiable information, whereas truth tellers' accounts include more verifiable details. The SRSI taps into over-reporting by including a mix of genuine symptoms and implausible complaints (pseudosymptoms). We examined whether participants (N = 167) allocated to one of three conditions (pure malingerers vs. exaggerators vs. truth tellers) could be differentiated by the (non)verifiability of their pain symptom reports and by symptom endorsement. Findings revealed that deceptive reports were lengthier than truthful statements. However, this difference was not produced by a discrepancy in non-verifiable details, but rather by a higher production of verifiable information among malingerers and exaggerators. Thus, contrary to previous findings, our results indicate that pain reports rich in verifiable information should raise doubt about their veracity. Further, truth tellers endorsed fewer symptoms of the SRSI than exaggerators did, but not fewer than pure malingerers. Pure malingerers and exaggerators did not differ in symptom endorsement. Thus, our findings revealed that, compared with truth tellers, exaggerators exhibited stronger over-reporting tendencies than (pure) malingerers. However, due to inconsistent findings, further investigation of the efficacy of these methods in differentiating between exaggerated and malingered reports is required.
|
43
|
Comparison of Clinical Psychologist and Physician Beliefs and Practices Concerning Malingering: Results from a Mixed Methods Study. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09374-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
44
|
Schmidt T, Krüger M, Ullmann U. [Base Rate of Probable Malingering and its Indicators in the Assessment of Mental Disorders - Retrospective Analysis of a Sample of Forensic Psychological Evaluations]. Rehabilitation 2020; 59:231-236. [PMID: 32252123 DOI: 10.1055/a-1122-5233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
PURPOSE To estimate the base rate of malingering, in relation to changes in the complexity of assessment, in a sample of forensic psychological evaluations. METHODS We performed a retrospective analysis of 1175 psychological evaluations over the course of 16 years (2000-2015). RESULTS With the use of increasingly complex methods, inconsistencies were reported more frequently, and a higher rate of feigning (47.2%) in symptom validity testing was noted. However, the overall rate of malingering was only 15.8%. A uniform multi-method approach ensured that no assessment bias developed across evaluators. CONCLUSION The observed rate of malingering matches recent reviews that call into question estimations which had yielded substantially higher rates. Symptom validity tests serve as an important decision-making tool for psychological evaluators. For the overall assessment, however, other possible inconsistencies must also be taken into account.
Affiliation(s)
- Thomas Schmidt
- Medical Psychology, BG-Klinikum Bergmannstrost, Halle, Germany
- Martin Krüger
- Medical Psychology, BG-Klinikum Bergmannstrost, Halle, Germany
- Utz Ullmann
- Medical Psychology, BG-Klinikum Bergmannstrost, Halle, Germany
|
45
|
Olla P, Rykulski N, Hurtubise JL, Bartol S, Foote R, Cutler L, Abeare K, McVinnie N, Sabelli AG, Hastings M, Erdodi LA. Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients. Appl Neuropsychol Adult 2019; 28:647-657. [PMID: 31790276 DOI: 10.1080/23279095.2019.1681424] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
This observational study examined the acute cognitive effects of cannabis. We hypothesized that cognitive performance would be negatively affected by acute cannabis intoxication. Twenty-two medical cannabis patients from Southwestern Ontario completed the study. The majority (n = 13) were male. Mean age was 36.0 years, and mean level of education was 13.7 years. Participants were administered the same brief neurocognitive battery three times during a six-hour period: at baseline ("Baseline"), once after they consumed a 20% THC cannabis product ("THC"), and once again several hours later ("Recovery"). The average self-reported level of cannabis intoxication prior to the second assessment (i.e., during THC) was 5.1 out of 10. Contrary to expectations, performance on neuropsychological tests remained stable or even improved during the acute intoxication stage (THC; d: .49-.65, medium effect), and continued to increase during Recovery (d: .45-.77, medium-large effect). Interestingly, the failure rate on performance validity indicators increased during THC. Contrary to our hypothesis, there was no psychometric evidence for a decline in cognitive ability following THC intoxication. There are several possible explanations for this finding but, in the absence of a control group, no definitive conclusion can be reached at this time.
Affiliation(s)
- Nicholas Rykulski
- College of Human Medicine, Michigan State University, Lansing, MI, USA
- Stephen Bartol
- School of Medicine, Wayne State University, Detroit, MI, USA
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nora McVinnie
- Brain-Cognition-Neuroscience Program, University of Windsor, Windsor, ON, Canada
- Alana G Sabelli
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maurissa Hastings
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
|
46
|
Merten T, Kaminski A, Pfeiffer W. Prevalence of overreporting on symptom validity tests in a large sample of psychosomatic rehabilitation inpatients. Clin Neuropsychol 2019; 34:1004-1024. [PMID: 31775575 DOI: 10.1080/13854046.2019.1694073] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Objective: Noncredible symptom claims, regularly expected in forensic contexts, may also occur in clinical and rehabilitation referral contexts. Hidden motives and secondary gain expectations may play a significant role in clinical patients. We studied the prevalence of noncredible symptom reporting in patients treated for minor mental disorders in an inpatient setting. Method: Five hundred thirty-seven clinical inpatients of a psychosomatic rehabilitation center were studied (mean age: 50.2 years; native speakers of German). They were referred for treatment of depression, anxiety, somatoform disorder, adjustment disorder, and neurasthenia. Results of two symptom validity tests (Structured Inventory of Malingered Symptomatology, SIMS; Self-Report Symptom Inventory, SRSI) and the Beck Depression Inventory-II (BDI-II) were analyzed. Results: At screening level, 34.5% and 29.8% of the patients were found to presumably overreport symptoms on the SIMS and SRSI, respectively. At the standard cut score of the SRSI (maximum false positive rate: 5%), the proportion of diagnosed overreporting was 18.8%. SIMS and SRSI pseudosymptom endorsement correlated at .73. Highly elevated depressive symptom claims, with BDI-II scores above 40, were found in 9.3% of the patients and were associated with elevated pseudosymptom endorsement. Moreover, extended periods of sick leave and higher expectations of a disability pension were associated with elevated pseudosymptom endorsement. Conclusions: The prevalence of symptom overreporting in some clinical patient groups is a serious yet underinvestigated problem. The current estimates yielded a high prevalence of distorted, noncredible symptom claims in psychosomatic rehabilitation patients. The challenges facing health professionals working in such settings are immense and need more consideration.
Affiliation(s)
- Thomas Merten
- Department of Neurology, Vivantes Klinikum im Friedrichshain, Berlin, Germany
|
47
|
|
48
|
Performance Validity in Collegiate Football Athletes at Baseline Neurocognitive Testing. J Head Trauma Rehabil 2019; 34:E20-E31. [DOI: 10.1097/htr.0000000000000451] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
49
|
Kaufman NK, Bush SS, Aguilar MR. What Attorneys and Factfinders Need to Know About Mild Traumatic Brain Injuries. Psychol Inj Law 2019. [DOI: 10.1007/s12207-019-09355-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
50
|
Velsor S, Rogers R. Differentiating factitious psychological presentations from malingering: Implications for forensic practice. Behav Sci Law 2019; 37:1-15. [PMID: 30225846 DOI: 10.1002/bsl.2365] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Revised: 05/31/2018] [Accepted: 06/05/2018] [Indexed: 06/08/2023]
Abstract
Practitioners and researchers have long been challenged with identifying deceptive response styles in forensic contexts, particularly when differentiating malingering from factitious presentations. The origins and development of factitious disorders as a diagnostic classification are discussed, as well as the many challenges and limitations of the current diagnostic conceptualization. As an alternative to a formal diagnosis, forensic practitioners may choose to consider most factitious psychological presentations (FPPs) as a dimensional construct that, like malingering, is classified as a V code. Building on Rogers' central motivations for malingering, the current article provides four explanatory models for FPPs; three of these parallel malingering (pathogenic, criminological, and adaptational) but differ in their central features. In addition, the nurturance model stresses how patients with FPPs attempt to use their relationship with treating professionals to fulfill their unmet psychological needs. Relying on these models, practical guidelines are recommended for evaluating FPPs in a forensic context.
|