1
Finley JCA, Tufty LM, Abalos SA, Keszycki R, Woloszyn M, Shapiro G, Cerny BM, Ulrich DM, Phillips MS, Robinson AD, Soble JR. Identifying Factors that Increase False-Positive Rates on Embedded Performance Validity Testing in ADHD Evaluations. Arch Clin Neuropsychol 2025; 40:445-455. PMID: 39492660. DOI: 10.1093/arclin/acae099.
Abstract
OBJECTIVE This study investigated why certain embedded performance validity indicators (EVIs) are prone to higher false-positive rates (FPRs) in attention-deficit/hyperactivity disorder (ADHD) evaluations. The first aim was to establish the relationship between FPRs and 15 EVIs derived from six cognitive tests when used independently and together among adults with ADHD who have valid test performance. The second aim was to determine which specific EVIs increase the FPRs in this population. METHOD Participants were 517 adult ADHD referrals with valid neurocognitive test performance as determined by multiple performance validity tests and established empirical criteria. FPRs were defined as the proportion of participants who scored below an empirically established EVI cutoff with ≥0.90 specificity. RESULTS EVIs derived from two of the six tests exhibited unacceptably high FPRs (>10%) when used independently, but the total FPR decreased to 8.1% when the EVIs were aggregated. Several EVIs within a sustained attention test were associated with FPRs around 11%. EVIs without demographically adjusted cutoffs, specifically for race, were associated with higher FPRs, around 14%. Conversely, FPRs did not significantly differ based on whether EVIs included timed versus untimed, verbal versus nonverbal, or graphomotor versus non-graphomotor components, or whether they used raw versus standardized cut scores. CONCLUSIONS Findings suggest that practitioners should consider both the type of test from which an EVI is derived and the aggregate number of EVIs employed to minimize FPRs in ADHD evaluations. Findings also indicate that more nuanced approaches to validity test selection and development are needed.
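To make the aggregation logic concrete, here is a minimal Python sketch (simulated data, not the study's EVIs) of how per-indicator FPRs calibrated to roughly 0.90 specificity combine under a multi-failure decision rule when the indicators are correlated:

```python
# Minimal sketch (simulated data): per-EVI false-positive rates (FPRs) in a
# valid-performance sample, and the aggregate FPR under a multi-failure rule.
# The 15 indicators, the correlation structure, and the >=k-failure rules are
# illustrative assumptions, not the study's actual EVIs.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_evis = 517, 15

# Correlated scores via a shared latent ability factor (valid performers only).
ability = rng.normal(size=(n_patients, 1))
scores = 0.6 * ability + 0.8 * rng.normal(size=(n_patients, n_evis))

# Calibrate each EVI cutoff to ~0.90 specificity (10th percentile of scores).
cutoffs = np.percentile(scores, 10, axis=0)
fails = scores <= cutoffs                       # True = EVI failure

print("per-EVI FPRs:", np.round(fails.mean(axis=0), 3))
for k in (1, 2, 3):                             # aggregate rule: fail >= k EVIs
    agg_fpr = (fails.sum(axis=1) >= k).mean()
    print(f"aggregate FPR, >= {k} failures: {agg_fpr:.3f}")
```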
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, 420 E Superior St, Chicago, IL, 60611, USA
- Logan M Tufty
- Department of Psychology, University of Illinois Chicago College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Steven A Abalos
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Rachel Keszycki
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, 420 E Superior St, Chicago, IL, 60611, USA
- Mary Woloszyn
- Department of Psychology, University of Illinois Chicago College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Greg Shapiro
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Department of Clinical Psychology, The Chicago School, 325 N Wells St, Chicago, IL, 60654, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Department of Psychology, Illinois Institute of Technology, 10 W 35th St, Chicago, IL, 60616, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Anthony D Robinson
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
- Department of Neurology, University of Illinois College of Medicine, 1853 W Polk St, Chicago, IL, 60612, USA
2
Hromas G, Rolin S, Davis JJ. Racial differences in positive findings on embedded performance validity tests. Appl Neuropsychol Adult 2025; 32:28-36. PMID: 36416227. PMCID: PMC10203055. DOI: 10.1080/23279095.2022.2146504.
Abstract
INTRODUCTION Embedded performance validity tests (PVTs) may show increased positive findings in racially diverse examinees. This study examined positive findings in an older adult sample of African American (AA) and European American (EA) individuals recruited as part of a study on aging and cognition. METHOD The project involved secondary analysis of deidentified National Alzheimer's Coordinating Center data (N = 22,688). Exclusion criteria included diagnosis of dementia (n = 5,550), mild cognitive impairment (MCI; n = 5,160), impaired but not MCI (n = 1,126), other race (n = 864), and abnormal Mini Mental State Examination (MMSE < 25; n = 135). The initial sample included 9,853 participants (16.4% AA). Propensity score matching was used to match AA and EA participants on age, education, sex, and MMSE score. The final sample included 3,024 individuals, 50% of whom identified as AA. Premorbid ability estimates were calculated from demographics. Failure rates on five raw-score and six age-adjusted scaled-score PVTs were examined by race. RESULTS Age, education, sex, MMSE, and premorbid ability estimates did not differ significantly by race. Thirteen percent of AA and 3.8% of EA participants failed two or more raw-score PVTs (p < .0001). On age-adjusted PVTs, 20.6% of AA and 5.9% of EA participants failed two or more (p < .0001). CONCLUSIONS PVT failure rates were significantly higher among AA participants. Findings indicate a need for cautious interpretation of embedded PVTs with underrepresented groups. Adjustments to embedded PVT cutoffs may need to be considered to improve diagnostic accuracy.
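A minimal sketch of the matching step described above, assuming a logistic-regression propensity model and greedy 1:1 nearest-neighbor matching within a caliper; the data, caliper width, and variable names are illustrative, not the NACC dataset:

```python
# Minimal sketch of 1:1 propensity score matching on age, education, sex, and
# MMSE. All values below are simulated and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.binomial(1, 0.2, n)                 # 1 = AA, 0 = EA (illustrative)
X = np.column_stack([
    rng.normal(70 - 2 * group, 8),              # age
    rng.normal(14 - 1 * group, 3),              # years of education
    rng.binomial(1, 0.6, n),                    # sex
    np.clip(rng.normal(28, 1.5, n), 25, 30),    # MMSE (>= 25, per exclusions)
])

# Propensity = P(group = 1 | covariates) from a logistic model.
ps = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)[:, 1]

# Greedy nearest-neighbor matching without replacement within a caliper
# (0.2 SD of the propensity score is a common, but assumed, choice).
treated = np.where(group == 1)[0]
controls = list(np.where(group == 0)[0])
caliper = 0.2 * ps.std()
pairs = []
for t in treated:
    dists = np.abs(ps[controls] - ps[t])
    j = int(np.argmin(dists))
    if dists[j] <= caliper:
        pairs.append((t, controls.pop(j)))

print(f"matched {len(pairs)} AA/EA pairs out of {treated.size} AA participants")
```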
Affiliation(s)
- Gabrielle Hromas
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Summer Rolin
- Department of Rehabilitation Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Jeremy J Davis
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
3
Hammond JB, Lichtenstein JD. Just the Tip of the Iceberg: A Brief Report of the Tip-of-the-Tongue Score as an Embedded Validity Indicator for the Children's Auditory and Visual Naming Tests. Arch Clin Neuropsychol 2024:acae117. PMID: 39709637. DOI: 10.1093/arclin/acae117.
Abstract
OBJECTIVE To explore the tip-of-the-tongue (TOT) scores from the Children's Auditory and Visual Naming Tests (cANT, cVNT) as embedded validity indicators (EVIs). METHOD A retrospective study of 98 consecutively referred youth aged 6-15 years (M = 11.28, SD = 2.80) who completed neuropsychological evaluation at a tertiary-care academic medical center. RESULTS Invalid performance (i.e., ≥2 failed PVTs) occurred in 12.2% of the sample, with base rates of failure on individual PVTs ranging from 1.0% to 30.6%. Area under the curve (AUC) was statistically significant for the auditory (AUC = 0.811, p = .004) but not the visual TOT score. Logistic regression indicated that combining both TOT scores with other PVTs increased correct identification of invalid performance to 85.7%, versus 75% without TOT scores. CONCLUSION The utility of the TOT score as a language-based EVI is one of many potential advantages of the cANT and cVNT over other confrontation naming tests. To confirm this, future studies with more diverse populations are warranted.
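A minimal sketch of the two analyses reported above: the AUC of a single embedded score, then the incremental classification gain from adding it to a logistic model containing other PVTs. All data and effect sizes are simulated, not cANT/cVNT results:

```python
# Minimal sketch (simulated data): single-score AUC and incremental value of
# adding that score to a multivariate logistic model. Names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 98
invalid = rng.binomial(1, 0.12, n)                 # ~12% invalid base rate
other_pvts = rng.normal(0, 1, (n, 3)) - 1.0 * invalid[:, None]  # lower = worse
tot_score = rng.normal(0, 1, n) + 1.5 * invalid    # more TOTs when invalid

# Discriminability of the TOT score on its own.
print("TOT AUC:", round(roc_auc_score(invalid, tot_score), 3))

# Incremental value: logistic model with vs. without the TOT score.
X_base, X_full = other_pvts, np.column_stack([other_pvts, tot_score])
base = LogisticRegression().fit(X_base, invalid)
full = LogisticRegression().fit(X_full, invalid)
print("AUC without TOT:", round(roc_auc_score(invalid, base.predict_proba(X_base)[:, 1]), 3))
print("AUC with TOT:   ", round(roc_auc_score(invalid, full.predict_proba(X_full)[:, 1]), 3))
```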
Affiliation(s)
- Jared B Hammond
- Departments of Neurology and Psychiatry, University of Rochester Medical Center, School of Medicine and Dentistry, Rochester, NY, USA
- Jonathan D Lichtenstein
- Departments of Psychiatry, Pediatrics, and The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
4
Kanser RJ, Rohling ML, Davis JJ. Determining whether false positive rates increase with performance validity test battery expansion. Clin Neuropsychol 2024:1-13. PMID: 39415334. DOI: 10.1080/13854046.2024.2416543.
Abstract
OBJECTIVE Performance validity test (PVT) misclassification is an important concern for neuropsychologists. The present study determined whether expanding PVT analysis from 4 to 8 PVTs could lead to elevated rates of false-positive performance validity classifications. METHOD Retrospective analysis of 443 patients who underwent a fixed neuropsychological test battery in a mixed clinical and forensic setting. Rates of failing two PVTs were compared to those predicted by Monte Carlo simulations when PVT analysis extended from 4 to 8 PVTs. Indeterminate performers (IDT; n = 42; those who failed two PVTs only after PVT analysis extended from 4 to 8 PVTs) were compared to a PVT-Fail group (n = 148; those who failed two PVTs in the 4-PVT battery or failed >2 PVTs). RESULTS The rate of failing two PVTs remained stable when PVT analysis extended from 4 to 8 PVTs (12.9% to 11.9%) and was significantly lower than rates predicted by Monte Carlo simulations. Compared to the PVT-Fail group, the IDT group was significantly younger, had stronger neuropsychological test performance, and demonstrated comparable rates of forensic referral and of conditions with known neurocognitive sequelae (e.g., stroke, moderate-to-severe TBI). CONCLUSIONS Monte Carlo simulations significantly overestimated the rate of individuals failing two PVTs as PVT battery length doubled. The IDT group did not differ from the PVT-Fail group on variables with known PVT effects (e.g., age, referral context, neurologic diagnoses), lowering concern that this group consists entirely of false-positive PVT classifications. More research is needed to determine the effect of PVT battery length on validity classification accuracy.
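The Monte Carlo prediction at issue can be reproduced in a few lines. The sketch below uses assumptions of mine (a fixed 10% per-test false-positive rate and a simple latent-factor model for correlated failures) to show why independence inflates the predicted rate of failing ≥2 PVTs as a battery doubles:

```python
# Minimal sketch: predicted P(fail >= 2 PVTs) for a valid examinee under an
# independent-failure model versus a correlated-failure model. The 10% per-test
# rate and the latent-factor weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_sims, p_fail = 100_000, 0.10

for n_pvts in (4, 8):
    # Independent-failure model (the textbook Monte Carlo prediction).
    indep = rng.binomial(1, p_fail, (n_sims, n_pvts)).sum(axis=1)
    # Correlated-failure model: a shared latent trait drives all PVT scores.
    latent = rng.normal(size=(n_sims, 1))
    scores = 0.6 * latent + 0.8 * rng.normal(size=(n_sims, n_pvts))
    corr = (scores <= np.quantile(scores, p_fail)).sum(axis=1)
    print(f"{n_pvts} PVTs: P(fail >= 2) independent = {(indep >= 2).mean():.3f}, "
          f"correlated = {(corr >= 2).mean():.3f}")
```

Because real PVT failures share variance, the correlated model grows much more slowly with battery length, which is the direction of the discrepancy the study reports.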
Affiliation(s)
- Robert J Kanser
- Department of Physical Medicine and Rehabilitation, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Jeremy J Davis
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
5
Finley JCA, Rodriguez VJ, Cerny BM, Chang F, Brooks JM, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Comparing embedded performance validity indicators within the WAIS-IV Letter-Number Sequencing subtest to Reliable Digit Span among adults referred for evaluation of attention-deficit/hyperactivity disorder. Clin Neuropsychol 2024; 38:1647-1666. PMID: 38351710. DOI: 10.1080/13854046.2024.2315738.
Abstract
Objectives: This study investigated the Wechsler Adult Intelligence Scale-Fourth Edition Letter-Number Sequencing (LNS) subtest as an embedded performance validity indicator among adults undergoing an attention-deficit/hyperactivity disorder (ADHD) evaluation, and its potential incremental value over Reliable Digit Span (RDS). Method: This cross-sectional study comprised 543 adults who underwent neuropsychological evaluation for ADHD. Patients were divided into valid (n = 480) and invalid (n = 63) groups based on multiple criterion performance validity tests. Results: LNS total raw scores, age-corrected scaled scores, and age- and education-corrected T-scores demonstrated excellent classification accuracy (areas under the curve of .84, .83, and .82, respectively). The optimal cutoffs for the LNS raw score (≤16), age-corrected scaled score (≤7), and age- and education-corrected T-score (≤36) yielded .51 sensitivity and .94 specificity. Slightly lower sensitivity (.40) and higher specificity (.98) were associated with a more conservative T-score cutoff of ≤33. Multivariate models incorporating both LNS and RDS improved classification accuracy (area under the curve of .86), and LNS scores explained a significant but modest proportion of variance in validity status above and beyond RDS. Chaining the LNS T-score cutoff of ≤33 with the RDS cutoff of ≤7 increased sensitivity to .69 while maintaining ≥.90 specificity. Conclusions: Findings provide preliminary evidence for the criterion and construct validity of LNS as an embedded validity indicator in ADHD evaluations. Practitioners are encouraged to use an LNS T-score cutoff of ≤33 or ≤36 to assess the validity of obtained test data. Employing either of these LNS cutoffs with RDS may enhance the detection of invalid performance.
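A minimal sketch of cutoff selection under a specificity floor, plus an either-fails chaining rule like the one described above; the score distributions and resulting cutoffs are simulated and illustrative, not the published LNS/RDS values:

```python
# Minimal sketch: choose the most sensitive cutoff that keeps specificity
# >= .90, then chain two indicators. All scores are simulated assumptions.
import numpy as np

rng = np.random.default_rng(4)
valid = rng.normal(10, 3, 480)        # scaled scores, valid group (illustrative)
invalid = rng.normal(6, 3, 63)        # invalid group scores run lower

def best_cutoff(valid_scores, invalid_scores, min_spec=0.90):
    """Largest failing cutoff (score <= cutoff) keeping specificity >= min_spec."""
    for c in np.sort(np.unique(np.concatenate([valid_scores, invalid_scores])))[::-1]:
        spec = (valid_scores > c).mean()
        if spec >= min_spec:
            return c, (invalid_scores <= c).mean(), spec
    raise ValueError("no cutoff satisfies the specificity floor")

c1, sens1, spec1 = best_cutoff(valid, invalid)
print(f"cutoff <= {c1:.1f}: sensitivity {sens1:.2f}, specificity {spec1:.2f}")

# Chaining a second indicator with an either-fails rule: sensitivity rises, but
# with independent indicators specificity drops multiplicatively -- which is
# why chained batteries pair this rule with more conservative per-test cutoffs.
valid2, invalid2 = rng.normal(10, 3, 480), rng.normal(7, 3, 63)
c2, _, _ = best_cutoff(valid2, invalid2)
chain_sens = ((invalid <= c1) | (invalid2 <= c2)).mean()
chain_spec = ((valid > c1) & (valid2 > c2)).mean()
print(f"chained: sensitivity {chain_sens:.2f}, specificity {chain_spec:.2f}")
```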
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Violeta J Rodriguez
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Fini Chang
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois Chicago College of Medicine, Chicago, IL, USA
6
Grewal KS, Trites M, Kirk A, MacDonald SWS, Morgan D, Gowda-Sookochoff R, O'Connell ME. CVLT-II short form forced choice recognition in a clinical dementia sample: Cautions for performance validity assessment. Appl Neuropsychol Adult 2024; 31:839-848. PMID: 35635794. DOI: 10.1080/23279095.2022.2079088.
Abstract
Performance validity tests are susceptible to false positives from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups, and how severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; n = 85), amnestic mild cognitive impairment (a-MCI; n = 17), and dementia due to Alzheimer's disease (AD; n = 50). Significant group differences in FCR were observed using one-way ANOVA; post hoc analysis indicated the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction; CDR-SOB analyses were non-significant. Increases in impairment disproportionately degraded FCR performance for persons with AD, so caution is warranted in applying research-based performance validity cutoffs to dementia populations. Future research should examine whether CVLT-II-SF FCR is appropriately specific for best-practice testing batteries for dementia.
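A minimal sketch of the moderation model described above (FCR regressed on diagnostic group, MMSE, and their interaction), using simulated data and illustrative effect sizes rather than the clinic sample:

```python
# Minimal sketch (simulated data): moderation of FCR by group x MMSE severity.
# Group proportions, score ranges, and slopes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 152
group = rng.choice(["SCI", "aMCI", "AD"], n, p=[0.56, 0.11, 0.33])
mmse = np.clip(rng.normal(np.where(group == "AD", 22, 28), 2, n), 10, 30)

# FCR out of 9: AD performance degrades disproportionately as MMSE falls.
fcr = 9 - 0.1 * (30 - mmse) - np.where(group == "AD", 0.25 * (30 - mmse), 0)
fcr = np.clip(fcr + rng.normal(0, 0.5, n), 0, 9)

df = pd.DataFrame({"fcr": fcr, "group": group, "mmse": mmse})
model = smf.ols("fcr ~ C(group, Treatment('SCI')) * mmse", data=df).fit()
print(model.summary().tables[1])   # main effects and the group x MMSE interaction
```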
Affiliation(s)
- Karl S Grewal
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Michaella Trites
- Department of Psychology, University of Victoria, Victoria, Canada
- Andrew Kirk
- Department of Medicine, University of Saskatchewan, Saskatoon, Canada
- Debra Morgan
- Canadian Centre for Health and Safety in Agriculture, University of Saskatchewan, Saskatoon, Canada
- Megan E O'Connell
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
7
Lace JW, Sanborn V, Galioto R. Standalone Performance Validity Tests May Be Differentially Related to Measures of Working Memory, Processing Speed, and Verbal Memory in Patients With Multiple Sclerosis. Assessment 2024; 31:732-744. PMID: 37303186. DOI: 10.1177/10731911231178289.
Abstract
Cognitive functioning may account for minimal levels (i.e., 5%-14%) of variance in performance validity test (PVT) scores in clinical examinees. The present study extended this research in two ways: (a) by determining the variance cognitive functioning explains within three distinct PVTs and (b) by doing so in a sample of patients with multiple sclerosis (pwMS). Seventy-five pwMS (Mage = 48.50, 70.6% female, 80.9% White) completed the Victoria Symptom Validity Test (VSVT), Word Choice Test (WCT), Dot Counting Test (DCT), and three objective measures of working memory, processing speed, and verbal memory as part of clinical neuropsychological assessment. Regression analyses in credible groups (ns ranged from 54 to 63) indicated that cognitive functioning explained 24% to 38% of the variance in logarithmically transformed PVT variables. The contribution of cognitive testing differed across PVTs: verbal memory significantly influenced both VSVT and WCT scores; working memory influenced VSVT and DCT scores; and processing speed influenced DCT scores. Of the included PVTs, the WCT appeared least related to cognitive functioning. Alternative plausible explanations were discussed, including the apparent domain/modality specificity of PVTs versus the potential sensitivity of these PVTs to neurocognitive dysfunction in pwMS. Continued psychometric investigation into factors affecting performance validity, especially in multiple sclerosis, is warranted.
Affiliation(s)
- John W Lace
- Cleveland Clinic Foundation, OH, USA
- Prevea Health, Green Bay, WI, USA
- Victoria Sanborn
- Kent State University, OH, USA
- VA Boston Healthcare System, Boston, MA, USA
- Rachel Galioto
- Cleveland Clinic Foundation, Mellen Center for Multiple Sclerosis, OH, USA
8
Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. PMID: 37703401. DOI: 10.1080/23279095.2023.2256440.
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
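A minimal sketch of the "fail any m of k" aggregation rule discussed above, applied to simulated, correlated EVI scores; the specific indicators, cutoffs, and effect sizes are illustrative, not the study's measures:

```python
# Minimal sketch (simulated data): sensitivity/specificity of flagging >=2
# failures of 6 EVIs versus >=3 of 10. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_valid, n_invalid, k = 500, 85, 10

def simulate(n, shift):
    """Correlated EVI scores via a latent factor; shift lowers invalid scores."""
    latent = rng.normal(size=(n, 1))
    return 0.6 * latent + 0.8 * rng.normal(size=(n, k)) - shift

valid, invalid = simulate(n_valid, 0.0), simulate(n_invalid, 1.0)
cutoffs = np.percentile(valid, 10, axis=0)     # each EVI at ~.90 specificity

for idx, m in ((slice(0, 6), 2), (slice(0, 10), 3)):
    flag_v = (valid[:, idx] <= cutoffs[idx]).sum(axis=1) >= m
    flag_i = (invalid[:, idx] <= cutoffs[idx]).sum(axis=1) >= m
    print(f"fail >= {m} of {idx.stop}: specificity {1 - flag_v.mean():.2f}, "
          f"sensitivity {flag_i.mean():.2f}")
```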
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
9
Kanser RJ, Logan PM, Steward KA, Vanderbleek EN, Kamper JE. Specificity of Embedded Performance Validity Tests in Elderly Veterans with Mild and Major Neurocognitive Disorder. Arch Clin Neuropsychol 2022:acac106. PMID: 36578198. DOI: 10.1093/arclin/acac106.
Abstract
OBJECTIVE This study explored the specificity of four embedded performance validity tests (PVTs) derived from common neuropsychological tasks in a sample of older veterans with verified cognitive decline whose performance was deemed valid by licensed psychologists. METHOD Participants were 180 veterans who underwent comprehensive neuropsychological evaluation, were determined to have valid performance following profile analysis/conceptualization, and were diagnosed with mild neurocognitive disorder (i.e., MCI; n = 64) or major neurocognitive disorder (i.e., dementia; n = 116). All participants completed at least one of four embedded PVTs: Reliable Digit Span (RDS), California Verbal Learning Test-2nd ed. Short Form (CVLT-II SF) Forced Choice, Trails B:A, and Delis-Kaplan Executive Function System (DKEFS) Letter and Category Fluency. RESULTS Adequate specificity (i.e., ≥90%) was achieved at modified cut-scores for all embedded PVTs across the MCI and dementia groups. Trails B:A demonstrated near-perfect specificity at its traditional cut-score (Trails B:A < 1.5). RDS ≤ 5 and CVLT-II SF Forced Choice ≤ 7 led to <10% false-positive classification errors across the MCI and dementia groups. DKEFS Letter and Category Fluency achieved 90% specificity only at extremely low normative cut-scores. CONCLUSIONS RDS, Trails B:A, and CVLT-II SF Forced Choice are promising embedded PVTs in the context of dementia evaluations. DKEFS Letter and Category Fluency appear too sensitive to genuine neurocognitive decline and are therefore inappropriate PVTs in adults with MCI or dementia. Additional research into embedded PVT sensitivity (via known-groups or analogue designs) in MCI and dementia is needed.
Affiliation(s)
- Robert J Kanser
- Department of Physical Medicine and Rehabilitation, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Mental Health and Behavioral Science, James A. Haley Veterans' Hospital, Tampa, FL, USA
- Patrick M Logan
- Mental Health and Behavioral Science, James A. Haley Veterans' Hospital, Tampa, FL, USA
- Mental Health and Behavioral Science, Louisville VA Medical Center, Louisville, KY, USA
- Kayla A Steward
- Mental Health and Behavioral Science, James A. Haley Veterans' Hospital, Tampa, FL, USA
- Emily N Vanderbleek
- Mental Health and Behavioral Science, James A. Haley Veterans' Hospital, Tampa, FL, USA
- Joel E Kamper
- Mental Health and Behavioral Science, James A. Haley Veterans' Hospital, Tampa, FL, USA
10
Corriveau-Lecavalier N, Alden EC, Stricker NH, Machulda MM, Jones DT. Failed Performance on the Test of Memory Malingering and Misdiagnosis in Individuals with Early-Onset Dysexecutive Alzheimer's Disease. Arch Clin Neuropsychol 2022; 37:1199-1207. PMID: 35435228. PMCID: PMC9396449. DOI: 10.1093/arclin/acac016.
Abstract
OBJECTIVE Individuals with early-onset dysexecutive Alzheimer's disease (dAD) have high rates of failure on performance validity testing (PVT), which can lead to symptom misinterpretation and misdiagnosis. METHOD The aim of this retrospective study was to evaluate rates of failure on a common PVT, the Test of Memory Malingering (TOMM), in a sample of clinical patients with biomarker-confirmed early-onset dAD who completed neuropsychological testing. RESULTS We identified seventeen patients with an average age of symptom onset of 52.25 years. Nearly fifty percent of patients performed below recommended cut-offs on Trials 1 and 2 of the TOMM. Four of six patients who completed outside neuropsychological testing were misdiagnosed with alternative etiologies to explain their symptomatology, and two of these patients' performances were deemed unreliable based on the TOMM. CONCLUSIONS Low scores on the TOMM should be interpreted in light of contextual and, optimally, biological information, and do not necessarily rule out a neurodegenerative etiology.
Affiliation(s)
- Nick Corriveau-Lecavalier
- Corresponding author: 200 First Street S.W., Rochester, MN 55905, USA
- Eva C Alden
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Nikki H Stricker
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Mary M Machulda
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- David T Jones
- Department of Neurology, Mayo Clinic, Rochester, MN, USA
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
11
Sanborn V, Lace J, Gunstad J, Galioto R. Considerations regarding noncredible performance in the neuropsychological assessment of patients with multiple sclerosis: A case series. Appl Neuropsychol Adult 2021; 30:458-467. PMID: 34514920. DOI: 10.1080/23279095.2021.1971229.
Abstract
Determining the validity of data during clinical neuropsychological assessment is crucial for proper interpretation, and extensive literature has emphasized myriad methods of doing so in diverse samples. However, little research has considered noncredible presentation in persons with multiple sclerosis (pwMS). PwMS often experience one or more factors known to impact the validity of data, including major neurocognitive impairment, psychological distress/psychogenic interference, and secondary gain. This case series aimed to illustrate the potential relationships between these factors and performance validity testing in pwMS. Six cases involving at least one of the above-stated factors were identified from an IRB-approved database of pwMS referred for neuropsychological assessment at a large academic medical center. Backgrounds, neuropsychological test data, and clinical considerations for each were reviewed. Interestingly, no pwMS diagnosed with major neurocognitive impairment was found to have noncredible performance, and no patient showed noncredible performance in the absence of notable psychological distress. Given the variability of noncredible performance and the multiplicity of factors affecting performance validity in pwMS, clinicians are strongly encouraged to use psychometrically appropriate methods for evaluating the validity of cognitive data in pwMS. Additional research aiming to elucidate the base rates of, mechanisms behind, and methods for assessing noncredible performance in pwMS is imperative.
Affiliation(s)
- John Lace
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- John Gunstad
- Psychological Sciences, Kent State University, Kent, OH, USA
- Brain Health Research Institute, Kent State University, Kent, OH, USA
- Rachel Galioto
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- Cleveland Clinic, Mellen Center for Multiple Sclerosis, Cleveland, OH, USA
12
Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Performance validity assessment using response time on the Warrington Recognition Memory Test. Clin Neuropsychol 2021; 35:1154-1173. PMID: 32068486. DOI: 10.1080/13854046.2020.1716997.
Abstract
OBJECTIVE The present study tested the incremental utility of response time (RT) on the Warrington Recognition Memory Test - Words (RMT-W) in classifying bona fide versus feigned TBI. METHOD Participants were 173 adults: 55 with moderate to severe TBI, 69 healthy comparisons (HC) instructed to perform their best, and 49 healthy adults coached to simulate TBI (SIM). Participants completed a computerized version of the RMT-W in the context of a comprehensive neuropsychological battery. Groups were compared on RT indices, including mean RT (overall, correct trials, incorrect trials) and RT variability, as well as the traditional RMT-W accuracy score. RESULTS Several RT indices differed significantly across groups, although RMT-W accuracy predicted group membership more strongly than any individual RT index. The SIM group showed longer average RT than both the TBI and HC groups. RT variability and RT for incorrect trials distinguished SIM from HC but not SIM from TBI; in general, results for SIM-TBI comparisons were weaker than those for SIM-HC comparisons. For SIM-HC comparisons, classification accuracy was excellent for all multivariable models incorporating RMT-W accuracy with one of the RT indices; for SIM-TBI comparisons, multivariable models ranged from acceptable to excellent discriminability. In addition to mean RT and RT on correct trials, the ratio of RT on correct items to RT on incorrect items showed incremental predictive value over accuracy. CONCLUSION Findings add to the growing body of research supporting the value of combining RT with PVTs in discriminating between verified and feigned TBI. The diagnostic accuracy of the RMT-W can be improved by incorporating RT.
Affiliation(s)
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, MI, USA
- Jesse R Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
13
Nayar K, Ventura LM, DeDios-Stern S, Oh A, Soble JR. The Impact of Learning and Memory on Performance Validity Tests in a Mixed Clinical Pediatric Population. Arch Clin Neuropsychol 2021; 37:50-62. PMID: 34050354. DOI: 10.1093/arclin/acab040.
Abstract
OBJECTIVE This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD Data from 252 consecutive clinical pediatric cases (Mage = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., the California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between the verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the nonimpaired group. CONCLUSIONS Findings highlight the utility of including PVTs as standard practice for pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed the other PVTs even in this diverse clinical sample with and without learning/memory impairment. In contrast, the Digit Span indices appear best suited for use in the presence of visuospatial (but not verbal) learning/memory concerns. Finally, the ChAMP's embedded validity measure was most strongly impacted by learning/memory performance.
Affiliation(s)
- Kritika Nayar
- Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
14
Sweet JJ, Heilbronner RL, Morgan JE, Larrabee GJ, Rohling ML, Boone KB, Kirkwood MW, Schroeder RW, Suhr JA. American Academy of Clinical Neuropsychology (AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN consensus conference statement on neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2021; 35:1053-1106. PMID: 33823750. DOI: 10.1080/13854046.2021.1896036.
Abstract
Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually-growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.
Affiliation(s)
- Jerry J Sweet
- Department of Psychiatry & Behavioral Sciences, NorthShore University HealthSystem, Evanston, IL, USA
- Martin L Rohling
- Psychology Department, University of South Alabama, Mobile, AL, USA
- Kyle B Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Michael W Kirkwood
- Department of Physical Medicine & Rehabilitation, University of Colorado School of Medicine and Children's Hospital Colorado, Aurora, CO, USA
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine, Wichita, KS, USA
- Julie A Suhr
- Psychology Department, Ohio University, Athens, OH, USA
15
McWhirter L, Ritchie CW, Stone J, Carson A. Performance validity test failure in clinical populations: a systematic review. J Neurol Neurosurg Psychiatry 2020; 91:945-952. PMID: 32651247. DOI: 10.1136/jnnp-2020-323776.
Abstract
Performance validity tests (PVTs) are widely used in attempts to quantify effort and/or detect negative response bias during neuropsychological testing. However, it can be challenging to interpret the meaning of poor PVT performance in a clinical context. Compensation-seeking populations predominate in the PVT literature. We aimed to establish base rates of PVT failure in clinical populations without known external motivation to underperform. We searched MEDLINE, EMBASE and PsycINFO for studies reporting PVT failure rates in adults with defined clinical diagnoses, excluding studies of active or veteran military personnel, forensic populations or studies of participants known to be litigating or seeking disability benefits. Results were summarised by diagnostic group and implications discussed. Our review identified 69 studies, and 45 different PVTs or indices, in clinical populations with intellectual disability, degenerative brain disease, brain injury, psychiatric disorders, functional disorders and epilepsy. Various pass/fail cut-off scores were described. PVT failure was common in all clinical groups described, with failure rates for some groups and tests exceeding 25%. PVT failure is common across a range of clinical conditions, even in the absence of obvious incentive to underperform. Failure rates are no higher in functional disorders than in other clinical conditions. As PVT failure indicates invalidity of other attempted neuropsychological tests, the finding of frequent and unexpected failure in a range of clinical conditions raises important questions about the degree of objectivity afforded to neuropsychological tests in clinical practice and research.
Affiliation(s)
- Laura McWhirter
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Craig W Ritchie
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Jon Stone
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Alan Carson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
16
A Meta-Analysis of Neuropsychological Effort Test Performance in Psychotic Disorders. Neuropsychol Rev 2020; 30:407-424. PMID: 32766940. DOI: 10.1007/s11065-020-09448-2.
Abstract
Psychotic disorders are characterized by a generalized neurocognitive deficit (i.e., performance 1.5 SD below controls across neuropsychological domains with no specific profile of differential deficits). A motivational account of the generalized neurocognitive deficit has been proposed, which attributes poor neuropsychological testing performance to low effort. However, findings are inconsistent regarding effort test failure rate in individuals with psychotic disorders across studies (0-72%), and moderators are unclear, making it difficult to know whether the motivational explanation is viable. To address these issues, a meta-analysis was performed on data from 2205 individuals with psychotic disorders across 19 studies with 24 independent effects. Effort failure rate was examined along with moderators of effort test type, forensic status, IQ, positive symptoms, negative symptoms, diagnosis, age, gender, education, and antipsychotic use. The pooled weighted effort test failure rate was 18% across studies and there was a moderate pooled association between effort failure rate and global neurocognitive performance (r = .57). IQ and education significantly moderated failure rate. Collectively, these findings suggest that a nontrivial proportion of individuals with a psychotic disorder fail effort testing, and failure rate is associated with global neuropsychological impairment. However, given that effort tests are not immune to the effects of IQ in psychotic disorders, these results cannot attest to the viability of the motivational account of the generalized neurocognitive deficit. Furthermore, the significant moderating effect of IQ and education on effort test performance suggests that effort tests have questionable validity in this population and should be interpreted with caution.
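For readers unfamiliar with the pooling step behind a "pooled weighted failure rate," here is a minimal sketch of a random-effects (DerSimonian-Laird) pooled proportion on the logit scale; the eight studies below are invented to illustrate the arithmetic, not the meta-analysis's data:

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of failure-rate
# proportions on the logit scale. Study counts are made-up examples.
import numpy as np

# (failures, sample size) per study -- illustrative values only
studies = [(10, 80), (5, 60), (30, 120), (0, 40), (18, 150), (9, 45), (12, 200), (25, 90)]
fails = np.array([f + 0.5 for f, n in studies])   # 0.5 continuity correction
ns = np.array([n + 1.0 for f, n in studies])

p = fails / ns
logit = np.log(p / (1 - p))
var = 1 / fails + 1 / (ns - fails)                # variance of logit(p)

w = 1 / var                                       # fixed-effect weights
q = np.sum(w * (logit - np.sum(w * logit) / w.sum()) ** 2)
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - (len(studies) - 1)) / c)     # between-study variance

w_re = 1 / (var + tau2)                           # random-effects weights
pooled_logit = np.sum(w_re * logit) / w_re.sum()
pooled = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled failure rate: {pooled:.1%} (tau^2 = {tau2:.3f})")
```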
17
Abstract
OBJECTIVE Performance and symptom validity tests (PVTs and SVTs) measure the credibility of assessment results. Cognitive impairment and apathy potentially interfere with validity test performance and may thus lead to an incorrect (i.e., false-positive) classification of a patient's scores as non-credible. This study examined the false-positive rate of three validity tests in patients with cognitive impairment and apathy. METHODS A cross-sectional, comparative study was performed in 56 patients with dementia, 41 patients with mild cognitive impairment, and 41 patients with Parkinson's disease. Two PVTs - the Test of Memory Malingering (TOMM) and the Dot Counting Test (DCT) - and one SVT - the Structured Inventory of Malingered Symptomatology (SIMS) - were administered. Apathy was measured with the Apathy Evaluation Scale, and severity of cognitive impairment with the Mini Mental State Examination. RESULTS The failure rate was 13.7% for the TOMM, 23.8% for the DCT, and 12.5% for the SIMS. Of the patients with data on all three tests (n = 105), 13.5% failed one test, 2.9% failed two tests, and none failed all three. Failing the PVTs was associated with cognitive impairment, but not with apathy; failing the SVT was related to apathy, but not to cognitive impairment. CONCLUSIONS In patients with cognitive impairment or apathy, failing a single validity test is not uncommon, and validity tests are differentially sensitive to cognitive impairment and apathy. However, the rule that at least two validity tests must be failed before performance is judged non-credible appeared to ensure a high percentage of correct classifications of credibility.
18
Czornik M, Merten T, Lehrner J. Symptom and performance validation in patients with subjective cognitive decline and mild cognitive impairment. Appl Neuropsychol Adult 2019; 28:269-281. PMID: 31267787. DOI: 10.1080/23279095.2019.1628761.
Abstract
Nonauthentic symptom claims (overreporting) and invalid test results (underperformance) can regularly be expected in a forensic context, but may also occur in clinical referrals. While the applicability of symptom and performance validity tests in samples of dementia patients is well studied, the same is not true for patients with subjective cognitive decline (SCD) and mild cognitive impairment (MCI). A sample of 54 memory-clinic outpatients with evidence of SCD or MCI was studied. We evaluated the rate of positive results on three validity measures. A total of 7.4% of the patients showed probable negative response bias on the Word Memory Test. The rate of positive results on the Structured Inventory of Malingered Symptomatology was 14.8%, while only one participant (1.9%) scored positive on the Self-Report Symptom Inventory using the standard cutoff. The two questionnaires were moderately correlated at .67. In a combined analysis of all results, five of the patients (9.3%) were judged to show evidence of probable negative response bias (or probably feigned neurocognitive impairment). In the current study, a relatively small but nontrivial rate of probable response distortion was found in a memory-clinic sample. However, it remains a methodological challenge for this kind of research to reliably distinguish between false-positive and correct-positive classifications in clinical patient groups.
Affiliation(s)
- Manuel Czornik
- Department of Neurology, Medical University of Vienna, Vienna, Austria
- Department of Psychiatry and Psychotherapy, University of Tuebingen, Tuebingen, Germany
- Institute of Medical Psychology and Behavioural Neurobiology, University of Tuebingen, Tuebingen, Germany
- Thomas Merten
- Department of Neurology, Vivantes Klinikum im Friedrichshain, Berlin, Germany
- Johann Lehrner
- Department of Neurology, Medical University of Vienna, Vienna, Austria
19
Orthey R, Vrij A, Meijer E, Leal S, Blank H. Eliciting Response Bias Within Forced Choice Tests to Detect Random Responders. Sci Rep 2019; 9:8724. PMID: 31217488. PMCID: PMC6584661. DOI: 10.1038/s41598-019-45292-y.
Abstract
The Forced Choice Test (FCT) can be used to detect malingered loss of memory or sensory deficits. In this test, examinees are presented with two stimuli, one correct and one incorrect, regarding a specific event or a perceptual discrimination task. The task is to select the correct answer alternative, or to guess if it is unknown. Genuine impairment is associated with test scores that fall within chance performance, whereas malingered impairment is associated with purposeful avoidance of correct information, resulting in below-chance performance. However, a substantial proportion of malingerers intentionally randomize their responses and are missed by the test. Here we examine whether a runs test and a within-test response bias have diagnostic value for detecting this intentional randomization. We instructed 73 examinees to malinger red/green blindness and subjected them to an FCT. For half of the examinees we manipulated the ambiguity between answer alternatives over the test trials in order to elicit a response bias. Compared to a sample of 10,000 cases of computer-generated genuine performance, both the runs test and the response bias detected malingered performance better than chance.
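A minimal sketch of the two detection checks examined above, applied to a made-up 50-trial forced-choice record: a one-sided binomial test for below-chance accuracy and a Wald-Wolfowitz runs test for non-random alternation of correct and incorrect answers. Data are illustrative:

```python
# Minimal sketch: below-chance binomial test and a Wald-Wolfowitz runs test
# on a simulated forced-choice record (1 = correct trial).
import numpy as np
from scipy.stats import binomtest, norm

rng = np.random.default_rng(7)
answers = rng.binomial(1, 0.5, 50)        # replace with a real trial record
n, n1 = answers.size, int(answers.sum())
n0 = n - n1

# Below-chance performance: P(X <= observed correct) under p = .5.
p_below = binomtest(n1, n, 0.5, alternative="less").pvalue
print("binomial p (below chance):", round(p_below, 3))

# Runs test: too few or too many runs suggests deliberate, non-random responding.
runs = 1 + int(np.sum(answers[1:] != answers[:-1]))
mu = 1 + 2 * n1 * n0 / n
sigma = np.sqrt(2 * n1 * n0 * (2 * n1 * n0 - n) / (n**2 * (n - 1)))
z = (runs - mu) / sigma
print(f"runs = {runs}, expected = {mu:.1f}, z = {z:.2f}, p = {2 * norm.sf(abs(z)):.3f}")
```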
Affiliation(s)
- Robin Orthey
- Department of Psychology, University of Portsmouth, PO1 2DY, Portsmouth, United Kingdom
- Faculty of Psychology & Neuroscience, Maastricht University, 622MD, Maastricht, The Netherlands
- Aldert Vrij
- Department of Psychology, University of Portsmouth, PO1 2DY, Portsmouth, United Kingdom
- Ewout Meijer
- Faculty of Psychology & Neuroscience, Maastricht University, 622MD, Maastricht, The Netherlands
- Sharon Leal
- Department of Psychology, University of Portsmouth, PO1 2DY, Portsmouth, United Kingdom
- Hartmut Blank
- Department of Psychology, University of Portsmouth, PO1 2DY, Portsmouth, United Kingdom
20
Bodner T, Merten T, Benke T. Performance validity measures in clinical patients with aphasia. J Clin Exp Neuropsychol 2019; 41:476-483. DOI: 10.1080/13803395.2019.1579783.
Affiliation(s)
- Thomas Bodner
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Thomas Merten
- Department of Neurology, Vivantes Klinikum im Friedrichshain, Berlin, Germany
- Thomas Benke
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria