1. Buchholz AS, Reckess GZ, Del Bene VA, Testa SM, Crawford JL, Schretlen DJ. Within-Person Test Score Distributions: How Typical Is "Normal"? Assessment 2024;31:1089-1099. [PMID: 37876148; DOI: 10.1177/10731911231201159]
Abstract
We evaluated within-person variability across a cognitive test battery by analyzing the shape of the distribution of each individual's scores within a battery of tests. We hypothesized that most healthy adults would produce test scores that are normally distributed around their own personal battery-wide, within-person (wp) mean. Using cross-sectional data from 327 neurologically healthy adults, we computed each person's mean, standard deviation, skew, and kurtosis for 30 neuropsychological measures. Raw scores were converted to T-scores using three degrees of calibration: (a) none, (b) age, and (c) age, sex, race, education, and estimated premorbid IQ. Regardless of calibration, no participant showed abnormal within-person skew (wpskew) and only 10 (3.1%) to 16 (4.9%) showed wpkurtosis greater than 2. If replicated in other samples and measures, these findings could illuminate how healthy individuals are endowed with different cognitive abilities and provide the foundation for a new method of inference in clinical neuropsychology.
Affiliation(s)
- Gila Z Reckess
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Victor A Del Bene
- The University of Alabama at Birmingham Heersink School of Medicine, USA
- S Marc Testa
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Sandra and Malcolm Berman Brain & Spine Institute, Baltimore, MD, USA
2. Crișan I, Ali S, Cutler L, Matei A, Avram L, Erdodi LA. Geographic variability in limited English proficiency: A cross-cultural study of cognitive profiles. J Int Neuropsychol Soc 2023;29:972-983. [PMID: 37246143; DOI: 10.1017/s1355617723000280]
Abstract
OBJECTIVE This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
- Alina Matei
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Luisa Avram
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
3. Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023;30:2476-2490. [PMID: 36752050; DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
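The cutoff statistics reported in abstracts like this one (sensitivity and specificity of a "score ≤ cutoff" failure rule against a criterion group) come from a simple 2×2 confusion matrix. A minimal sketch; the function name, scores, and criterion labels below are hypothetical illustrations, not the study's data:

```python
def cutoff_accuracy(scores, non_credible, cutoff):
    """Sensitivity/specificity of a 'score <= cutoff' failure rule.

    scores: test scores; non_credible: criterion-group labels
    (True = psychometrically defined non-credible). Hypothetical data.
    """
    fails = [s <= cutoff for s in scores]
    tp = sum(f and c for f, c in zip(fails, non_credible))          # true positives
    tn = sum(not f and not c for f, c in zip(fails, non_credible))  # true negatives
    sensitivity = tp / sum(non_credible)
    specificity = tn / (len(scores) - sum(non_credible))
    return sensitivity, specificity

# Hypothetical illustration: 4 non-credible and 6 credible examinees
scores       = [40, 43, 46, 41, 48, 50, 49, 47, 50, 44]
non_credible = [True, True, True, True,
                False, False, False, False, False, False]
print(cutoff_accuracy(scores, non_credible, cutoff=45))
```

Raising or lowering `cutoff` trades sensitivity against specificity, which is exactly the tension the abstract describes across the ≤42 to ≤47 range.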
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
4. Crișan I, Sava FA. Validity assessment in Eastern Europe: cross-validation of the Dot Counting Test and MODEMM against the TOMM-1 and Rey-15 in a Romanian mixed clinical sample. Arch Clin Neuropsychol 2023:acad085. [PMID: 37961918; DOI: 10.1093/arclin/acad085]
Abstract
OBJECTIVE This study investigated performance validity in the understudied Romanian clinical population by exploring the classification accuracy of the Dot Counting Test (DCT) and the first Romanian performance validity test (PVT), the Memory of Objects and Digits and Evaluation of Memory Malingering (MODEMM), in a heterogeneous clinical sample. METHODS We evaluated 54 outpatients (26 females; age: M = 62.02, SD = 12.3; education: M = 2.41, SD = 2.82) with the Test of Memory Malingering 1 (TOMM-1), the Rey Fifteen-Item Test (Rey-15; free recall and recognition trials), the DCT, the MODEMM, and the MMSE/MoCA as part of their neuropsychological assessment. Accuracy parameters and base rates of failure were computed for the DCT and MODEMM indicators against the TOMM-1 and Rey-15. Two patient groups were constructed according to psychometrically defined credible/noncredible performance (i.e., passing/failing both the TOMM-1 and the Rey-15). RESULTS As in other cultures, a cutoff of ≥18 on the DCT E-score produced the best combination of sensitivity (0.50-0.57) and specificity (≥0.90). MODEMM indicators based on recognition accuracy, inconsistencies, and inclusion false positives generated 0.75-0.86 sensitivities at ≥0.90 specificities. Multivariable models of MODEMM indicators reached perfect sensitivities at ≥0.90 specificities against two PVTs. Patients who failed the TOMM-1 and Rey-15 were significantly more likely to fail the DCT and MODEMM than patients who passed both PVTs. CONCLUSIONS Our results offer proof of concept for the DCT's cross-cultural validity and the applicability of the MODEMM to Romanian clinical examinees, further recommending the use of heterogeneous validity indicators in clinical assessments.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
5. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023;37:617-649. [PMID: 35946813; DOI: 10.1080/13854046.2022.2073914]
Abstract
OBJECTIVE The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid-before-impaired" paradox.
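The multivariate rule quoted in this abstract (≥3 of the four subtest ACSSs at ≤6, or ≥2 at ≤5) can be expressed directly in code. This sketch uses a hypothetical helper name and made-up score profiles, not the study's data:

```python
def dkefs_stroop_multivariate_fail(acss):
    """Flag a profile under the multivariate rule described above:
    >=3 of the four subtest ACSSs at <=6, or >=2 at <=5.
    'acss' is a list of four age-corrected scaled scores (sketch only).
    """
    at_or_below_6 = sum(s <= 6 for s in acss)
    at_or_below_5 = sum(s <= 5 for s in acss)
    return at_or_below_6 >= 3 or at_or_below_5 >= 2

print(dkefs_stroop_multivariate_fail([6, 6, 6, 9]))   # three subtests at <=6: flagged
print(dkefs_stroop_multivariate_fail([5, 9, 10, 8]))  # one isolated low score: not flagged
```

Requiring multiple concurrent failures is what keeps specificity high: a single low subtest score, which genuine impairment can easily produce, is never sufficient to flag the profile.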
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6. Tyson BT, Pyne SR, Crisan I, Calamia M, Holcomb M, Giromini L, Erdodi LA. Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury. Appl Neuropsychol Adult 2023:1-10. [PMID: 36881969; DOI: 10.1080/23279095.2023.2179400]
Abstract
OBJECTIVE This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale-Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). METHOD The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). RESULTS The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33-.87) and specificity (.92-.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91-.92) and relatively sensitive (.48-.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity, but lower sensitivity (.25-.42). There was no difference in failure rate as a function of TBI severity. CONCLUSIONS In addition to LM, the VR and VPA subtests can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.
Affiliation(s)
- Brad T Tyson
- Evergreen Neuroscience Institute, Evergreen Health Medical Center, Kirkland, WA, USA
- Iulia Crisan
- Department of Psychology, West University of Timisoara, Timisoara, Romania
- Matthew Calamia
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Laszlo A Erdodi
- Jefferson Neurobehavioral Group, New Orleans, LA, USA
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
7. Fearn-Smith EM, Scanlan JN, Hancock N. Exploring and Mapping Screening Tools for Cognitive Impairment and Traumatic Brain Injury in the Homelessness Context: A Scoping Review. Int J Environ Res Public Health 2023;20:3440. [PMID: 36834133; PMCID: PMC9966671; DOI: 10.3390/ijerph20043440]
Abstract
Cognitive impairment is common amongst people experiencing homelessness, yet cognitive screening and the collection of brain injury history rarely feature in homelessness service delivery practice. The purpose of this research was to scope and map strategies for screening for the potential presence of cognitive impairment or brain injury amongst people experiencing homelessness, and to identify instruments that could be administered by homelessness service staff to facilitate referral for formal diagnosis and appropriate support. A search was conducted across five databases, followed by a hand search of relevant systematic reviews. A total of 108 publications were included for analysis. The literature described 151 instruments for measuring cognitive function and 8 instruments screening for history of brain injury. Tools described in more than two publications, screening for the potential presence of cognitive impairment or a history of brain injury, were included for analysis. Of those regularly described, only three instruments measuring cognitive function and three measuring history of brain injury (all of which focused on traumatic brain injury (TBI)) may be administered by non-specialist assessors. The Trail Making Test (TMT) and the Ohio State University Traumatic Brain Injury Identification Method (OSU TBI-ID) are both potentially viable tools for supporting the identification of a likely cognitive impairment or TBI history in the homelessness service context. Further population-specific and implementation science research is required to maximise the potential for successful practice application.
Affiliation(s)
- Erin M. Fearn-Smith
- Faculty of Medicine and Health, Centre for Disability Research and Policy, The University of Sydney, Camperdown, NSW 2050, Australia
8. Poptsi E, Moraitou D, Tsardoulias E, Symeonidis AL, Papaliagkas V, Tsolaki M. R4Alz-Revised: A Tool Able to Strongly Discriminate 'Subjective Cognitive Decline' from Healthy Cognition and 'Minor Neurocognitive Disorder'. Diagnostics (Basel) 2023;13:338. [PMID: 36766444; PMCID: PMC9914647; DOI: 10.3390/diagnostics13030338]
Abstract
BACKGROUND Diagnosing minor neurocognitive disorders in the clinical course of dementia, before clinical symptoms appear, is the holy grail of neuropsychological research. The R4Alz battery is a novel, validated tool designed to assess cognitive control in people with minor cognitive disorders. The aim of the current study was to extend the R4Alz battery (R4Alz-R) by designing and administering additional episodic memory and cognitive control tasks, towards improving the battery's overall discriminant validity. METHODS The study comprised 80 people: (a) 20 healthy adults (HC), (b) 29 people with Subjective Cognitive Decline (SCD), and (c) 31 people with Mild Cognitive Impairment (MCI). The groups differed in age and educational level. RESULTS Updating, inhibition, attention switching, and cognitive flexibility tasks discriminated SCD from HC (p ≤ 0.003). Updating, switching, cognitive flexibility, and episodic memory tasks discriminated SCD from MCI (p ≤ 0.001). All of the R4Alz-R's tasks discriminated HC from MCI (p ≤ 0.001). The R4Alz-R was free of age and educational level effects. The battery discriminated SCD from HC and HC from MCI perfectly (100% sensitivity/95% specificity and 100% sensitivity/90% specificity, respectively), whilst it discriminated SCD from MCI excellently (90.3% sensitivity/82.8% specificity). CONCLUSION SCD seems to be a stage of neurodegeneration, since it can be objectively evaluated via the R4Alz-R battery, which appears to be a useful tool for early diagnosis.
Affiliation(s)
- Eleni Poptsi
- School of Psychology, Faculty of Philosophy, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation, Aristotle University of Thessaloniki (CIRI—AUTh), 54124 Thessaloniki, Greece
- Day Center “Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD)”, 54643 Thessaloniki, Greece
- Despina Moraitou
- School of Psychology, Faculty of Philosophy, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation, Aristotle University of Thessaloniki (CIRI—AUTh), 54124 Thessaloniki, Greece
- Day Center “Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD)”, 54643 Thessaloniki, Greece
- Emmanouil Tsardoulias
- School of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece
- Andreas L. Symeonidis
- School of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece
- Vasileios Papaliagkas
- Department of Biomedical Sciences, International Hellenic University, 57001 Thessaloniki, Greece
- Magdalini Tsolaki
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation, Aristotle University of Thessaloniki (CIRI—AUTh), 54124 Thessaloniki, Greece
- Day Center “Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD)”, 54643 Thessaloniki, Greece
- 1st Department of Neurology, Medical School, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece
9. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022;35:155-168. [PMID: 35507449; DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure of both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
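The z-score and T-score metrics this abstract compares are linked by a fixed linear rescaling (T = 10z + 50). A minimal sketch; the normative mean and SD below are hypothetical placeholders, not the BNT-15's actual norms:

```python
def to_z(raw, norm_mean, norm_sd):
    # Standardize a raw score against (hypothetical) normative values.
    return (raw - norm_mean) / norm_sd

def z_to_t(z):
    # Conventional T-score metric: mean 50, SD 10.
    return 50 + 10 * z

# Hypothetical norms (mean 13, SD 2), for illustration only:
z = to_z(11, norm_mean=13, norm_sd=2)
print(z, z_to_t(z))  # -1.0 40.0
```

Because the mapping is linear, classification ranges defined on z-scores translate one-to-one into T-score bands, which is why the two metrics can correlate so strongly.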
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
10. Ausloos-Lozano JE, Bing-Canar H, Khan H, Singh PG, Wisinger AM, Rauch AA, Ogram Buckley CM, Petry LG, Jennette KJ, Soble JR, Resch ZJ. Assessing performance validity during attention-deficit/hyperactivity disorder evaluations: Cross-validation of non-memory embedded validity indicators. Dev Neuropsychol 2022;47:247-257. [PMID: 35787068; DOI: 10.1080/87565641.2022.2096889]
Abstract
Embedded performance validity tests (PVTs) are key components of neuropsychological evaluations. However, most are memory-based and may be less useful in the assessment of attention-deficit/hyperactivity disorder (ADHD). Four non-memory-based validity indices derived from processing speed and executive functioning measures commonly included in ADHD evaluations, namely Verbal Fluency (VF) and the Trail Making Test (TMT), were cross-validated using the Rey 15-Item Test (RFIT) Recall and Recall/Recognition as memory-based comparison measures. This consecutive case series included data from 416 demographically diverse adults who underwent outpatient neuropsychological evaluation for ADHD. Validity classifications were established with ≤1 failure across five independent criterion PVTs indicating valid performance (374 valid performers/42 invalid performers). Among the statistically significant validity indicators, TMT-A and TMT-B T-scores (AUCs = .707-.723) had acceptable classification accuracy and sensitivities ranging from 29% to 36% (≥89% specificity). RFIT Recall/Recognition produced results similar to the TMT-B T-score, with 42% sensitivity/90% specificity, but lower classification accuracy. In evaluating adult ADHD, the VF and TMT embedded PVTs demonstrated sensitivity and specificity values comparable to those found in other clinical populations but required alternate cut-scores. Results also support the use of RFIT Recall/Recognition over the standard RFIT Recall as a PVT in adult ADHD evaluations.
Affiliation(s)
- Jenna E Ausloos-Lozano
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Hanaan Bing-Canar
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Humza Khan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Palak G Singh
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Amanda M Wisinger
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Andrew A Rauch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Caitlin M Ogram Buckley
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Luke G Petry
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
11. Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996; DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
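The general idea behind the EI model, as described above, is to recode each embedded PVT onto a small ordinal scale (0 for a pass, higher values for progressively more extreme failures) and sum the recodes into a single index with a three-way Pass/Borderline/Fail outcome. In this sketch the per-indicator cutoffs and the band boundaries are hypothetical placeholders, not the published EI-5 parameters:

```python
def ei_component(score, cutoffs):
    """Recode one embedded PVT onto an ordinal 0..n scale.

    'cutoffs' lists progressively more conservative cutoffs (hypothetical
    values); each cutoff the score falls at or below adds a point.
    """
    return sum(score <= c for c in cutoffs)

def ei_index(scores_with_cutoffs):
    # Sum the ordinal recodes across all embedded indicators.
    return sum(ei_component(s, c) for s, c in scores_with_cutoffs)

def classify(ei, borderline=2, fail=4):
    # Three-way outcome; the band boundaries are illustrative assumptions.
    if ei >= fail:
        return "Fail"
    return "Derline" if False else ("Borderline" if ei >= borderline else "Pass")

# Hypothetical five-indicator profile: (score, [cutoffs])
profile = [(44, [47, 45, 42]), (8, [7, 5]), (35, [35, 33]),
           (12, [11, 9]), (20, [18, 15])]
print(ei_index(profile), classify(ei_index(profile)))
```

Because the recodes preserve how far each score falls beyond its cutoff, the summed index carries continuous information about the extent of failure while the banding yields the categorical Pass/Borderline/Fail decision the abstract describes.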
12. Uiterwijk D, Stargatt R, Crowe SF. Objective Cognitive Outcomes and Subjective Emotional Sequelae in Litigating Adults with a Traumatic Brain Injury: The Impact of Performance and Symptom Validity Measures. Arch Clin Neuropsychol 2022;37:1662-1687. [PMID: 35704852; DOI: 10.1093/arclin/acac039]
Abstract
OBJECTIVE This study examined the relative contribution of performance and symptom validity in litigating adults with traumatic brain injury (TBI), as a function of TBI severity, and examined the relationship between self-reported emotional symptoms and cognitive tests scores while controlling for validity test performance. METHOD Participants underwent neuropsychological assessment between January 2012 and June 2021 in the context of compensation-seeking claims related to a TBI. All participants completed a cognitive test battery, the Personality Assessment Inventory (including symptom validity tests; SVTs), and multiple performance validity tests (PVTs). Data analyses included independent t-tests, one-way ANOVAs, correlation analyses, and hierarchical multiple regression. RESULTS A total of 370 participants were included. Atypical PVT and SVT performance were associated with poorer cognitive test performance and higher emotional symptom report, irrespective of TBI severity. PVTs and SVTs had an additive effect on cognitive test performance for uncomplicated mTBI, but less so for more severe TBI. The relationship between emotional symptoms and cognitive test performance diminished substantially when validity test performance was controlled, and validity test performance had a substantially larger impact than emotional symptoms on cognitive test performance. CONCLUSION Validity test performance has a significant impact on the neuropsychological profiles of people with TBI, irrespective of TBI severity, and plays a significant role in the relationship between emotional symptoms and cognitive test performance. Adequate validity testing should be incorporated into every neuropsychological assessment, and associations between emotional symptoms and cognitive outcomes that do not consider validity testing should be interpreted with extreme caution.
Affiliation(s)
- Daniel Uiterwijk
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F Crowe
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
13. Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022;29:351-363. [PMID: 32449371; DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test-Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
14
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
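The cutoff logic above (a T-score at or below the cutoff is flagged, then evaluated against criterion PVTs) can be sketched as follows; the T-scores and criterion labels are made up for illustration, not drawn from the study.

```python
# Sketch of evaluating an embedded validity cutoff against a criterion PVT.
# Sensitivity = proportion of criterion-invalid cases flagged;
# specificity = proportion of criterion-valid cases not flagged.

def sens_spec(t_scores, criterion_invalid, cutoff):
    flags = [t <= cutoff for t in t_scores]
    n_invalid = sum(criterion_invalid)
    n_valid = len(t_scores) - n_invalid
    tp = sum(f and c for f, c in zip(flags, criterion_invalid))
    tn = sum(not f and not c for f, c in zip(flags, criterion_invalid))
    return tp / n_invalid, tn / n_valid

# Invented data: 8 examinees' T-scores and criterion-PVT status
t_scores = [30, 33, 33, 40, 36, 50, 45, 38]
invalid = [True, True, False, False, True, False, False, False]

sens, spec = sens_spec(t_scores, invalid, cutoff=35)
print(round(sens, 2), round(spec, 2))  # 0.67 0.8
```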
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
15
Introducing the ImPACT-5: An Empirically Derived Multivariate Validity Composite. J Head Trauma Rehabil 2021; 36:103-113. [PMID: 32472832] [DOI: 10.1097/htr.0000000000000576]
Abstract
OBJECTIVE To create novel Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)-based embedded validity indicators (EVIs) and to compare their classification accuracy to that of 4 existing EVI-ImPACT. METHOD The ImPACT was administered to 82 male varsity football players during preseason baseline cognitive testing. The classification accuracy of existing EVI-ImPACT was compared with a newly developed index (ImPACT-5A and B). The ImPACT-5A represents the number of cutoffs failed on the 5 ImPACT composite scores at a liberal cutoff (0.85 specificity); the ImPACT-5B is the sum of failures on conservative cutoffs (≥0.90 specificity). RESULTS ImPACT-5A ≥1 was sensitive (0.81), but not specific (0.49) to invalid performance, consistent with EVI-ImPACT developed by independent researchers (0.68 sensitivity at 0.73-0.75 specificity). Conversely, ImPACT-5B ≥3 was highly specific (0.98), but insensitive (0.22), similar to the default EVI-ImPACT (0.04 sensitivity at 1.00 specificity). ImPACT-5A ≥3 or ImPACT-5B ≥2 met forensic standards of specificity (0.91-0.93) at 0.33 to 0.37 sensitivity. Also, the ImPACT-5s had the strongest linear relationship with clinically meaningful levels of invalid performance among existing EVI-ImPACT. CONCLUSIONS The ImPACT-5s were superior to the standard EVI-ImPACT and comparable to existing aftermarket EVI-ImPACT, with the flexibility to optimize the detection model for either sensitivity or specificity. The wide range of ImPACT-5 cutoffs allows for a more nuanced clinical interpretation.
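A minimal sketch of the failure-counting idea behind the ImPACT-5: each composite score is checked against its own cutoff, and the index is simply the number of cutoffs failed. The score names, cutoff values, and failure directions below are placeholders, not the published ImPACT-5 parameters.

```python
# Count cutoff failures across composite scores. For some composites low
# scores are suspect; for others (e.g., reaction time) high values are.
# All thresholds here are hypothetical.

def failure_count(scores, cutoffs):
    """cutoffs maps name -> (threshold, 'low'|'high'):
    'low'  = fail when score <= threshold,
    'high' = fail when score >= threshold."""
    n = 0
    for name, (threshold, direction) in cutoffs.items():
        s = scores[name]
        if direction == "low" and s <= threshold:
            n += 1
        elif direction == "high" and s >= threshold:
            n += 1
    return n

HYPOTHETICAL_CUTOFFS = {
    "verbal_memory": (75, "low"),
    "visual_memory": (65, "low"),
    "visual_motor_speed": (28.0, "low"),
    "reaction_time": (0.75, "high"),
    "impulse_control": (15, "high"),
}

examinee = {
    "verbal_memory": 70,
    "visual_memory": 80,
    "visual_motor_speed": 25.0,
    "reaction_time": 0.80,
    "impulse_control": 5,
}

print(failure_count(examinee, HYPOTHETICAL_CUTOFFS))  # 3 of 5 cutoffs failed
```

Raising the index threshold (e.g., ≥3 failures rather than ≥1) trades sensitivity for specificity, which is the trade-off the abstract describes between ImPACT-5A and ImPACT-5B.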
16
Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021; 46:327-346. [PMID: 34525856] [DOI: 10.1080/87565641.2021.1956499]
Abstract
OBJECTIVE: Replicate previous research on the Logical Memory Recognition trial (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. The classification accuracy of the LMRecog and of nine critical items (CR-9) was computed. RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn
- Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne
- Windsor Neuropsychology, Windsor, Canada
- Brad Tyson
- Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, USA
- Robert Roth
- Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
17
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; mean age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of a non-credible presentation and its clinical implications for neurorehabilitation.
18
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021; 11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- EvergreenHealth Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
19
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205] [PMCID: PMC8267081] [DOI: 10.1177/00315125211019704]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT-FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT-FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT-FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT-FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
20
Carvalho LDF, Reis A, Colombarolli MS, Pasian SR, Miguel FK, Erdodi LA, Viglione DJ, Giromini L. Discriminating Feigned from Credible PTSD Symptoms: a Validation of a Brazilian Version of the Inventory of Problems-29 (IOP-29). Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09403-3]
21
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021; 29:1425-1439. [PMID: 33631077] [DOI: 10.1080/23279095.2021.1884555]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCR-HVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCR-HVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, FCR-HVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCR-HVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60), to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCR-HVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCR-HVLT-R has the potential to function as a quick, inexpensive and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy score, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCR-HVLT-R can be endorsed for routine clinical application.
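The comparison above between an accuracy cutoff and a time-to-completion cutoff comes down to overall correct classification: the proportion of cases where a flag agrees with the criterion. The records below are fabricated to illustrate the computation, not drawn from the study.

```python
# Compare two flagging rules on the same fabricated records:
# an accuracy rule (forced-choice score <= 11 of 12) and a
# time rule (completion time >= 45 seconds).

def overall_correct(flags, criterion_invalid):
    agree = sum(f == c for f, c in zip(flags, criterion_invalid))
    return agree / len(flags)

# (forced-choice accuracy /12, time-to-completion in s, criterion-invalid)
records = [
    (12, 20, False), (12, 50, True), (11, 30, False),
    (10, 48, True), (12, 28, False), (9, 60, True),
]
accuracy_flags = [score <= 11 for score, _, _ in records]
time_flags = [t2c >= 45 for _, t2c, _ in records]
criterion = [invalid for _, _, invalid in records]

print(round(overall_correct(accuracy_flags, criterion), 2))  # 0.67
print(round(overall_correct(time_flags, criterion), 2))      # 1.0
```

In this toy set the time rule catches the slow-but-accurate record that the accuracy rule misses, mirroring how a time cutoff can attenuate the low ceiling of an accuracy score.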
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
22
Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021; 29:1221-1230. [PMID: 33403885] [DOI: 10.1080/23279095.2020.1864375]
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners were coached to avoid detection by not exaggerating (n = 90); half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% for the former, >80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% in the uncoached simulator group and 53.3% in the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees, and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
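The group separations above are reported as Cohen's d. For reference, the standard pooled-standard-deviation formula can be computed as below, with made-up scores standing in for the honest and feigning groups.

```python
# Cohen's d with pooled sample standard deviation; the two score lists
# are invented for illustration and are not the study's data.
from statistics import mean, variance

def cohens_d(a, b):
    """(mean(a) - mean(b)) / pooled SD, using sample variances."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Invented validity-scale scores for illustration
honest = [27, 28, 26, 29, 28]
feigning = [14, 12, 15, 13, 11]

print(round(cohens_d(honest, feigning), 2))  # a very large effect
```

Values like the d ≥ 4.90 reported above arise when, as in this toy example, the two groups' distributions are far apart relative to their within-group spread.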
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
23
Kosky KM, Lace JW, Austin TA, Seitz DJ, Clark B. The utility of the Wisconsin Card Sorting Test, 64-card version, to detect noncredible attention-deficit/hyperactivity disorder. Appl Neuropsychol Adult 2020; 29:1231-1241. [DOI: 10.1080/23279095.2020.1864633]
Affiliation(s)
- Karen M. Kosky
- Department of Health Psychology, University of Missouri, Columbia, MO, USA
- John W. Lace
- Department of Neurology, Cleveland Clinic, Cleveland, OH, USA
- Tara A. Austin
- University of Texas at Austin Dell Medical School, Austin, TX, USA
- Dylan J. Seitz
- Department of Neurology, Indiana University School of Medicine, Indianapolis, IN, USA
- Brook Clark
- Department of Health Psychology, University of Missouri, Columbia, MO, USA
24
Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020; 35:188-204. [PMID: 31696203] [DOI: 10.1093/arclin/acz032]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). METHOD Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. RESULTS Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number-Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). CONCLUSIONS In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment. 
BRIEF SUMMARY Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
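A minimal sketch of the two-tier decision rule described above: a protocol is flagged if it fails at least three of six EVIs at liberal cutoffs, or at least two at conservative cutoffs. The scaled scores and cutoff values below are hypothetical placeholders, not the published WAIS-IV cutoffs.

```python
# Two-tier multivariate EVI rule over six age-corrected scaled scores.
# A score at or below its cutoff fails that EVI; cutoffs are placeholders.

def n_failures(scores, cutoffs):
    return sum(s <= c for s, c in zip(scores, cutoffs))

def flag_non_credible(scores, liberal, conservative):
    """Fail >=3 EVIs at liberal cutoffs OR >=2 at conservative cutoffs."""
    return n_failures(scores, liberal) >= 3 or n_failures(scores, conservative) >= 2

LIBERAL = [6, 6, 6, 6, 6, 6]        # placeholder liberal cutoffs
CONSERVATIVE = [4, 4, 4, 4, 4, 4]   # placeholder conservative cutoffs

print(flag_non_credible([5, 7, 6, 9, 8, 10], LIBERAL, CONSERVATIVE))  # False
print(flag_non_credible([4, 5, 6, 9, 8, 10], LIBERAL, CONSERVATIVE))  # True
```

Requiring multiple failures is what stabilizes specificity: a single low subtest score no longer triggers the composite flag.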
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
25
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
26
Identifying Novel Embedded Performance Validity Test Formulas Within the Repeatable Battery for the Assessment of Neuropsychological Status: a Simulation Study. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09382-x]
27
Olla P, Rykulski N, Hurtubise JL, Bartol S, Foote R, Cutler L, Abeare K, McVinnie N, Sabelli AG, Hastings M, Erdodi LA. Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients. Appl Neuropsychol Adult 2019; 28:647-657. [PMID: 31790276] [DOI: 10.1080/23279095.2019.1681424]
Abstract
This observational study examined the acute cognitive effects of cannabis. We hypothesized that cognitive performance would be negatively affected by acute cannabis intoxication. Twenty-two medical cannabis patients from Southwestern Ontario completed the study. The majority (n = 13) were male. Mean age was 36.0 years, and mean level of education was 13.7 years. Participants were administered the same brief neurocognitive battery three times during a six-hour period: at baseline ("Baseline"), once after they consumed a 20% THC cannabis product ("THC"), and once again several hours later ("Recovery"). The average self-reported level of cannabis intoxication prior to the second assessment (i.e., during THC) was 5.1 out of 10. Contrary to expectations, performance on neuropsychological tests remained stable or even improved during the acute intoxication stage (THC; d: .49-.65, medium effect), and continued to increase during Recovery (d: .45-.77, medium-large effect). Interestingly, the failure rate on performance validity indicators increased during THC. Contrary to our hypothesis, there was no psychometric evidence for a decline in cognitive ability following THC intoxication. There are several possible explanations for this finding but, in the absence of a control group, no definitive conclusion can be reached at this time.
Affiliation(s)
- Nicholas Rykulski
- College of Human Medicine, Michigan State University, Lansing, MI, USA
- Stephen Bartol
- School of Medicine, Wayne State University, Detroit, MI, USA
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nora McVinnie
- Brain-Cognition-Neuroscience Program, University of Windsor, Windsor, ON, Canada
- Alana G Sabelli
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maurissa Hastings
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada