1
Basso MR, Tierney SM, Roper BL, Whiteside DM, Combs DR, Estevis E. A tale of two constructs: confirmatory factor analysis of performance and symptom validity tests. J Clin Exp Neuropsychol 2024; 46:840-847. [PMID: 39523656 DOI: 10.1080/13803395.2024.2425004]
Abstract
BACKGROUND Performance validity (PV) and symptom validity (SV) tests assess biased responding that impacts scores on neuropsychological tests. The extent to which PV and SV represent overlapping or unique constructs remains incompletely defined, especially among psychiatric patients in non-forensic settings. The current study investigated this question using confirmatory factor analysis. METHOD Eighty-two inpatients with mood disorders were administered the Word Memory Test (WMT), and its primary indices formed a latent PV variable. From the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), the Fake Bad Scale (FBS), Response Bias Scale (RBS), and Henry-Heilbronner Index (HHI) formed a latent SV variable. Two models of the relationship between PV and SV were compared. One freely estimated the shared variance between the SV and PV latent constructs. The other assumed the relationship between SV and PV was homogeneous, with covariance fixed to 1.0. RESULTS In the freely estimated model, covariance between PV and SV was -0.18, and model fit was excellent (CFI = 0.98; TLI = 0.96; SRMR = 0.08). In the fixed model, the RBS, HHI, and FBS had low loadings on the SV construct, and model fit was poor (CFI = 0.66; TLI = 0.43; SRMR = 0.42). CONCLUSIONS PV as indexed by the WMT and SV as measured by the MMPI-2 are not overlapping constructs among inpatients with mood disorders. These data imply that PV and SV represent distinct constructs in this population. Implications for practice are discussed.
Affiliation(s)
- Michael R Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Savanna M Tierney
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Brad L Roper
- Department of Veterans Affairs Medical Center, Memphis, TN, USA
- Dennis R Combs
- Department of Psychology, University of Texas at Tyler, Tyler, TX, USA
2
Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494 DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis. RESULTS The relationship between the PAI symptom overreporting SVTs, including Negative Impression Management (NIM), Malingering Index (MAL), and Cognitive Bias Scale (CBS), and the PVT varied by referral subgroup. Specifically, overreporting on the CBS, but not the NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all three predicted PVT performance in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggest that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
3
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975 PMCID: PMC8474121 DOI: 10.1080/13854046.2021.1900400]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, the Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived either by summing existing scales and dividing by the number of scales summed, or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between the PASS and FAIL groups, and all demonstrated acceptable classification accuracy (CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
4
Ord AS, Shura RD, Sansone AR, Martindale SL, Taber KH, Rowland JA. Performance validity and symptom validity tests: Are they measuring different constructs? Neuropsychology 2021; 35:241-251. [PMID: 33829824 DOI: 10.1037/neu0000722]
Abstract
OBJECTIVE To evaluate the relationships among performance validity, symptom validity, symptom self-report, and objective cognitive testing. METHOD Combat Veterans (N = 338) completed a neurocognitive assessment battery and several self-report symptom measures assessing depression, posttraumatic stress disorder (PTSD) symptoms, sleep quality, pain interference, and neurobehavioral complaints. All participants also completed two performance validity tests (PVTs) and one stand-alone symptom validity test (SVT), along with two embedded SVTs. RESULTS An exploratory factor analysis revealed a three-factor solution: performance validity, cognitive performance, and symptom report (SVTs loaded on the third factor). Results of t tests demonstrated that participants who failed PVTs reported significantly more severe symptoms and performed significantly worse on most measures of neurocognitive functioning than those who passed. Participants who failed the stand-alone SVT also reported significantly more severe symptomatology on all symptom report measures, but the pattern of cognitive performance differed based on the selected SVT cutoff. Multiple linear regressions revealed that both SVT and PVT failure explained unique variance in symptom report, but only PVT failure significantly predicted cognitive performance. CONCLUSIONS Performance and symptom validity tests measure distinct but related constructs. SVTs and PVTs are significantly related to both cognitive performance and symptom report; however, the relationship between symptom validity and symptom report is strongest. SVTs are also differentially related to cognitive performance and symptom report depending on the cutoff score used.
Affiliation(s)
- Anna S Ord
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center (MA-MIRECC)
5
Udala M, Ohlhauser L, Campbell M, Langlois A, Leitner D, Libben M, Miller H. A psychometric examination of the PAI-SF in persons with recent stroke. Clin Neuropsychol 2020; 36:1471-1492. [PMID: 33054613 DOI: 10.1080/13854046.2020.1831076]
Abstract
OBJECTIVE The present study evaluated the psychometric properties of the Personality Assessment Inventory-Short Form (PAI-SF) for use with patients with recent stroke. METHOD Study participants (N = 170) were inpatients admitted to the rehabilitation department of a tertiary hospital in Western Canada who completed a neuropsychological evaluation as part of their care. All participants completed the full form of the PAI (344 items), and both the full-form and short-form (160 items) versions were scored from the same protocol. RESULTS Internal consistency for the PAI-SF scales was assessed with Cronbach's coefficient alpha. Alpha coefficients for the clinical scales ranged from 0.53 (ANT) to 0.88 (ANX), with three scales (ANT, ALC, and DRG) falling below the satisfactory threshold (<0.70). Alpha coefficients were unsatisfactory for the validity, treatment, and interpersonal scales. Absolute differences between mean clinical scale t scores on the full- and short-form PAI ranged from 0.04 (DEP) to 1.18 (MAN). For an individual, absolute differences in scale t scores between the full and short forms ranged from 0 to 30 t-score points; on average, an individual differed by 3.75 t-score points between the full and short forms across all validity, clinical, interpersonal, and treatment scales. Component structure was similar across the full and short forms. CONCLUSIONS Findings are somewhat consistent with previous literature on the PAI-SF, as the full and short forms had minimal differences and similar psychometric properties. However, caution is warranted regarding the clinical utility of both forms given the lower alpha coefficients and structural differences of some scales. Only certain clinical scales appear to have strong psychometric properties.
Affiliation(s)
- Megan Udala
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
- Lisa Ohlhauser
- Psychology, The University of Victoria, Victoria, BC, Canada
- McKenzie Campbell
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
- Annick Langlois
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
- Damian Leitner
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
- Maya Libben
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
- Harry Miller
- Psychology, The University of British Columbia, Okanagan, Kelowna, BC, Canada
6
Whiteside DM, Hunt I, Choate A, Caraher K, Basso MR. Stratified performance on the Test of Memory Malingering (TOMM) is associated with differential responding on the Personality Assessment Inventory (PAI). J Clin Exp Neuropsychol 2019; 42:131-141. [DOI: 10.1080/13803395.2019.1695749]
Affiliation(s)
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
- Isaac Hunt
- Department of Neurology, St. Mary’s Medical Center, Essentia Health, Duluth, MN, USA
- Alyssa Choate
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
7
Predictors and Impact of Self-Reported Suboptimal Effort on Estimates of Prevalence of HIV-Associated Neurocognitive Disorders. J Acquir Immune Defic Syndr 2017; 75:203-210. [PMID: 28328547 DOI: 10.1097/qai.0000000000001371]
Abstract
BACKGROUND Prevalence estimates of HIV-associated neurocognitive disorders (HAND) may be inflated. Estimates are derived from cohort studies in which participants may apply suboptimal effort on neurocognitive testing, thereby inflating estimates. Additionally, fluctuating HAND severity over time may be related to inconsistent effort. To address these hypotheses, we characterized effort in the Multicenter AIDS Cohort Study. METHODS After neurocognitive testing, 935 participants (525 HIV- and 410 HIV+) completed the visual analog effort scale (VAES), rating their effort from 0% to 100%. Those reporting <100% then indicated the reason(s) for suboptimal effort. K-means cluster analysis established three groups: high (mean = 97%), moderate (79%), and low effort (51%). Rates of HAND and other characteristics were compared between the groups. Linear regression examined predictors of VAES score. Data from 57 participants who completed the VAES at two visits were analyzed to characterize the longitudinal relationship between effort and HAND severity. RESULTS Fifty-two percent of participants reported suboptimal effort (<100%), with no difference between serostatus groups. Common reasons included being "tired" (43%) and "distracted" (36%). The low-effort group had higher rates of asymptomatic neurocognitive impairment and mild neurocognitive disorder diagnoses (25% and 33%) than the moderate-effort (23% and 15%) and high-effort (12% and 9%) groups. Predictors of suboptimal effort were self-reported memory impairment, African American race, and cocaine use. Change in effort between baseline and follow-up correlated with change in HAND severity. CONCLUSIONS Suboptimal effort appears to inflate estimated HAND prevalence and may explain fluctuation of severity over time. The results indicate that a simple modification of study protocols to optimize effort is warranted.
8
Detecting Feigned Attention-Deficit/Hyperactivity Disorder (ADHD): Current Methods and Future Directions. PSYCHOLOGICAL INJURY & LAW 2017. [DOI: 10.1007/s12207-017-9286-6]
9
Wygant DB, Granacher RP. Assessment of validity and response bias in neuropsychiatric evaluations. NeuroRehabilitation 2015; 36:427-38. [DOI: 10.3233/nre-151231]
10
Clarification or Confusion? A Review of Rogers, Bender, and Johnson’s A Critical Analysis of the MND Criteria for Feigned Cognitive Impairment: Implications for Forensic Practice and Research. PSYCHOLOGICAL INJURY & LAW 2011. [DOI: 10.1007/s12207-011-9106-3]