1. Boone KB, Vane RP, Victor TL. Critical Review of Recently Published Studies Claiming Long-Term Neurocognitive Abnormalities in Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2025; 40:272-288. [PMID: 39564962] [DOI: 10.1093/arclin/acae079]
Abstract
Mild traumatic brain injury (mTBI) is the most common claimed personal injury condition for which neuropsychologists are retained as forensic experts in litigation. Therefore, it is critical that experts have accurate information when testifying as to neurocognitive outcome from concussion. Systematic reviews and six meta-analyses from 1997 to 2011 regarding objective neurocognitive outcome from mTBI provide no evidence that concussed individuals fail to return to baseline by weeks to months post-injury. In the current manuscript, a critical review was conducted of 21 research studies published since the last meta-analysis in 2011 that have claimed to demonstrate long-term (i.e., ≥12 months post-injury) neurocognitive abnormalities in adults with mTBI. Using seven proposed methodological criteria for research investigating neurocognitive outcome from mTBI, no studies were found to be scientifically adequate. In particular, more than 50% of the 21 studies reporting cognitive dysfunction did not appropriately diagnose mTBI, employ prospective research designs, use standard neuropsychological tests, include appropriate control groups, provide information on motive to feign or use PVTs, or exclude, or adequately consider the impact of, comorbid conditions known to impact neurocognitive scores. We additionally analyzed 15 studies published during the same period that documented no longer-term mTBI-related cognitive abnormalities, and demonstrate that they were generally more methodologically robust than the studies purporting to document cognitive dysfunction. The original meta-analytic conclusions remain the most empirically sound evidence informing our current understanding of favorable outcomes following mTBI.
Affiliation(s)
- Kyle B Boone
- Private Practice, 24564 Hawthorne Blvd., Suite 208, Torrance, California 90505, USA
- Ryan P Vane
- Department of Psychology, California State University, Dominguez Hills, 1000 E. Victoria Street, Carson, California 90747, USA
- Tara L Victor
- Department of Psychology, California State University, Dominguez Hills, 1000 E. Victoria Street, Carson, California 90747, USA
2. Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023; 30:2476-2490. [PMID: 36752050] [DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
3. Echemendia RJ, Burma JS, Bruce JM, Davis GA, Giza CC, Guskiewicz KM, Naidu D, Black AM, Broglio S, Kemp S, Patricios JS, Putukian M, Zemek R, Arango-Lasprilla JC, Bailey CM, Brett BL, Didehbani N, Gioia G, Herring SA, Howell D, Master CL, Valovich McLeod TC, Meehan WP, Premji Z, Salmon D, van Ierssel J, Bhathela N, Makdissi M, Walton SR, Kissick J, Pardini J, Schneider KJ. Acute evaluation of sport-related concussion and implications for the Sport Concussion Assessment Tool (SCAT6) for adults, adolescents and children: a systematic review. Br J Sports Med 2023; 57:722-735. [PMID: 37316213] [DOI: 10.1136/bjsports-2022-106661]
Abstract
OBJECTIVES To systematically review the scientific literature regarding the acute assessment of sport-related concussion (SRC) and provide recommendations for improving the Sport Concussion Assessment Tool (SCAT6). DATA SOURCES Systematic searches of seven databases from 2001 to 2022 using key words and controlled vocabulary relevant to concussion, sports, SCAT, and acute evaluation. ELIGIBILITY CRITERIA (1) Original research articles, cohort studies, case-control studies, and case series with a sample of >10; (2) ≥80% SRC; and (3) studies using a screening tool/technology to assess SRC acutely (<7 days), and/or studies containing psychometric/normative data for common tools used to assess SRC. DATA EXTRACTION Separate reviews were conducted involving six subdomains: Cognition; Balance/Postural Stability; Oculomotor/Cervical/Vestibular; Emerging Technologies; Neurological Examination/Autonomic Dysfunction; and Paediatric/Child SCAT. Paediatric/child studies were included in each subdomain. Risk of bias and study quality were rated by coauthors using a modified SIGN (Scottish Intercollegiate Guidelines Network) tool. RESULTS Out of 12 192 articles screened, 612 were included (189 normative data and 423 SRC assessment studies). Of these, 183 focused on cognition, 126 on balance/postural stability, 76 on oculomotor/cervical/vestibular assessment, 142 on emerging technologies, 13 on neurological examination/autonomic dysfunction, and 23 on the paediatric/child SCAT. The SCAT discriminates between concussed and non-concussed athletes within 72 hours of injury, with diminishing utility up to 7 days post injury. Ceiling effects were apparent on the 5-word list learning and concentration subtests. More challenging tests, including the 10-word list, were recommended. Test-retest data revealed limitations in temporal stability. Studies primarily originated in North America, with scant data on children. CONCLUSION Support exists for using the SCAT within the acute phase of injury. Maximal utility occurs within the first 72 hours and then diminishes up to 7 days after injury. The SCAT has limited utility as a return-to-play tool beyond 7 days. Empirical data are limited in pre-adolescents, women, sport type, geographically and culturally diverse populations, and para athletes. PROSPERO REGISTRATION NUMBER CRD42020154787.
Affiliation(s)
- Ruben J Echemendia
- Concussion Care Clinic, University Orthopedics, State College, Pennsylvania, USA
- University of Missouri Kansas City, Kansas City, Missouri, USA
- Joel S Burma
- Faculty of Kinesiology, University of Calgary, Calgary, Alberta, Canada
- Jared M Bruce
- Biomedical and Health Informatics, University of Missouri - Kansas City, Kansas City, Missouri, USA
- Gavin A Davis
- Murdoch Children's Research Institute, Parkville, Victoria, Australia
- Cabrini Health, Malvern, Victoria, Australia
- Christopher C Giza
- Neurosurgery, UCLA Steve Tisch BrainSPORT Program, Los Angeles, California, USA
- Pediatrics/Pediatric Neurology, Mattel Children's Hospital UCLA, Los Angeles, California, USA
- Kevin M Guskiewicz
- Matthew Gfeller Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Dhiren Naidu
- Medicine, University of Alberta, Edmonton, Alberta, Canada
- Steven Broglio
- Michigan Concussion Center, University of Michigan, Ann Arbor, Michigan, USA
- Simon Kemp
- Sports Medicine, Rugby Football Union, London, UK
- Jon S Patricios
- Wits Sport and Health (WiSH), School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg-Braamfontein, South Africa
- Roger Zemek
- Children's Hospital of Eastern Ontario Research Institute, Ottawa, Ontario, Canada
- Department of Pediatrics, University of Ottawa, Ottawa, Ontario, Canada
- Christopher M Bailey
- Neurology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Neurology, Case Western Reserve University School of Medicine, Cleveland, Ohio, USA
- Benjamin L Brett
- Neurosurgery/Neurology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Gerry Gioia
- Depts of Pediatrics and Psychiatry & Behavioral Sciences, Children's National Health System, Washington, District of Columbia, USA
- Stanley A Herring
- Department of Rehabilitation Medicine, Orthopaedics and Sports Medicine, and Neurological Surgery, University of Washington, Seattle, Washington, USA
- David Howell
- Orthopedics, Sports Medicine Center, Children's Hospital Colorado, Aurora, Colorado, USA
- Tamara C Valovich McLeod
- Department of Athletic Training and School of Osteopathic Medicine in Arizona, A.T. Still University, Mesa, Arizona, USA
- William P Meehan
- Sports Medicine, Children's Hospital Boston, Boston, Massachusetts, USA
- Emergency Medicine, Children's Hospital Boston, Boston, Massachusetts, USA
- Zahra Premji
- Libraries, University of Victoria, Victoria, British Columbia, Canada
- Neil Bhathela
- UCLA Health Steve Tisch BrainSPORT Program, Los Angeles, California, USA
- Michael Makdissi
- Florey Institute of Neuroscience and Mental Health - Austin Campus, Heidelberg, Victoria, Australia
- La Trobe Sport and Exercise Medicine Research Centre, Melbourne, Victoria, Australia
- Samuel R Walton
- Department of Physical Medicine and Rehabilitation, School of Medicine, Richmond, Virginia, USA
- James Kissick
- Dept of Family Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Jamie Pardini
- Departments of Internal Medicine and Neurology, University of Arizona College of Medicine, Phoenix, Arizona, USA
- Kathryn J Schneider
- Sport Injury Prevention Research Centre, Faculty of Kinesiology, University of Calgary, Calgary, Alberta, Canada
4. Messa I, Korcsog K, Abeare C. An updated review of the prevalence of invalid performance on the Immediate Post-Concussion and Cognitive Testing (ImPACT). Clin Neuropsychol 2022; 36:1613-1636. [PMID: 33356881] [DOI: 10.1080/13854046.2020.1866676]
Abstract
Objective: Performance validity assessment is an important component of concussion baseline testing and Immediate Post-Concussion and Cognitive Testing (ImPACT) is the most commonly used test in this setting. A review of invalid performance on ImPACT was published in 2017, focusing largely on the default embedded validity indicator (Default EVI) provided within the test. There has since been a proliferation in research evaluating the classification accuracy of the Default EVI against independently developed, alternative ImPACT-based EVIs, necessitating an updated review. The purpose of this study was to provide an up-to-date review of the prevalence of invalid performance on ImPACT and to examine the relative effectiveness of ImPACT-based EVIs. Method: Literature related to the prevalence of invalid performance on ImPACT and the effectiveness of ImPACT-based EVIs, published between January 2000 and May 2020, was critically reviewed. Results: A total of 23 studies reported prevalence of invalid performance at baseline testing using ImPACT. Six percent of baseline assessments were found to be invalid by the ImPACT's Default EVI, and between 22.31% and 34.99% were flagged by alternative EVIs. Six studies assessed the effectiveness of ImPACT-based EVIs, with the Default EVI correctly identifying experimental malingerers only 60% of the time. Alternative ImPACT-based EVIs identified between 73% and 100% of experimental malingerers. Conclusions: The ImPACT's Default EVI is not sufficiently sensitive, and clinicians should consider alternative indicators when assessing invalid performance. Accordingly, the base rate of invalid performance in athletes at baseline testing is likely well above the 6% previously reported.
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
5. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
6. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
7. Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996] [DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
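The aggregation idea behind the abstract above (grading each validity indicator by the extent of failure, then summing across indicators into a single composite with Pass/Borderline/Fail outcomes) can be sketched in a toy example. This is an illustrative reconstruction only: the grading bands, cutoff values, and classification thresholds below are hypothetical placeholders, not the published EI-5 parameters.

```python
# Toy sketch of an Erdodi Index (EI)-style aggregation of performance
# validity tests (PVTs). All cutoffs here are invented for illustration.

def grade_pvt(score, borderline_cutoff, fail_cutoff):
    """Grade one PVT: 0 = pass, 1 = borderline, 2 = clear fail.

    Assumes lower scores indicate less credible performance.
    """
    if score > borderline_cutoff:
        return 0
    if score > fail_cutoff:
        return 1
    return 2

def erdodi_index(scores, cutoffs):
    """Sum per-PVT grades into a single-number validity composite."""
    return sum(grade_pvt(s, b, f) for s, (b, f) in zip(scores, cutoffs))

def classify(ei, borderline_range=(2, 3), fail_min=4):
    """Map the composite onto the three-way outcome described above."""
    if ei >= fail_min:
        return "Fail"
    if borderline_range[0] <= ei <= borderline_range[1]:
        return "Borderline"
    return "Pass"
```

For example, with three PVTs all using the hypothetical bands (44, 40), the scores [45, 38, 50] produce grades 0 + 2 + 0 = 2, which falls in the illustrative Borderline band, mirroring the three-way outcome (Pass, Borderline, Fail) endorsed in the abstract.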
8. Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022; 37:1579-1600. [PMID: 35694764] [DOI: 10.1093/arclin/acac019]
Abstract
OBJECTIVE The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalently to NSEs on LVM measures, with some notable exceptions. CONCLUSIONS Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
9. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
10. Wallace J, Karr JE, Schatz P, Worts P, Covassin T, Iverson GL. The Frequency of Low Scores on ImPACT in Adolescent Student-Athletes: Stratification by Race and Socioeconomic Status Using Multivariate Base Rates. Dev Neuropsychol 2022; 47:125-135. [PMID: 35133232] [DOI: 10.1080/87565641.2022.2034827]
Abstract
This study examined the frequency of low scores on the Immediate Post-Concussion Assessment and Cognitive Test (ImPACT) stratified by race and socioeconomic status (SES, using Title I school status as a proxy) among adolescent student-athletes, and calculated multivariate base rates. There were 753 participants assigned to groups based on race (White: n = 430, 59.8%; Black: n = 289, 40.2%) and SES. Black student-athletes obtained a greater number of low neurocognitive test scores, an effect associated with lower SES. The current study offers a resource to clinicians involved in concussion management who may wish to consider race and SES when interpreting ImPACT test performances.
Affiliation(s)
- Jessica Wallace
- Department of Health Science, University of Alabama, Tuscaloosa, Alabama, USA
- Justin E Karr
- Department of Psychology, University of Kentucky, Lexington, Kentucky, USA
- Philip Schatz
- Department of Psychology, Saint Joseph's University, Philadelphia, Pennsylvania, USA
- Phillip Worts
- Clinical Research Director, Tallahassee Orthopedic Clinic; Department of Nutrition, Food and Exercise Sciences, Florida State University; Institute of Sports Sciences & Medicine, Tallahassee, Florida, USA
- Tracey Covassin
- Department of Kinesiology, Michigan State University, East Lansing, Michigan, USA
- Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital and Spaulding Research Institute, MassGeneral Hospital for Children Sports Concussion Program; & Home Base, a Red Sox Foundation and Massachusetts General Hospital Program, Charlestown Navy Yard, Charlestown, Massachusetts, USA
11. Introducing the ImPACT-5: An Empirically Derived Multivariate Validity Composite. J Head Trauma Rehabil 2021; 36:103-113. [PMID: 32472832] [DOI: 10.1097/htr.0000000000000576]
Abstract
OBJECTIVE To create novel Immediate Post-Concussion and Cognitive Testing (ImPACT)-based embedded validity indicators (EVIs) and to compare their classification accuracy to four existing ImPACT-based EVIs. METHOD The ImPACT was administered to 82 male varsity football players during preseason baseline cognitive testing. The classification accuracy of existing ImPACT-based EVIs was compared with that of a newly developed index (ImPACT-5A and B). The ImPACT-5A represents the number of cutoffs failed on the 5 ImPACT composite scores at a liberal cutoff (0.85 specificity); the ImPACT-5B is the sum of failures on conservative cutoffs (≥0.90 specificity). RESULTS ImPACT-5A ≥1 was sensitive (0.81), but not specific (0.49) to invalid performance, consistent with ImPACT-based EVIs developed by independent researchers (0.68 sensitivity at 0.73-0.75 specificity). Conversely, ImPACT-5B ≥3 was highly specific (0.98), but insensitive (0.22), similar to the Default ImPACT-based EVI (0.04 sensitivity at 1.00 specificity). ImPACT-5A ≥3 or ImPACT-5B ≥2 met forensic standards of specificity (0.91-0.93) at 0.33 to 0.37 sensitivity. Also, of the existing ImPACT-based EVIs, the ImPACT-5s had the strongest linear relationship with clinically meaningful levels of invalid performance. CONCLUSIONS The ImPACT-5s were superior to the standard ImPACT-based EVI and comparable to existing aftermarket ImPACT-based EVIs, with the flexibility to optimize the detection model for either sensitivity or specificity. The wide range of ImPACT-5 cutoffs allows for a more nuanced clinical interpretation.
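The counting logic behind the ImPACT-5 described above (the number of composite-score cutoffs failed) can be illustrated with a minimal sketch. The composite names, cutoff values, and failure directions below are hypothetical placeholders for illustration, not the validated cutoffs from the study.

```python
# Illustrative ImPACT-5-style failure count: tally how many of the five
# composite scores fall beyond a validity cutoff. Cutoffs are invented.

def impact5(composites, cutoffs, lower_is_invalid):
    """Return the number of composite scores failing their cutoff.

    lower_is_invalid[i] is True when a low score on composite i is the
    suspect direction (e.g., memory), False when a high score is
    suspect (e.g., reaction time).
    """
    failures = 0
    for score, cut, low_bad in zip(composites, cutoffs, lower_is_invalid):
        if (low_bad and score <= cut) or (not low_bad and score >= cut):
            failures += 1
    return failures
```

With hypothetical cutoffs [75, 70, 28, 0.75, 12] (low scores suspect on the first three, high scores suspect on the last two), a profile of [70, 65, 30, 0.80, 10] fails three of the five cutoffs; under the thresholds reported in the abstract, a count of three on the liberal cutoffs would meet forensic specificity standards.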
12. Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; MAge = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
13. Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. [DOI: 10.1080/00450618.2020.1865457]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
14. Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021; 11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada

15
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205 PMCID: PMC8267081 DOI: 10.1177/00315125211019704] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFTFCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFTFCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFTFCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFTFCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada

16
Hardin KY. Prospective Exploration of Cognitive-Communication Changes With Woodcock-Johnson IV Before and After Sport-Related Concussion. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2021; 30:894-907. [PMID: 33784181 PMCID: PMC8702850 DOI: 10.1044/2020_ajslp-20-00110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 09/13/2020] [Accepted: 12/21/2020] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this study was to evaluate changes in cognitive-communication performance using the Woodcock-Johnson IV Tests (WJIV) from pre-injury baseline to post sport-related concussion. It was hypothesized that individual subtest performances would decrease postinjury in symptomatic individuals. Method This prospective longitudinal observational nested cohort study of collegiate athletes assessed cognitive-communicative performance at preseason baseline and postinjury. Three hundred forty-two male and female undergraduates at high risk for sport-related concussion participated in preseason assessments, and 18 individuals met criteria postinjury. WJIV subtest domains included Word Finding, Speeded Reading Comprehension, Auditory Comprehension, Verbal Working Memory, Story Retell, and Visual Processing (letter and number). The sample size required by the power calculation was not met, and therefore data were conservatively analyzed with descriptive statistics and a planned subgroup analysis based on symptomatology. Results Individual changes from baseline to postinjury were evaluated using differences in standard score performance. For symptomatic individuals, mean decreases in performance were found for Retrieval Fluency, Sentence Reading Fluency, Pattern Matching, and all cluster scores postinjury. Individual performance declines also included decreases in story retell, verbal working memory, and visual processing. Conclusions This study identified within-subject WJIV performance decline in communication domains post sport-related concussion and reinforces that cognitive-communication dysfunction should be considered in mild traumatic brain injury. Key cognitive-communication areas included speeded naming, reading, and verbal memory, though oral comprehension was not sensitive to change. Future clinical research across diverse populations is needed to expand these preliminary findings.
Affiliation(s)
- Kathryn Y. Hardin
- Department of Speech, Language, and Hearing Sciences, University of Colorado Boulder

17
Carvalho LDF, Reis A, Colombarolli MS, Pasian SR, Miguel FK, Erdodi LA, Viglione DJ, Giromini L. Discriminating Feigned from Credible PTSD Symptoms: a Validation of a Brazilian Version of the Inventory of Problems-29 (IOP-29). PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09403-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]

18
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada

19
Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1221-1230. [PMID: 33403885 DOI: 10.1080/23279095.2020.1864375] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% for the former, >80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
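The effect sizes reported above (d ≥ 1.96, d ≥ 4.90) are standardized mean differences between the honest and feigning groups. A minimal sketch of the pooled-SD Cohen's d computation, on invented scores for a hypothetical validity scale rather than the study's data:

```python
# Hedged sketch of Cohen's d (pooled-SD formulation) for honest responders
# vs. feigners. All scores below are invented for illustration only.
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample variance."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

honest = [2, 3, 1, 2, 4, 3]        # low scores = few problems endorsed
feigners = [9, 11, 10, 12, 8, 10]  # feigners endorse many problems

print(round(abs(cohens_d(honest, feigners)), 2))  # → 6.02
```

Values this large mean the two score distributions barely overlap, which is why sensitivity and specificity can both be high at a well-chosen cutoff.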
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada

20
Chase D, Slicer K, Schatz P. Relationship between Standalone Performance Validity Test Failure and Emotionality among Youth/student Athletes Experiencing Prolonged Recovery following Sports-related Concussion. Dev Neuropsychol 2020; 45:435-445. [PMID: 33269627 DOI: 10.1080/87565641.2020.1852239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
This study documented the rate of Performance Validity Test (PVT) failure in 81 youth athletes, aged 10-21 years, experiencing prolonged recovery following sports-related concussion (SRC), and the relationship between PVT failure and emotional symptoms. Neuropsychological assessments were conducted across three test sessions with a stand-alone PVT at each session. Results showed that 48% (39/81) of individuals failed at least one PVT, with an overall PVT failure rate of 26% (64/243). Those failing at least one PVT scored significantly higher on measures of anxiety, but not depression or somatization. Results illustrate the importance of including measures of emotional and behavioral functioning in testing following SRC.
Affiliation(s)
- Kayley Slicer
- Department of Psychology, Saint Joseph's University, Philadelphia, PA, USA
- Philip Schatz
- Department of Psychology, Saint Joseph's University, Philadelphia, PA, USA

21
Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020; 35:188-204. [PMID: 31696203 DOI: 10.1093/arclin/acz032] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Revised: 06/18/2019] [Accepted: 06/22/2019] [Indexed: 12/17/2022] Open
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). METHOD Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. RESULTS Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). CONCLUSIONS In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment.
BRIEF SUMMARY Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
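The multivariate rule described above — a composite failure defined as failing at least k of six embedded indicators — can be sketched in a few lines. The cutoff values below are placeholders for illustration, not the cutoffs published in the study:

```python
# Sketch of the multivariate EVI logic: an examinee fails the composite when
# >= k of six embedded validity indicators fall at or below their cutoffs.
# The cutoff values here are hypothetical placeholders, not the published
# WAIS-IV cutoffs.

LIBERAL_CUTOFFS = {  # hypothetical age-corrected scaled-score cutoffs
    "Coding": 6,
    "Symbol Search": 6,
    "Digit Span": 6,
    "Letter-Number Sequencing": 6,
    "Vocabulary - Digit Span": 5,  # difference score
    "Coding - Symbol Search": 4,   # difference score
}

def evi_failures(scores: dict, cutoffs: dict) -> int:
    """Count EVIs at or below their cutoff."""
    return sum(scores[name] <= cut for name, cut in cutoffs.items())

def composite_fail(scores: dict, cutoffs: dict, k: int = 3) -> bool:
    """Multivariate rule: failing >= k individual EVIs fails the composite."""
    return evi_failures(scores, cutoffs) >= k

examinee = {"Coding": 5, "Symbol Search": 7, "Digit Span": 6,
            "Letter-Number Sequencing": 8, "Vocabulary - Digit Span": 5,
            "Coding - Symbol Search": -2}

print(composite_fail(examinee, LIBERAL_CUTOFFS))  # → True (4 of 6 EVIs failed)
```

Requiring multiple failures is what drives the specificity gain the abstract reports: isolated low scores, common in credible examinees, no longer trigger the flag on their own.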
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada

22
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada

23
Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems – Memory (IOP-M). PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09385-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]

24
Fallows RR, Mullane A, Smith Watts AK, Aukerman D, Bao Y. Normal variability within a collegiate athlete sample: A rationale for comprehensive baseline testing. Clin Neuropsychol 2020; 35:1258-1274. [PMID: 32191157 DOI: 10.1080/13854046.2020.1740325] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
OBJECTIVE Sport-related concussions continue to garner attention as research emerges about the effects of these injuries. Many have advocated for cognitive baselines; however, there is no uniform practice of neuropsychological baseline testing at the collegiate level, leading to variance in administration and interpretation. Continued clarification of best practices is essential for the field, especially given previous research highlighting both normal variability on cognitive tests in other populations and the need for separate normative sources for those with attention and learning problems. This study aimed to evaluate the range of normal variability in a diverse sample of collegiate athletes administered a traditional neuropsychological baseline. METHOD Neuropsychological baseline measures were collected on 236 Division 1 university student athletes over 4 years. The frequency of scores that fell at 1, 1.5, and 2 or greater standard deviations was reviewed. Student athletes were further evaluated for the likelihood of factors that could impact results (i.e., Attention-Deficit/Hyperactivity Disorder [ADHD], Specific Learning Disorder [SLD], and psychiatric distress). RESULTS The results demonstrated high rates of variability in most test scores for the collective sample. Student athletes at risk for ADHD, SLD, and/or psychiatric distress appeared to demonstrate a higher degree of variability relative to individuals with minimal risk. CONCLUSION Baseline evaluation data revealed the presence of normal variability in a student athlete population. Left unrecognized, this can lead to errors in clinical recommendations given the nature of concussion. Certain individuals have risk factors that may increase the range of variability, and this should be explored further in future research.
Affiliation(s)
- Robert R Fallows
- Department of Neuropsychology, Samaritan Health Services, Albany, OR, USA
- Audrina Mullane
- Department of Neuropsychology, Samaritan Health Services, Albany, OR, USA
- Douglas Aukerman
- Department of Neuropsychology, Samaritan Health Services, Albany, OR, USA
- Yuqin Bao
- Department of Neuropsychology, Samaritan Health Services, Albany, OR, USA

25
Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. APPLIED NEUROPSYCHOLOGY-CHILD 2020; 9:337-354. [DOI: 10.1080/21622965.2020.1719409] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Tabarak Baher
- Department of Psychology, University of Windsor, Windsor, Canada
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada

26
Teague AM, Hirst RB. Rey 15 item test plus recognition trial and TOMM in a community pediatric sample. APPLIED NEUROPSYCHOLOGY-CHILD 2020; 9:329-336. [PMID: 31918597 DOI: 10.1080/21622965.2019.1709068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
In pediatric evaluations, performance validity test (PVT) selection is often constrained by reading level, developmental appropriateness of stimuli, and administration time. The Rey 15 Item Test (FIT) addresses these constraints, and ranks among the most frequently used PVTs. Unfortunately, research indicates poor sensitivity of the FIT recall trial. Boone et al. developed a FIT recognition trial and demonstrated in an adult sample that its use increased sensitivity while maintaining high specificity. These results are promising, but, to the authors' knowledge, have only been replicated once in a pediatric sample. The present study examined the FIT plus recognition trial in a sample of 72 young athletes ages 8-16 years. All data for the present study were collected during baseline cognitive evaluations. The Test of Memory Malingering (TOMM) was used as the comparison criterion. Receiver operating characteristic curve analyses showed the addition of the recognition trial did not substantially improve sensitivity of the FIT. There was a surprising lack of concordance between TOMM and FIT scores, and, whereas the FIT correlated with multiple cognitive measures, the TOMM did not correlate with any other measures. Results suggest the FIT is not appropriate for pediatric clinical care, even with the additional recognition trial.
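The ROC analyses mentioned above summarize a PVT's discrimination across all possible cutoffs as the area under the curve (AUC), which equals the probability that a randomly chosen non-credible case scores below a randomly chosen credible one. A minimal rank-based sketch, on invented scores rather than the study's data:

```python
# Minimal illustration of a ROC AUC computation for a PVT where LOWER scores
# indicate invalid performance. All scores below are hypothetical.

def auc_lower_is_invalid(valid_scores, invalid_scores):
    """AUC = P(invalid case scores below valid case), ties counted as 0.5."""
    pairs = concordant = 0.0
    for v in valid_scores:
        for i in invalid_scores:
            pairs += 1
            if i < v:
                concordant += 1
            elif i == v:
                concordant += 0.5
    return concordant / pairs

valid = [14, 15, 15, 13, 14, 15]  # hypothetical credible cases
invalid = [9, 11, 12, 14, 10]     # hypothetical criterion-defined non-credible cases

print(round(auc_lower_is_invalid(valid, invalid), 3))  # → 0.933
```

An AUC near 0.5 would indicate chance-level discrimination — roughly what a lack of concordance between two PVTs, as reported for the TOMM and FIT here, looks like when one is scored against the other.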
Affiliation(s)
- Anna M Teague
- Neuropsychology Program, Palo Alto University, Palo Alto, CA, USA
- Rayna B Hirst
- Neuropsychology Program, Palo Alto University, Palo Alto, CA, USA

27
Chovaz CJ, Rennison VLA, Chorostecki DO. The validity of the test of memory malingering (TOMM) with deaf individuals. Clin Neuropsychol 2019; 35:597-614. [PMID: 31797722 DOI: 10.1080/13854046.2019.1696408] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
OBJECTIVE Administration of performance validity tests (PVTs) during neuropsychological assessments is standard practice, with the Test of Memory Malingering (TOMM) being a commonly used measure. The TOMM has been well validated in hearing populations with various medical and psychiatric backgrounds. A major gap in the literature is the use of the TOMM amongst culturally Deaf individuals who use American Sign Language (ASL) as their first and preferred language. The purpose of this study was to explore the use of the TOMM with this population to determine if there may be differences related to the use of semantic knowledge and recall using signs rather than spoken phonemes. METHOD This study recruited 30 culturally Deaf, community-dwelling adults, who self-reported that they were not involved in litigation or disability claims. In addition to the TOMM, participants were screened for cognitive ability using non-verbal components of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II) and the Mini Mental State Examination: ASL Version (MMSE:ASL). RESULTS Nonverbal intelligence for this sample was within the average range of ability. No participants scored lower than the standard cut-off score for Trial 2 or the Retention Trial on the TOMM (≤44 raw score to indicate invalid responding). Trial 1 performances ranged from 44 to 50, Trial 2 performances ranged from 49 to 50, and Retention performances ranged from 49 to 50. CONCLUSION These results support the use of the same standard cut-off scores established for hearing individuals in culturally Deaf individuals who use ASL.
Affiliation(s)
- Cathy J Chovaz
- Psychology Department, King's University College at Western University, London, Canada

28
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]

29
Erdodi LA, Taylor B, Sabelli AG, Malleck M, Kirsch NL, Abeare CA. Demographically Adjusted Validity Cutoffs on the Finger Tapping Test Are Superior to Raw Score Cutoffs in Adults with TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09352-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]