1
Cheng X, Gilmore GC, Lerner AJ, Lee K. Computerized Block Games for Automated Cognitive Assessment: Development and Evaluation Study. JMIR Serious Games 2022; 11:e40931. [PMID: 37191993 DOI: 10.2196/40931]
Abstract
BACKGROUND Cognitive assessment using tangible objects can measure fine motor and hand-eye coordination skills along with other cognitive domains. Administering such tests is often expensive, labor-intensive, and error-prone owing to manual recording and potential subjectivity. Automating the administration and scoring processes can address these difficulties while reducing time and cost. e-Cube is a new vision-based, computerized cognitive assessment tool that integrates computational measures of play complexity and item generators to enable automated and adaptive testing. The e-Cube games use a set of cubes, and the system tracks the movements and locations of these cubes as manipulated by the player. OBJECTIVE The primary objectives of the study were to validate the play complexity measures that form the basis of developing the adaptive assessment system and evaluate the preliminary utility and usability of the e-Cube system as an automated cognitive assessment tool. METHODS This study used 6 e-Cube games, namely, Assembly, Shape-Matching, Sequence-Memory, Spatial-Memory, Path-Tracking, and Maze, each targeting different cognitive domains. In total, 2 versions of the games, the fixed version with predetermined sets of items and the adaptive version using the autonomous item generators, were prepared for comparative evaluation. Enrolled participants (N=80; aged 18-60 years) were divided into 2 groups: 48% (38/80) of the participants in the fixed group and 52% (42/80) in the adaptive group. Each was administered the 6 e-Cube games; 3 subtests of the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV; Block Design, Digit Span, and Matrix Reasoning); and the System Usability Scale (SUS). Statistical analyses were applied at the .05 significance level. RESULTS The play complexity values were correlated with the performance indicators (ie, correctness and completion time).
The adaptive e-Cube games were correlated with the WAIS-IV subtests (r=0.49, 95% CI 0.21-0.70; P<.001 for Assembly and Block Design; r=0.34, 95% CI 0.03-0.59; P=.03 for Shape-Matching and Matrix Reasoning; r=0.51, 95% CI 0.24-0.72; P<.001 for Spatial-Memory and Digit Span; r=0.45, 95% CI 0.16-0.67; P=.003 for Path-Tracking and Block Design; and r=0.45, 95% CI 0.16-0.67; P=.003 for Path-Tracking and Matrix Reasoning). The fixed version showed weaker correlations with the WAIS-IV subtests. The e-Cube system showed a low false detection rate (6/5990, 0.1%) and was determined to be usable, with an average SUS score of 86.01 (SD 8.75). CONCLUSIONS The correlations between the play complexity values and performance indicators supported the validity of the play complexity measures. Correlations between the adaptive e-Cube games and the WAIS-IV subtests demonstrated the potential utility of the e-Cube games for cognitive assessment, but a further validation study is needed to confirm this. The low false detection rate and high SUS scores indicated that e-Cube is technically reliable and usable.
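The 95% confidence intervals reported above for each correlation can be reproduced approximately with the standard Fisher z-transform. The sketch below is a generic illustration, not the authors' analysis code; it assumes n = 42 (the adaptive group's size).

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)              # Fisher z of the sample correlation
    se = 1 / math.sqrt(n - 3)      # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

# e.g. the reported r=0.49 for Assembly vs. Block Design with n = 42
lo, hi = pearson_ci(0.49, 42)
print(f"95% CI: {lo:.2f} to {hi:.2f}")  # prints roughly 0.22 to 0.69
```

The small gap versus the reported 0.21-0.70 interval is expected: the exact bounds depend on rounding of r and on the precise sample size used per comparison.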
2
Harms RL, Ferrari A, Meier IB, Martinkova J, Santus E, Marino N, Cirillo D, Mellino S, Catuara Solarz S, Tarnanas I, Szoeke C, Hort J, Valencia A, Ferretti MT, Seixas A, Santuccione Chadha A. Digital biomarkers and sex impacts in Alzheimer's disease management - potential utility for innovative 3P medicine approach. EPMA J 2022; 13:299-313. [PMID: 35719134 PMCID: PMC9203627 DOI: 10.1007/s13167-022-00284-3]
Abstract
Digital biomarkers are defined as objective, quantifiable physiological and behavioral data that are collected and measured by means of digital devices. Their use has revolutionized clinical research by enabling high-frequency, longitudinal, and sensitive measurements. In the field of neurodegenerative diseases, an example of a digital biomarker-based technology is the instrumental activities of daily living (iADL) digital medical application, a predictive biomarker of conversion from mild cognitive impairment (MCI) due to Alzheimer's disease (AD) to dementia due to AD in individuals aged 55+. Digital biomarkers show promise to transform clinical practice. Nevertheless, their use may be affected by variables such as demographics, genetics, and phenotype. Among these factors, sex is particularly important in Alzheimer's disease, where men and women present with different symptoms and progression patterns that impact diagnosis. In this study, we explore sex differences in Altoida's digital medical application in a sample of 568 subjects consisting of a clinical dataset (MCI and dementia due to AD) and a healthy population. We found that a biological sex classifier, built on digital biomarker features captured using Altoida's application, achieved a 75% ROC-AUC (area under the receiver operating characteristic curve) performance in predicting biological sex in healthy individuals, indicating significant differences in neurocognitive performance signatures between males and females. The performance dropped when we applied this classifier to more advanced stages on the AD continuum, including MCI and dementia, suggesting that sex differences might be disease-stage dependent. Our results indicate that neurocognitive performance signatures built on data from digital biomarker features are different between men and women.
These results stress the need to integrate traditional approaches to dementia research with digital biomarker technologies and personalized medicine perspectives to achieve more precise predictive diagnostics, targeted prevention, and customized treatment of cognitive decline. Supplementary Information: The online version contains supplementary material available at 10.1007/s13167-022-00284-3.
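As a generic illustration of the 75% ROC-AUC metric above (not Altoida's pipeline), the AUC can be computed directly as the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half. The labels and scores below are invented.

```python
def roc_auc(labels, scores):
    """AUC = P(random positive scores above random negative), ties count 0.5;
    equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
print(roc_auc(labels, scores))  # 8 of 9 positive-negative pairs ordered correctly
```

An AUC of 0.5 would mean the classifier cannot separate the sexes at all; the 0.75 reported in healthy individuals indicates a substantial, though imperfect, separation.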
Affiliation(s)
- Julie Martinkova
- Women’s Brain Project, Guntershausen, Switzerland
- Memory Clinic, Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Enrico Santus
- Women’s Brain Project, Guntershausen, Switzerland
- Bayer, NJ, USA
- Nicola Marino
- Women’s Brain Project, Guntershausen, Switzerland
- Dipartimento Di Scienze Mediche E Chirurgiche, Università Degli Studi Di Foggia, Foggia, Italy
- Davide Cirillo
- Women’s Brain Project, Guntershausen, Switzerland
- Barcelona Supercomputing Center, Plaça Eusebi Güell, 1-3, 08034 Barcelona, Spain
- Ioannis Tarnanas
- Altoida Inc., Houston, TX, USA
- Global Brain Health Institute, Dublin, Ireland
- Cassandra Szoeke
- Women’s Brain Project, Guntershausen, Switzerland
- Centre for Medical Research, Faculty of Medicine, Dentistry and Health Science, University of Melbourne, Melbourne, Australia
- Jakub Hort
- Memory Clinic, Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- International Clinical Research Center, St Anne’s University Hospital Brno, Brno, Czech Republic
- Alfonso Valencia
- Barcelona Supercomputing Center, Plaça Eusebi Güell, 1-3, 08034 Barcelona, Spain
- ICREA - Institució Catalana de Recerca I Estudis Avançats, Pg. Lluís Companys 23, 08010 Barcelona, Spain
- Azizi Seixas
- Department of Psychiatry and Behavioral Sciences, University of Miami Miller School of Medicine, Miami, FL 33136, USA

3
Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: A systematic review. Ageing Res Rev 2021; 72:101506. [PMID: 34744026 DOI: 10.1016/j.arr.2021.101506]
Abstract
BACKGROUND The use of digital cognitive tests is becoming increasingly common. Older adults or their family members may use online tests for self-screening of dementia. However, the diagnostic performance of different digital tests remains to be clarified. The objective of this study was to evaluate the diagnostic performance of digital cognitive tests for MCI and dementia in older adults. METHODS Literature searches were systematically performed in the OVID databases. Validation studies that reported the diagnostic performance of a digital cognitive test for MCI or dementia were included. The main outcome was the diagnostic performance of the digital test for the detection of MCI or dementia. RESULTS A total of 56 studies covering 46 digital cognitive tests were included. Most of the digital cognitive tests showed diagnostic performance comparable to that of paper-and-pencil tests. Twenty-two digital cognitive tests showed good diagnostic performance for dementia, with a sensitivity and a specificity over 0.80, such as the Computerized Visuo-Spatial Memory test and Self-Administered Tasks Uncovering Risk of Neurodegeneration. Eleven digital cognitive tests showed good diagnostic performance for MCI, such as the Brain Health Assessment. However, each digital test had only a few validation studies verifying its performance. CONCLUSIONS Digital cognitive tests showed good performance for detecting MCI and dementia. Digital tests can also collect data far beyond what traditional cognitive testing captures. Future research on these new forms of cognitive data is suggested for the early detection of MCI and dementia.
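The review's 0.80 sensitivity/specificity threshold can be made concrete with a minimal sketch over hypothetical confusion-matrix counts; the counts below are invented, not taken from any included study.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of true cases detected.
    Specificity = TN/(TN+FP): fraction of non-cases correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical screening result: 50 true cases, 100 non-cases
sens, spec = sens_spec(tp=45, fn=5, tn=85, fp=15)
print(sens, spec)  # 0.9 0.85 — both above the review's 0.80 criterion
```

Note that both quantities must clear the threshold: a test that flags everyone as impaired has perfect sensitivity but zero specificity.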
4
Cyr AA, Romero K, Galin-Corini L. Web-Based Cognitive Testing of Older Adults in Person Versus at Home: Within-Subjects Comparison Study. JMIR Aging 2021; 4:e23384. [PMID: 33522972 PMCID: PMC8081157 DOI: 10.2196/23384]
Abstract
Background Web-based research allows cognitive psychologists to collect high-quality data from a diverse pool of participants with fewer resources. However, web-based testing presents unique challenges for researchers and clinicians working with aging populations. Older adults may be less familiar with computer usage than their younger peers, leading to differences in performance when completing web-based tasks in their home versus in the laboratory under the supervision of an experimenter. Objective This study aimed to use a within-subjects design to compare the performance of healthy older adults on computerized cognitive tasks completed at home and in the laboratory. Familiarity and attitudes surrounding computer use were also examined. Methods In total, 32 community-dwelling healthy adults aged above 65 years completed computerized versions of the word-color Stroop task, paired associates learning, and verbal and matrix reasoning in 2 testing environments: at home (unsupervised) and in the laboratory (supervised). The paper-and-pencil neuropsychological versions of these tasks were also administered, along with questionnaires examining computer attitudes and familiarity. The order of testing environments was counterbalanced across participants. Results Analyses of variance conducted on scores from the computerized cognitive tasks revealed no significant effect of the testing environment and no correlation with computer familiarity or attitudes. These null effects were confirmed with follow-up Bayesian analyses. Moreover, performance on the computerized tasks correlated positively with performance on their paper-and-pencil equivalents. Conclusions Our findings show comparable performance on computerized cognitive tasks in at-home and laboratory testing environments. These findings have implications for researchers and clinicians wishing to harness web-based testing to collect meaningful data from older adult populations.
Affiliation(s)
- Andrée-Ann Cyr
- Department of Psychology, Glendon Campus, York University, Toronto, ON, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laura Galin-Corini
- Department of Psychology, Glendon Campus, York University, Toronto, ON, Canada

5
Mori T, Kikuchi T, Umeda-Kameyama Y, Wada-Isoe K, Kojima S, Kagimura T, Kudoh C, Uchikado H, Ueki A, Yamashita M, Watabe T, Nishimura C, Tsuno N, Ueda T, Akishita M, Nakamura Y. ABC Dementia Scale: A Quick Assessment Tool for Determining Alzheimer's Disease Severity. Dement Geriatr Cogn Dis Extra 2018; 8:85-97. [PMID: 29706985 PMCID: PMC5921188 DOI: 10.1159/000486956]
Abstract
Background In this study, we examined the construct validity, concurrent validity concerning other standard scales, intrarater reliability, and changes in scores at 12 weeks of the previously developed ABC Dementia Scale (ABC-DS), a novel assessment tool for Alzheimer's disease (AD). Methods Data were obtained from 312 patients diagnosed with either AD or mild cognitive impairment. The scores on the ABC-DS and standard scales were compared. Results The 13 items of the ABC-DS are grouped into three domains, and the domain-level scores were highly correlated with the corresponding conventional scales. Statistically significant changes in assessment scores after 12 weeks were observed for the total ABC-DS scores. Conclusion Our results demonstrate the ABC-DS to have good validity and reliability, and its usefulness in busy clinical settings.
Affiliation(s)
- Takahiro Mori
- Department of Neuropsychiatry, Kagawa University School of Medicine, Kagawa, Japan
- Takashi Kikuchi
- Translational Research Informatics Center, Foundation for Biomedical Research and Innovation, Kobe, Japan
- Yumi Umeda-Kameyama
- Department of Geriatric Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kenji Wada-Isoe
- Division of Neurology, Department of Brain and Neurosciences, Faculty of Medicine, Tottori University, Yonago, Japan
- Shinsuke Kojima
- Translational Research Informatics Center, Foundation for Biomedical Research and Innovation, Kobe, Japan
- Tatsuo Kagimura
- Translational Research Informatics Center, Foundation for Biomedical Research and Innovation, Kobe, Japan
- Chiaki Kudoh
- KUDOH CHIAKI Clinic for Neurosurgery and Neurology, Tokyo, Japan
- Akinori Ueki
- Ueki Dementia and Geriatric Psychiatry Clinic, Nishinomiya, Japan
- Norifumi Tsuno
- Department of Neuropsychiatry, Kagawa University School of Medicine, Kagawa, Japan
- Takashi Ueda
- Medical Corporation Koujinkai, Ueda Neurosurgical Clinic, Miyazaki, Japan
- Masahiro Akishita
- Department of Geriatric Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yu Nakamura
- Department of Neuropsychiatry, Kagawa University School of Medicine, Kagawa, Japan

6
Dahmen J, Cook D, Fellows R, Schmitter-Edgecombe M. An analysis of a digital variant of the Trail Making Test using machine learning techniques. Technol Health Care 2017; 25:251-264. [PMID: 27886019 PMCID: PMC5384876 DOI: 10.3233/thc-161274]
Abstract
BACKGROUND The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. OBJECTIVE This paper introduces a novel digital version of the TMT and a machine learning-based approach to assess its capabilities. METHODS Using digital Trail Making Test (dTMT) data collected from older adult participants (N = 54) as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. RESULTS Predicted TMT scores correlate well with clinical digital test scores (r = 0.98) and paper time-to-completion scores (r = 0.65). Predicted TICS scores exhibited a small correlation with clinically derived TICS scores (r = 0.12 Part A, r = 0.10 Part B). Predicted FAB scores exhibited a small correlation with clinically derived FAB scores (r = 0.13 Part A, r = 0.29 Part B). Digitally derived features were also used to predict diagnosis (AUC of 0.65). CONCLUSION Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT's additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone.
Affiliation(s)
- Jessamyn Dahmen
- School of Electrical Engineering and Computer Sciences, Washington State University, Pullman, WA, USA
- Diane Cook
- School of Electrical Engineering and Computer Sciences, Washington State University, Pullman, WA, USA
- Robert Fellows
- Department of Psychology, Washington State University, Pullman, WA, USA

7
Wouters H, Van Campen JPCM, Appels BA, Beijnen JH, Zwinderman AH, Van Gool WA, Schmand B. Individualized evaluation of cholinesterase inhibitors effects in dementia with adaptive cognitive testing. Int J Methods Psychiatr Res 2016; 25:190-8. [PMID: 26299847 PMCID: PMC6877216 DOI: 10.1002/mpr.1484]
Abstract
Computerized adaptive testing (CAT) of cognitive function selects, for every individual patient, only items of appropriate difficulty to estimate his or her level of cognitive impairment. CAT therefore has the potential to combine brevity with precision. We retrospectively examined the evaluation of treatment effects of cholinesterase inhibitors by CAT, using longitudinal data from 643 patients at a Dutch teaching hospital who were diagnosed with Alzheimer disease or Lewy body disease. The Cambridge Cognitive Examination (CAMCOG) was administered before treatment initiation and after intervals of six months of treatment. A previously validated CAT was simulated using 47 CAMCOG items. Results demonstrated that the CAT required a median number of 17 items (interquartile range 16-20), a corresponding 64% test reduction, to estimate patients' global cognitive impairment levels. At the same time, intraclass correlations between global cognitive impairment levels as estimated by CAT and those based on all 47 CAMCOG items ranged from 0.93 at baseline to 0.91-0.94 at follow-up measurements. Slightly more people had substantial decline on the original CAMCOG (31/285, 11%) than on the CAT (17/285, 6%). We conclude that CAT saves time, loses little precision, and therefore deserves a role in the evaluation of treatment effects in dementia.
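The core of a CAT like the one simulated here is the next-item rule: under a 2-parameter logistic (2PL) model, administer the unused item with maximum Fisher information at the current ability estimate, so difficulty stays matched to the patient. The sketch below is a generic illustration with invented item parameters, not the authors' simulation code.

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability of a correct response at ability theta,
    given discrimination a and difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1 - p)

def next_item(theta, items, used):
    """Pick the unused item index with maximum information at theta."""
    return max((i for i in range(len(items)) if i not in used),
               key=lambda i: item_information(theta, *items[i]))

# hypothetical (discrimination a, difficulty b) pairs for a tiny item bank
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 2.0)]
print(next_item(theta=0.4, items=bank, used={2}))  # item 0: most informative here
```

Stopping once the ability estimate is precise enough is what produces the reported test reduction: items far too easy or too hard for the patient carry little information, so they are simply never administered.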
Affiliation(s)
- Hans Wouters
- Department of Geriatric Medicine, Slotervaart Hospital, Amsterdam, The Netherlands
- Jos P C M Van Campen
- Department of Geriatric Medicine, Slotervaart Hospital, Amsterdam, The Netherlands
- Bregje A Appels
- Department of Medical Psychology and Hospital Psychiatry, Slotervaart Hospital, Amsterdam, The Netherlands
- Jos H Beijnen
- Department of Pharmacy & Pharmacology, Slotervaart Hospital, Amsterdam, The Netherlands
- Aeilko H Zwinderman
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, Amsterdam, The Netherlands
- Willem A Van Gool
- Department of Neurology, Academic Medical Centre, Amsterdam, The Netherlands
- Ben Schmand
- Department of Neurology, Academic Medical Centre, Amsterdam, The Netherlands; Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands

8
Adjorlolo S. Can Teleneuropsychology Help Meet the Neuropsychological Needs of Western Africans? The Case of Ghana. Appl Neuropsychol Adult 2015; 22:388-98. [PMID: 25719559 DOI: 10.1080/23279095.2014.949718]
Abstract
In Ghana, the services of psychologists, particularly clinical psychologists and neuropsychologists, remain largely inaccessible to a large proportion of those in need. Emphasis has been placed on "physical wellness," even among patients with cognitive and behavioral problems who need psychological attention. The small number of clinical psychologists and neuropsychologists, the deplorable state of road networks and transport systems, geopolitical factors, and a reliance on face-to-face delivery of neuropsychological services have further complicated the accessibility problem. One way of making neuropsychological services more widely available and accessible is to provide them through information communication technology, an approach often termed teleneuropsychology. Drawing on relevant literature, this article discusses how computerized neurocognitive assessment and videoconferencing could help in rendering clinical neuropsychological services to patients, particularly those in rural, underserved, and disadvantaged areas of Ghana. The article further proposes recommendations on how teleneuropsychology could be made achievable and sustainable in Ghana.
Affiliation(s)
- Samuel Adjorlolo
- Department of Psychology, Faculty of Social Studies, University of Ghana, Legon, Accra, Ghana

9
Abstract
OBJECTIVE This article is a review of computerized tests and batteries used in the cognitive assessment of older adults. METHOD A literature search on Medline followed by cross-referencing yielded a total of 76 citations. RESULTS Seventeen test batteries were identified and categorized according to their scope. Computerized adaptive testing (CAT) and the Cambridge Cognitive Examination CAT battery as well as 3 experimental batteries and an experimental test are discussed in separate sections. All batteries exhibit strengths associated with computerized testing such as standardization of administration, accurate measurement of many variables, automated record keeping, and savings of time and costs. Discriminant validity and test-retest reliability were well documented for most batteries while documentation of other psychometric properties varied. CONCLUSION The large number of available batteries can be beneficial to the clinician or researcher; however, care should be taken in order to choose the correct battery for each application.
Affiliation(s)
- Stelios Zygouris
- 3rd Department of Neurology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Magda Tsolaki
- 3rd Department of Neurology, Aristotle University of Thessaloniki, Thessaloniki, Greece

10
McGrory S, Doherty JM, Austin EJ, Starr JM, Shenkin SD. Item response theory analysis of cognitive tests in people with dementia: a systematic review. BMC Psychiatry 2014; 14:47. [PMID: 24552237 PMCID: PMC3931670 DOI: 10.1186/1471-244x-14-47]
Abstract
BACKGROUND Performance on psychometric tests is key to diagnosis and monitoring treatment of dementia. Results are often reported as a total score, but there is additional information in individual items of tests, which vary in their difficulty and discriminatory value. Item difficulty refers to the ability level at which the probability of responding correctly is 50%. Discrimination is an index of how well an item can differentiate between patients of varying levels of severity. Item response theory (IRT) analysis can use this information to examine and refine measures of cognitive functioning. This systematic review aimed to identify all published literature that had applied IRT to instruments assessing global cognitive function in people with dementia. METHODS A systematic review was carried out across Medline, Embase, PsycINFO, and CINAHL. Search terms relating to IRT and dementia were combined to find all IRT analyses of global functioning scales of dementia. RESULTS Of 384 articles identified, four studies met inclusion criteria, including a total of 2,920 people with dementia from six centers in two countries. These studies used three cognitive tests (MMSE, ADAS-Cog, BIMCT) and three IRT methods (Item Characteristic Curve analysis, Samejima's graded response model, the 2-Parameter Model). Memory items were most difficult. Naming the date in the MMSE and memory items, specifically word recall, of the ADAS-Cog were most discriminatory. CONCLUSIONS Four published studies were identified which used IRT on global cognitive tests in people with dementia. This technique increased the interpretative power of the cognitive scales and could be used to provide clinicians with key items from a larger test battery which would have high predictive value. There is need for further studies using IRT in a wider range of tests involving people with dementia of different etiology and severity.
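The difficulty and discrimination concepts defined above can be made concrete with the 2-parameter logistic model, one of the three IRT methods the review found: difficulty b is the ability level where the item characteristic curve crosses 50%, and discrimination a sets how steeply the curve rises around b. The parameter values below are hypothetical.

```python
import math

def icc(theta, a, b):
    """2-parameter logistic item characteristic curve:
    P(correct) as a function of ability theta."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# at theta == b the probability of a correct response is exactly 50%
print(icc(theta=1.0, a=2.0, b=1.0))  # 0.5

# a higher discrimination a separates nearby ability levels more sharply:
sharp = icc(0.5, a=2.0, b=0.0) - icc(-0.5, a=2.0, b=0.0)
flat  = icc(0.5, a=0.5, b=0.0) - icc(-0.5, a=0.5, b=0.0)
print(sharp > flat)  # True
```

This is why the review could single out word recall as "most discriminatory": its fitted a is high, so a small difference in severity produces a large difference in the probability of passing the item.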
Affiliation(s)
- Sarah McGrory
- Alzheimer Scotland Dementia Research Centre, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
- John M Starr
- Alzheimer Scotland Dementia Research Centre, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK; Geriatric Medicine, University of Edinburgh, Edinburgh, UK; Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK
- Susan D Shenkin
- Geriatric Medicine, University of Edinburgh, Edinburgh, UK; Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK

11
Nikolaus S, Bode C, Taal E, vd Laar MAFJ. Selection of items for a computer-adaptive test to measure fatigue in patients with rheumatoid arthritis: a Delphi approach. Qual Life Res 2012; 21:863-72. [PMID: 21805365 DOI: 10.1007/s11136-011-9982-8]
Abstract
PURPOSE Computer-adaptive tests (CATs) can measure precisely at individual level with few items selected from an item bank. Our aim was to select fatigue items to develop a CAT for rheumatoid arthritis (RA) and include expert opinions that are important for content validity of measurement instruments. METHODS Items were included from existing fatigue questionnaires and generated from interview material. In a Delphi procedure, rheumatologists, nurses, and patients evaluated the initial pool of 294 items. Items were selected for the CAT development if rated as adequate by at least 80% of the participants (when 50% or less agreed, they were excluded). Remaining items were adjusted based on participants' comments and re-evaluated in the next round. The procedure stopped when all items were selected or rejected. RESULTS A total of 10 rheumatologists, 20 nurses, and 15 rheumatoid arthritis patients participated. After the first round, 96 of 294 items were directly selected. Nine items were directly excluded, and remaining items were adjusted. In the second round, 124 items were presented for re-evaluation. Ultimately, 245 items were selected. CONCLUSION This study revealed a qualitatively evaluated item pool to be used for the item bank/CAT development. The Delphi procedure is a beneficial approach to select adequate items for measuring fatigue in RA.
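The selection rule described above (select at 80% or more agreement, exclude at 50% or less, otherwise revise and re-evaluate) can be sketched as a simple classifier over rating tallies. Item names and counts below are hypothetical; n_raters=45 matches the 10 rheumatologists, 20 nurses, and 15 patients in the study.

```python
def delphi_select(ratings, n_raters, accept=0.80, reject=0.50):
    """Classify each item by the fraction of raters judging it adequate:
    'select' at >= accept, 'exclude' at <= reject, else 'revise' for the
    next Delphi round."""
    out = {}
    for item, n_adequate in ratings.items():
        frac = n_adequate / n_raters
        out[item] = ("select" if frac >= accept
                     else "exclude" if frac <= reject
                     else "revise")
    return out

print(delphi_select({"item1": 40, "item2": 20, "item3": 30}, n_raters=45))
```

The procedure terminates when the "revise" bucket is empty, which is exactly the stopping condition the abstract describes.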
Affiliation(s)
- Stephanie Nikolaus
- IBR Research Institute for Social Sciences and Technology, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands

12
Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Clin Neuropsychol 2012; 26:177-96. [PMID: 22394228 PMCID: PMC3847815 DOI: 10.1080/13854046.2012.663001]
Abstract
This joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology sets forth our position on appropriate standards and conventions for computerized neuropsychological assessment devices (CNADs). In this paper, we first define CNADs and distinguish them from examiner-administered neuropsychological instruments. We then set forth position statements on eight key issues relevant to the development and use of CNADs in the healthcare setting. These statements address (a) device marketing and performance claims made by developers of CNADs; (b) issues involved in appropriate end-users for administration and interpretation of CNADs; (c) technical (hardware/software/firmware) issues; (d) privacy, data security, identity verification, and testing environment; (e) psychometric development issues, especially reliability and validity; (f) cultural, experiential, and disability factors affecting examinee interaction with CNADs; (g) use of computerized testing and reporting services; and (h) the need for checks on response validity and effort in the CNAD environment. This paper is intended to provide guidance for test developers and users of CNADs that will promote accurate and appropriate use of computerized tests in a way that maximizes clinical utility and minimizes risks of misuse. The positions taken in this paper are put forth with an eye toward balancing the need to make validated CNADs accessible to otherwise underserved patients with the need to ensure that such tests are developed and utilized competently, appropriately, and with due concern for patient welfare and quality of care.
13
Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Arch Clin Neuropsychol 2012; 27:362-73. [PMID: 22382386 DOI: 10.1093/arclin/acs027]
Abstract
This joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology sets forth our position on appropriate standards and conventions for computerized neuropsychological assessment devices (CNADs). In this paper, we first define CNADs and distinguish them from examiner-administered neuropsychological instruments. We then set forth position statements on eight key issues relevant to the development and use of CNADs in the healthcare setting. These statements address (a) device marketing and performance claims made by developers of CNADs; (b) issues involved in appropriate end-users for administration and interpretation of CNADs; (c) technical (hardware/software/firmware) issues; (d) privacy, data security, identity verification, and testing environment; (e) psychometric development issues, especially reliability and validity; (f) cultural, experiential, and disability factors affecting examinee interaction with CNADs; (g) use of computerized testing and reporting services; and (h) the need for checks on response validity and effort in the CNAD environment. This paper is intended to provide guidance for test developers and users of CNADs that will promote accurate and appropriate use of computerized tests in a way that maximizes clinical utility and minimizes risks of misuse. The positions taken in this paper are put forth with an eye toward balancing the need to make validated CNADs accessible to otherwise underserved patients with the need to ensure that such tests are developed and utilized competently, appropriately, and with due concern for patient welfare and quality of care.
|
14
|
Konsztowicz S, Xie H, Higgins J, Mayo N, Koski L. Development of a method for quantifying cognitive ability in the elderly through adaptive test administration. Int Psychogeriatr 2011; 23:1116-23. [PMID: 21457610 DOI: 10.1017/s1041610211000615] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
BACKGROUND The field of geriatric medicine has identified a need for an evaluative tool that can rapidly quantify global cognitive ability and accurately monitor change over time in patients with a wide range of impairments. We hypothesized that the development of an adaptive test approach to cognitive measurement would help to meet that need. This study aimed to provide evidence for the interpretability of scores obtained from a novel, adaptive approach to cognitive assessment, called the Geriatric Rapid Adaptive Cognitive Estimate (GRACE) method. METHODS An adaptive method for cognitive assessment was developed using data from 185 patients referred for geriatric cognitive assessment, and pilot tested in an additional 137 patients. Correlations between test scores and between rank orders of patients were computed to examine the reliability and validity of cognitive ability scores obtained by (1) administering test questions out of their usual order, (2) administering only a subset of questions, and (3) administering questions adaptively using simplified selection rules based on the most difficult question passed. RESULTS Cognitive ability scores obtained with the GRACE method correlated highly with the Montreal Cognitive Assessment (MoCA) scores (r = 0.93) and ranked patients similarly in order of ability (r > 0.87). A simplified adaptive testing algorithm for pencil-and-paper assessment demonstrated moderately high correlations with scores obtained from administering the full set of MMSE and MoCA items as well as the MoCA items alone. CONCLUSIONS Scores from the GRACE method can be obtained easily in 5-10 minutes, reducing test burden. The resulting numeric score quantifies cognitive ability, allowing clinicians to compare patients and monitor change in global cognition over time. The adaptive nature of this method allows for evaluation of persons across a broader range of cognitive ability levels than currently available tests.
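The "simplified selection rules based on the most difficult question passed" described above can be pictured as an adaptive traversal of a difficulty-ordered item bank. The sketch below is a hypothetical illustration of that idea, not the published GRACE algorithm: the item names, difficulty scale, and binary-search-style rule are all assumptions introduced for the example.

```python
# Hypothetical sketch of a simplified adaptive selection rule in the spirit
# of the GRACE method: items are ordered by difficulty, and the next item
# administered depends on whether the examinee passed the current one.
# The item bank and scoring scale here are illustrative, not the GRACE set.

def administer_adaptive(items, responder):
    """items: list of (name, difficulty) pairs sorted easiest-to-hardest.
    responder: callable(name) -> bool, True if the examinee passes the item.
    Returns the difficulty of the hardest item passed (an ability estimate),
    or None if no item was passed."""
    lo, hi = 0, len(items) - 1
    hardest_passed = None
    # Binary-search-style traversal: a pass moves toward harder items,
    # a failure moves toward easier ones, halving the range each step,
    # so only about log2(N) of the N items are administered.
    while lo <= hi:
        mid = (lo + hi) // 2
        name, difficulty = items[mid]
        if responder(name):
            hardest_passed = difficulty
            lo = mid + 1
        else:
            hi = mid - 1
    return hardest_passed

# Illustrative use: a simulated examinee who passes every item with
# difficulty 6 or lower receives an ability estimate of 6 after only
# four items, rather than all ten.
bank = [(f"item{i}", i) for i in range(1, 11)]
estimate = administer_adaptive(bank, lambda name: int(name[4:]) <= 6)
```

Administering a logarithmic rather than linear number of items is what lets such a rule cut testing time in the way the abstract reports (5-10 minutes), while the returned difficulty level serves as the numeric ability score.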
Affiliation(s)
- Susanna Konsztowicz
- Integrated Program in Neuroscience, Department of Neurology and Neurosurgery, McGill University, Canada
|
15
|
Koski L, Brouillette MJ, Lalonde R, Hello B, Wong E, Tsuchida A, Fellows LK. Computerized testing augments pencil-and-paper tasks in measuring HIV-associated mild cognitive impairment*. HIV Med 2011; 12:472-80. [DOI: 10.1111/j.1468-1293.2010.00910.x] [Citation(s) in RCA: 59] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|