1. Bigler ED, Allder S, Dunkley BT, Victoroff J. What traditional neuropsychological assessment got wrong about mild traumatic brain injury. IV: clinical applications and future directions. Brain Inj 2025:1-17. PMID: 40181291. DOI: 10.1080/02699052.2025.2486462.
Abstract
PRIMARY OBJECTIVE Part IV concludes this four-part review of 'What Traditional Neuropsychological Assessment Got Wrong About Mild Traumatic Brain Injury' with a focus on clinical applications and future directions. METHODS AND PROCEDURES These reviews have highlighted the limitations of traditional neuropsychological assessment methods in evaluating the patient with mild traumatic brain injury (mTBI), especially in the context of 21st-century advances in neuroimaging, quantification, and network neuroscience. MAIN OUTCOME AND RESULTS The review covers how advanced neuroimaging technology and contemporary network neuroscience can currently be applied to assessing the mTBI patient, along with the neuroimaging methods of the future. The current status of computerized neuropsychological test (CNT) development is reviewed as it applies to mTBI assessment, as is the potential for virtual reality (VR), artificial intelligence (AI), wearable sensors, and markerless gaming technology to expand the mTBI CNT assessment toolbox. CONCLUSIONS The review closes with aspirational statements about how improved and novel CNT methods could be developed and integrated with advanced neuroimaging technologies, tailored to meet the needs of the mTBI patient.
Affiliation(s)
- Erin D Bigler
- Department of Psychology and Neuroscience Center, Brigham Young University, Provo, Utah, USA
- Departments of Neurology and Psychiatry, University of Utah, Salt Lake City, Utah, USA
- Steven Allder
- Consultant Neurologist and Clinical Director, Re:Cognition Health, London, UK
- Jeff Victoroff
- Department of Neurology, University of Southern California, Los Angeles, California, USA
2. Craig SN, Dempster M, Curran D, Cuddihy AM, Lyttle N. A systematic review of the effectiveness of digital cognitive assessments of cognitive impairment in Parkinson's disease. Appl Neuropsychol Adult 2025:1-13. PMID: 39891618. DOI: 10.1080/23279095.2025.2454983.
Abstract
Background: Digitalization in healthcare has extended to how we examine and manage Parkinson's disease mild cognitive impairment (PD-MCI). Methods: The Moyer Population (people with PD and, in some cases, control groups), Intervention (digital cognitive test), and Outcome (validity and reliability) (PIO) framework and Campbell et al.'s Synthesis Without Meta-analysis (SWiM) method were employed. A literature search of MEDLINE, PsycINFO, CINAHL, OpenGrey, and ProQuest Theses and Dissertations identified articles for screening. Results: The digital trail-making test (dTMT) was the most used measure. Validity was strong between the dTMT and pencil-and-paper TMT, Mini-Mental State Examination (MMSE), and Montreal Cognitive Assessment (MoCA) scores (r = .55 to .90, p < .001). Validity between the pencil-and-paper and digital versions of the TMT was adequate (r = .51 to .90, p < .001). Reliability was demonstrated between PD and control groups' scores (r = .71 to .87), and one study found excellent inter-rater reliability (ICC = .90 to .95). The dMoCA was the most used screen assessing more than two cognitive domains. Agreement between its digital and pencil-and-paper versions varied in strength (ICC = .37 to .83), and only one study demonstrated adequate validity (r = .59, p < .001). Poor internal consistency (α = .54) and poor test-retest reliability (between PD and control groups' scores, p > .05) were found. Conclusion: This review found that digitalized cognitive tests are valid and reliable methods to assess PD-MCI. Considerations for future research are discussed.
Affiliation(s)
- Saskia N Craig
- Department of Clinical Psychology, Queen's University, Belfast, Northern Ireland
- Martin Dempster
- Department of Psychology Applied to Health & Illness, Queen's University, Belfast, Northern Ireland
- David Curran
- Department of Clinical Psychology, Queen's University, Belfast, Northern Ireland
- Aoife M Cuddihy
- Department of Clinical Psychology, Queen's University, Belfast, Northern Ireland
- Nigel Lyttle
- Department of Clinical Neuropsychology, Royal Victoria Hospital Belfast, Belfast, Northern Ireland
3. Jokinen H, Laakso HM, Arola A, Paajanen TI, Virkkala J, Särkämö T, Makkonen T, Kyläheiko I, Heinonen H, Pitkänen J, Korvenoja A, Melkas S. Executive functions and processing speed in covert cerebral small vessel disease. Eur J Neurol 2025; 32:e16533. PMID: 39475227. PMCID: PMC11622512. DOI: 10.1111/ene.16533.
Abstract
BACKGROUND AND PURPOSE Executive dysfunction and slowed processing speed are central cognitive impairments in cerebral small vessel disease (cSVD). It is unclear whether the subcomponents of executive function are equally affected and whether computerized tests are more sensitive than traditional tests in detecting early cognitive changes. The associations of specific executive abilities (cognitive flexibility, inhibitory control, working memory) and processing speed with white matter hyperintensities (WMHs) and Instrumental Activities of Daily Living (IADL) were examined. METHODS In the Helsinki Small Vessel Disease Study, 152 older individuals without stroke or dementia were assessed with brain magnetic resonance imaging and comprehensive neuropsychological evaluation. WMH volumes were obtained with automated segmentation. Executive function and processing speed measures included established paper-and-pencil tests and the computer-based Flexible Attention Test (FAT), Simon task, and Sustained Attention to Response Task. RESULTS WMH volume and IADL were associated with multiple cognitive measures across subdomains, independently of demographic factors. The largest effect sizes were observed for the FAT numbers and number-letter tasks (tablet modifications of the Trail Making Test), the FAT visuospatial span, the Simon task, and semantic verbal fluency. Some widely used tests, such as Stroop inhibition, phonemic fluency, and digit span, were not significantly associated with either WMHs or IADL. CONCLUSION Processing speed and executive function subcomponents are broadly related to functional abilities and WMH severity in covert cSVD, but the strength of associations within subdomains depends heavily on the assessment method. Digital tests providing precise measures of reaction times and response accuracy seem to outperform many conventional paper-and-pencil tests.
Affiliation(s)
- Hanna Jokinen
- Division of Neuropsychology, Neurocenter, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Hanna M. Laakso
- Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Anne Arola
- Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Teemu I. Paajanen
- Work Ability and Working Careers Unit, Finnish Institute of Occupational Health, Helsinki, Finland
- Jussi Virkkala
- Department of Neurophysiology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Tommi Makkonen
- Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Iiris Kyläheiko
- Department of Psychology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Heidi Heinonen
- Division of Neuropsychology, Neurocenter, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Johanna Pitkänen
- Department of Neurology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Antti Korvenoja
- Department of Radiology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Susanna Melkas
- Department of Neurology, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
4. Klaming L, Spaltman M, Vermeent S, van Elswijk G, Miller JB, Schmand B. Test-retest reliability and reliable change index of the Philips IntelliSpace Cognition digital test battery. Clin Neuropsychol 2024; 38:1707-1725. PMID: 38360593. DOI: 10.1080/13854046.2024.2315747.
Abstract
OBJECTIVE This article provides the test-retest reliability and Reliable Change Indices (RCIs) of the Philips IntelliSpace Cognition (ISC) platform, which contains digitized versions of well-established neuropsychological tests. METHOD 147 participants (ages 19 to 88) completed a digital cognitive test battery on the ISC platform or paper-pencil versions of the same battery during two separate visits. Intraclass correlation coefficients (ICCs) were calculated separately for the ISC and analog test versions to compare reliabilities between administration modalities. RCIs were calculated for the digital tests using the practice-adjusted RCI and standardized regression-based (SRB) methods. RESULTS Test-retest reliabilities for the ISC tests ranged from moderate to excellent and were comparable to those of the paper-pencil tests. Baseline test performance, retest interval, age, and education predicted test performance at visit 2, with baseline performance the strongest predictor for all outcome measures. For most outcome measures, the two RCI methods agreed on whether a reliable change had occurred. CONCLUSIONS RCIs for the digital tests enable clinicians to determine whether a measured change between assessments reflects real improvement or decline. Together, these findings add to the growing evidence for the clinical utility of the ISC platform.
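The two RCI methods named in this abstract can be sketched as follows, using the textbook formulas (practice-adjusted RCI per Chelune and colleagues; SRB per the standardized regression-based tradition). All numbers are hypothetical, not ISC norms:

```python
import math

def practice_adjusted_rci(x1, x2, mean_practice, sd1, r12):
    """Practice-adjusted RCI: retest change minus the mean practice
    effect, divided by the standard error of the difference."""
    sem = sd1 * math.sqrt(1 - r12)   # standard error of measurement
    se_diff = math.sqrt(2) * sem     # standard error of the difference
    return (x2 - x1 - mean_practice) / se_diff

def srb_z(x2, predicted_x2, se_estimate):
    """Standardized regression-based (SRB) change score: observed retest
    score versus the score predicted from baseline (and covariates)."""
    return (x2 - predicted_x2) / se_estimate

# Hypothetical case: baseline 45, retest 52, mean practice gain 3,
# normative SD 10, test-retest reliability .80.
z = practice_adjusted_rci(45, 52, 3, 10, 0.80)
print(f"practice-adjusted RCI z = {z:.2f}")  # |z| > 1.96 would flag reliable change
```

In the SRB variant, `predicted_x2` and `se_estimate` come from a regression of visit-2 scores on baseline score, retest interval, age, and education in a reference sample, mirroring the predictors listed above.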
Affiliation(s)
- Laura Klaming
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Mandy Spaltman
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Stefan Vermeent
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Gijs van Elswijk
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Justin B Miller
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, USA
- Ben Schmand
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
5. Muner LC, Rodrigues JDC, Becker N. Adaptation of the Cognitive Screening Test (Triagem Cognitiva - TRIACOG) for computer-mediated assessments: TRIACOG-Online. Appl Neuropsychol Adult 2024:1-9. PMID: 39227317. DOI: 10.1080/23279095.2024.2398118.
Abstract
This study presents the adaptation, content validity evidence, and pilot results of the Cognitive Screening Test - Online (TRIACOG-Online) in a clinical sample of patients after stroke. The process comprised four stages: 1) adaptation of the instructions, stimuli, and responses; 2) analysis by seven experts of the equivalence between the printed and online versions; 3) a pilot study with seven adults who had experienced a stroke, to check the comprehension and feasibility of the items; and 4) development of the final version of TRIACOG-Online. Expert review yielded a content validity index (CVI) of 100% for correspondence and construct in 13 items and a CVI of 87.71% in four items. In the pilot study, internet connectivity problems led to the decision to use a single-section form. No difficulties were observed in carrying out the tasks or understanding the instructions, and participants reported being able to visualize the stimuli adequately and to remain motivated throughout. TRIACOG-Online was shown to evaluate the same constructs as the pencil-and-paper version and can be used in both remote and face-to-face neuropsychological assessments.
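The item-level content validity index reported here is simply the proportion of expert raters judging an item adequate. A minimal sketch with a hypothetical seven-expert panel (the rating scale and data are illustrative, not from the study):

```python
def item_cvi(ratings, adequate=(3, 4)):
    """Item-level content validity index: the share of experts rating
    the item as adequate (e.g. 3 or 4 on a 4-point relevance scale)."""
    return sum(r in adequate for r in ratings) / len(ratings)

# Hypothetical panel of seven experts, as in the study's second stage.
all_agree = [4, 4, 3, 4, 3, 4, 4]    # every expert rates the item adequate
one_dissent = [4, 4, 3, 4, 2, 4, 3]  # one expert rates the item inadequate
print(item_cvi(all_agree), round(item_cvi(one_dissent), 3))
```

With seven raters, one dissent yields an I-CVI of 6/7 ≈ 0.857; thresholds of 0.78 or higher are commonly treated as acceptable for panels of this size.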
6. Harris C, Tang Y, Birnbaum E, Cherian C, Mendhe D, Chen MH. Digital neuropsychology beyond computerized cognitive assessment: applications of novel digital technologies. Arch Clin Neuropsychol 2024; 39:290-304. PMID: 38520381. PMCID: PMC11485276. DOI: 10.1093/arclin/acae016.
Abstract
Compared with other health disciplines, the field of clinical neuropsychology has stagnated in technological innovation. Traditional paper-and-pencil tests have a number of shortcomings, such as low-frequency data collection and limited ecological validity. While computerized cognitive assessment may help overcome some of these issues, current computerized paradigms do not address most of these limitations. In this paper, we review recent literature on the applications of novel digital health approaches, including ecological momentary assessment, smartphone-based assessment and sensors, wearable devices, passive driving sensors, smart homes, voice biomarkers, and electronic health record mining, in neurological populations. We describe how each digital tool may be applied to neurologic care and may overcome the limitations of traditional neuropsychological assessment. Ethical considerations, limitations of current research, and our proposed future of neuropsychological practice are also discussed.
Affiliation(s)
- Che Harris
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA
- Yingfei Tang
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA
- Eliana Birnbaum
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Christine Cherian
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Dinesh Mendhe
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Michelle H Chen
- Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA
- Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA
7. Stoll SEM, Bauer I, Hopfer K, Lamberty J, Lunz V, Guzmán Bausch F, Höflacher C, Kroliczak G, Kalénine S, Randerath J. Diagnosing homo digitalis: towards a standardized assessment for digital tool competencies. Front Psychol 2024; 14:1270437. PMID: 38239458. PMCID: PMC10794727. DOI: 10.3389/fpsyg.2023.1270437.
Abstract
Introduction In the 21st century, digital devices have become integral to our daily lives, yet practical assessments designed to evaluate an individual's digital tool competencies are still absent. The present study introduces the "Digital Tools Test" ("DIGI"), designed to evaluate proficiency in handling common applications and functions of smartphones and tablets. The DIGI has been tailored primarily for prospective use among older adults and neurological patients, the latter frequently suffering from so-called apraxia, which potentially also affects the handling of digital tools. Like traditional tool use tests that assess tool-selection and tool-action processes, the DIGI evaluates an individual's ability to select an appropriate application for a given task (e.g., creating a new contact), to navigate within the chosen application, and to execute precise and accurate movements, such as swiping. Methods We tested the implementation of the DIGI in a group of 16 healthy adults aged 18 to 28 years and 16 healthy adults aged 60 to 74 years. All participants were able to tolerate the assessment and reported good acceptance. Results The results revealed a significant performance disparity, with older adults displaying notably lower proficiency on the DIGI. The DIGI performance of older adults correlated with their ability to employ a set of novel mechanical tools, but not with their ability to handle a set of familiar common tools; there was no such correlation in the younger group. Conclusion This study introduces an innovative assessment tool for evaluating common digital tool competencies. Our preliminary results demonstrate good acceptance and reveal the expected group differences. For current cohorts of older adults, the results suggest that the ability to use novel tools may aid digital tool use. As a next step, the psychometric properties of the DIGI should be evaluated in larger and more diverse samples. Advancing digital tool competency assessments and rehabilitation strategies is essential if we aim to facilitate societal inclusion and participation for affected populations.
Affiliation(s)
- Sarah E. M. Stoll
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Lurija Institute for Rehabilitation Science and Health Research, Kliniken Schmieder, Allensbach, Germany
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Isabel Bauer
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Lurija Institute for Rehabilitation Science and Health Research, Kliniken Schmieder, Allensbach, Germany
- Karen Hopfer
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Judith Lamberty
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Verena Lunz
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Cosima Höflacher
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Gregory Kroliczak
- Cognitive Neuroscience Center, Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Poznan, Poland
- Department of Clinical Neuropsychology, Nicolaus Copernicus University in Toruń Collegium Medicum, Bydgoszcz, Poland
- Solène Kalénine
- Sciences Cognitives et Sciences Affectives, University of Lille, Lille, France
- Jennifer Randerath
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Lurija Institute for Rehabilitation Science and Health Research, Kliniken Schmieder, Allensbach, Germany
- Outpatient Unit for Research, Teaching, and Practice, Faculty of Psychology, University of Vienna, Vienna, Austria
8. Serra-Blasco M, Souto-Sampera A, Medina JC, Flix-Valle A, Ciria-Suarez L, Arizu-Onassis A, Ruiz-Romeo M, Jansen F, Rodríguez A, Pernas S, Ochoa-Arnedo C. Cognitive-enhanced eHealth psychosocial stepped intervention for managing breast cancer-related cognitive impairment: protocol for a randomized controlled trial. Digit Health 2024; 10:20552076241257082. PMID: 39070895. PMCID: PMC11273701. DOI: 10.1177/20552076241257082.
Abstract
Introduction Breast cancer often leads to cancer-related cognitive impairment (CRCI), which includes both objective and subjective cognitive deficits. While psychosocial interventions benefit quality of life and reduce distress, their impact on cognitive deficits is uncertain. This study evaluates the integration of a cognitive module into a digital psychosocial intervention for breast cancer patients. Methods In this randomized controlled trial (RCT), 88 recently diagnosed breast cancer (BC) patients will receive the ICOnnecta't program (control group), a digital stepped intervention addressing a variety of psychosocial needs. The experimental group (n = 88) will receive ICOnnecta't plus a cognitive module. Assessments at baseline, 3, 6, and 12 months will measure the interventions' impact on cognition, emotional distress, medication adherence, quality of life, post-traumatic stress, work functioning, and healthcare experience. Feasibility and cost-utility analyses will also be conducted. Results The cognitive module includes three levels. The first level contains a cognitive screening using the FACT-Cog Perceived Cognitive Impairment (PCI) scale. Patients with PCI <54 progress to a cognitive psychoeducational campus (Level 2) with content on cognitive education, behavioural strategies, and mindfulness. Patients with persistent or worsened PCI (≥6) after 3 months move to Level 3, online cognitive training through CogniFit software delivered twice a week over 12 weeks. Conclusions This study assesses whether integrating a cognitive module into a digital psychosocial intervention improves objective and subjective cognition in breast cancer patients. Secondary outcomes explore the impact of cognitive improvement on psychosocial variables. The research will contribute to testing efficacious approaches for detecting and addressing cognitive dysfunction in breast cancer patients. Trial registration ClinicalTrials.gov NCT06103318, registered 26 October 2023, https://classic.clinicaltrials.gov/ct2/show/NCT06103318?term=serra-blasco&draw=2&rank=4.
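The three-level stepping rule described in the Results can be expressed as a small triage function. This is our reading of the published thresholds (screening PCI below 54 steps up to Level 2; a still-low or worsened-by-6-points PCI at 3 months steps up to Level 3); the function name and return convention are ours, not the trial's:

```python
def cognitive_step(pci_baseline, pci_3_months=None):
    """Sketch of the protocol's stepping logic on FACT-Cog PCI scores.

    Returns the highest level the patient is triaged to: 1 = screening
    only, 2 = psychoeducational campus, 3 = online cognitive training.
    """
    if pci_baseline >= 54:
        return 1  # no perceived cognitive impairment at screening
    if pci_3_months is None:
        return 2  # PCI < 54: enter the cognitive psychoeducational campus
    # Persistent (still < 54) or worsened (dropped >= 6 points) PCI
    # after 3 months escalates to the 12-week CogniFit training.
    if pci_3_months < 54 or pci_baseline - pci_3_months >= 6:
        return 3
    return 2

print(cognitive_step(60), cognitive_step(50), cognitive_step(50, 46))
```

A patient screening at 60 stays at Level 1; one screening at 50 enters Level 2; one still at 46 after three months escalates to Level 3.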
Affiliation(s)
- Maria Serra-Blasco
- ICOnnecta’t Digital Health Program, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Mental Health Networking Biomedical Research Centre (CIBERSAM), Carlos III Health Institute, Barcelona, Spain
- Arnau Souto-Sampera
- ICOnnecta’t Digital Health Program, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Clinical Psychology and Psychobiology, Universitat de Barcelona, Barcelona, Spain
- Joan C. Medina
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Psychology and Education Sciences, Universitat Oberta de Catalunya, Barcelona, Spain
- Aida Flix-Valle
- ICOnnecta’t Digital Health Program, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Clinical Psychology and Psychobiology, Universitat de Barcelona, Barcelona, Spain
- Laura Ciria-Suarez
- ICOnnecta’t Digital Health Program, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Alejandra Arizu-Onassis
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Clinical Psychology and Psychobiology, Universitat de Barcelona, Barcelona, Spain
- Marina Ruiz-Romeo
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Clinical Psychology and Psychobiology, Universitat de Barcelona, Barcelona, Spain
- Femke Jansen
- Department of Otolaryngology-Head and Neck Surgery, Amsterdam UMC, VUmc Cancer Center Amsterdam, Amsterdam, The Netherlands
- Cancer Center Amsterdam (CCA), Amsterdam UMC, Amsterdam, The Netherlands
- Ana Rodríguez
- Breast Cancer Functional Unit, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- Sonia Pernas
- Breast Cancer Functional Unit, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- Cristian Ochoa-Arnedo
- ICOnnecta’t Digital Health Program, Catalan Institute of Oncology, Hospitalet del Llobregat, Spain
- The Bellvitge Biomedical Research Institute IDIBELL, Psychooncology and Digital Health Group, Hospitalet del Llobregat, Spain
- Department of Clinical Psychology and Psychobiology, Universitat de Barcelona, Barcelona, Spain
9. Christianson K, Prabhu M, Popp ZT, Rahman MS, Drane J, Lee M, Lathan C, Lin H, Au R, Sunderaraman P, Hwang PH. Adherence type impacts completion rates of frequent mobile cognitive assessments among older adults with and without cognitive impairment. Res Sq [Preprint] 2023:rs.3.rs-3350075. PMID: 37841867. PMCID: PMC10571616. DOI: 10.21203/rs.3.rs-3350075/v1.
Abstract
Background Prior to a diagnosis of Alzheimer's disease, many individuals experience cognitive and behavioral fluctuations that are not detected during a single session of traditional neuropsychological assessment. Mobile applications now enable high-frequency cognitive data to be collected remotely, introducing new opportunities and challenges. Emerging evidence suggests cognitively impaired older adults are capable of completing mobile assessments frequently, but no study has examined whether completion rates vary by assessment frequency or adherence type. Methods Thirty-three older adults were recruited from the Boston University Alzheimer's Disease Research Center (mean age = 73.5 years; 27.3% cognitively impaired; 57.6% female; 81.8% White, 18.2% Black). Participants remotely downloaded and completed the DANA Brain Vital application on their own mobile devices throughout the study. The study schedule included seventeen assessments to be completed over the course of a year. Specific periods during which assessments were expected to be completed were defined as subsegments, while segments consisted of multiple subsegments. The first segment included three subsegments to be completed within one week; the second segment included weekly subsegments and spanned three weeks; and the third and fourth segments included monthly subsegments spanning five and six months, respectively. Three distinct adherence types (subsegment adherence, segment adherence, and cumulative adherence) were examined to determine how completion rates varied by assessment frequency and adherence type. Results Adherence type significantly affected whether completion rates declined. Under subsegment adherence, the completion rate declined significantly (p = 0.05) during the fourth segment; under segment adherence, no decline in completion rate was observed. Overall adherence rates increased as adherence parameters were broadened, from subsegment adherence (60.6%) to segment adherence (78.8%) to cumulative adherence (90.9%). Conclusions Older adults, including those with cognitive impairment, are able to complete remote cognitive assessments at high frequency, but may not necessarily adhere to prescribed schedules.
Affiliation(s)
- Rhoda Au
- Boston University School of Medicine
10. Chen L, Zhen W, Peng D. Research on digital tool in cognitive assessment: a bibliometric analysis. Front Psychiatry 2023; 14:1227261. PMID: 37680449. PMCID: PMC10482043. DOI: 10.3389/fpsyt.2023.1227261.
Abstract
Objective Research into new cognitive assessment tools has grown rapidly in recent years, sparking great interest among professionals, yet little literature reveals the current status and future trends of digital technology use in cognitive assessment. The aim of this study was to summarize the development of digital cognitive assessment tools through bibliometric methods. Methods We carried out a comprehensive search of the Web of Science Core Collection to identify relevant papers published in English between January 1, 2003, and April 3, 2023, using subject terms such as "digital," "computer," and "cognitive"; 13,244 related publications were collected. We then conducted the bibliometric analysis with the Bibliometrix R package, VOSviewer, and CiteSpace, identifying the most prominent countries, authors, institutions, and journals. Results 11,045 articles and 2,199 reviews were included in our analyses. The number of annual publications in this field is rising rapidly. The most productive countries, authors, and institutions were primarily located in economically developed regions, especially North America, Europe, and Australia, and research cooperation tended to occur in these areas as well. The application of digital technology in cognitive assessment attracted growing attention during the COVID-19 pandemic. Conclusion Digital technologies have had a great impact on cognitive assessment and health care, with substantial numbers of papers published in recent years. The findings indicate the great potential of digital technology in cognitive assessment.
Affiliation(s)
- Leian Chen
- China-Japan Friendship Hospital (Institute of Clinical Medical Sciences), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Department of Neurology, China-Japan Friendship Hospital, Beijing, China
| | - Weizhe Zhen
- Department of Neurology, China-Japan Friendship Hospital, Beijing, China
- Graduate School, Beijing University of Chinese Medicine, Beijing, China
| | - Dantao Peng
- China-Japan Friendship Hospital (Institute of Clinical Medical Sciences), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Department of Neurology, China-Japan Friendship Hospital, Beijing, China
- Graduate School, Beijing University of Chinese Medicine, Beijing, China
| |
Collapse
|
11
|
Vermeent S, Spaltman M, van Elswijk G, Miller JB, Schmand B. Philips IntelliSpace Cognition digital test battery: Equivalence and measurement invariance compared to traditional analog test versions. Clin Neuropsychol 2022; 36:2278-2299. [PMID: 34528868 DOI: 10.1080/13854046.2021.1974565] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: To collect evidence of validity for a selection of digital tests on the Philips IntelliSpace Cognition (ISC) platform. Method: A total of 200 healthy participants (age 50-80) completed both the ISC battery and an analog version of the battery during separate visits. The battery included the following screeners and cognitive tests: Mini-Mental State Examination (2nd edition), Clock Drawing Test, Trail-Making Test (TMT), Rey Auditory Verbal Learning Test (RAVLT), Rey-Osterrieth Complex Figure Test (ROCFT), Letter Fluency, Star Cancellation Test, and Digit Span Test. The ISC tests were administered on an iPad Pro and were automatically scored using designated algorithms. The analog tests were administered in line with existing guidelines and scored by trained neuropsychologists. Criterion validity was established through relative agreement coefficients and raw score equivalence tests. In addition, measurement invariance analysis was used to compare the factor structures of both versions. Finally, we explored effects of demographics and experience with digital devices on performance. Results: We found fair to excellent relative agreement between test versions. Absolute equivalence was found for the RAVLT, Letter Fluency, Star Cancellation Test, and Digit Span Test. Importantly, we demonstrated equal loadings of the digital and analog test versions on the same set of underlying cognitive domains. Demographic effects were mostly comparable between modalities, and people's experience with digital devices was found to influence performance only on TMT B. Conclusions: This study provides several sources of evidence for the validity of the ISC test battery, offering an important step in validating ISC for clinical use. Supplemental data for this article are available online at https://doi.org/10.1080/13854046.2021.1974565.
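The raw score equivalence testing mentioned in this abstract can be illustrated with a two one-sided tests (TOST) procedure on paired scores. The sketch below is a minimal illustration on simulated data, assuming an arbitrary equivalence margin of 2 score points; it is not the authors' actual analysis pipeline, and the sample values are invented.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, bound):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    Equivalence within +/- `bound` is supported when both one-sided
    p-values are small; the reported p is the larger of the two.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + bound) / se  # H0: mean difference <= -bound
    t_upper = (d.mean() - bound) / se  # H0: mean difference >= +bound
    p_lower = stats.t.sf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)

# Simulated paired scores: a digital version tracking the analog one closely
rng = np.random.default_rng(0)
analog = rng.normal(50, 10, size=200)
digital = analog + rng.normal(0, 2, size=200)
p = tost_paired(digital, analog, bound=2.0)  # hypothetical 2-point margin
print(p)
```

With near-identical simulated score pairs, the overall TOST p-value falls below .05, supporting equivalence within the chosen margin.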
Affiliation(s)
- Stefan Vermeent
  - Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Mandy Spaltman
  - Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Gijs van Elswijk
  - Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands
- Justin B Miller
  - Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV
- Ben Schmand
  - Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, The Netherlands

12
[Systematic review of cognitive tests validated and/or normed for the older French-speaking Canadian population]. Can J Aging 2022; 42:297-315. [PMID: 36120908 DOI: 10.1017/s0714980822000319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
It is essential to use cognitive tests that have been validated and that have reference norms for the target population, since cultural and linguistic differences between the validation or normative sample and the target population can affect the results. This systematic review aims to identify and describe the cognitive tests (including tests, questionnaires, and observation grids) validated and/or normed for the older French-speaking Canadian population. In total, 46 articles were selected. The review identifies 9 validated tests, 20 tests with reference norms, and 18 tests that are both validated and normed, covering the majority of cognitive domains (memory, attentional, executive, perceptual-motor, and language functions), with the exception of social cognition. Almost all of the samples were recruited in Quebec. The tests identified mostly show satisfactory psychometric indices, and their norms generally take age, sex, and education into account. This systematic review will help Canadian clinicians and researchers in aging make optimal choices of cognitive tests.
13
Marques-Costa C, Simões MR, Almiro PA, Prieto G, Salomé Pinho M. Integrating Technology in Neuropsychological Assessment. EUROPEAN PSYCHOLOGIST 2022. [DOI: 10.1027/1016-9040/a000484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Although neuropsychological assessments include some measures that are administered, scored, or interpreted using new technologies, most researchers in this area advocate that more technology should be integrated. The current situation in neuropsychological assessment may be conceptualized as a crisis triggering a paradigm shift, as there is some resistance to adopting more technology. In this paper, the context of the present crisis in neuropsychological assessment, the main obstacles, and new developments are discussed. An example of a new computerized assessment tool, the NIH Toolbox, is highlighted. Potential issues are also addressed: tablet-based assessment, illustrated with the older adult population, and how to ensure that data collected through these devices comply with the European General Data Protection Regulation (GDPR). Recommendations for research, test development, and clinical practice are also provided.
Affiliation(s)
- Catarina Marques-Costa
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - Center for Research in Neuropsychology and Cognitive and Behavioral Intervention (CINEICC), University of Coimbra, Portugal
  - Psychological Assessment and Psychometrics Laboratory (PsyAssessmentLab), University of Coimbra, Portugal
- Mário R. Simões
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - Center for Research in Neuropsychology and Cognitive and Behavioral Intervention (CINEICC), University of Coimbra, Portugal
  - Psychological Assessment and Psychometrics Laboratory (PsyAssessmentLab), University of Coimbra, Portugal
- Pedro A. Almiro
  - Center for Research in Neuropsychology and Cognitive and Behavioral Intervention (CINEICC), University of Coimbra, Portugal
  - Psychological Assessment and Psychometrics Laboratory (PsyAssessmentLab), University of Coimbra, Portugal
  - Research Centre for Psychology (CIP), Autonomous University Lisbon, Portugal
- Gerardo Prieto
  - Psychological Assessment and Psychometrics Laboratory (PsyAssessmentLab), University of Coimbra, Portugal
  - Faculty of Psychology, University of Salamanca, Spain
- Maria Salomé Pinho
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - Center for Research in Neuropsychology and Cognitive and Behavioral Intervention (CINEICC), University of Coimbra, Portugal
  - Memory, Language, and Executive Functions Laboratory, University of Coimbra, Portugal

14
Van Patten R. Introduction to the Special Issue - Neuropsychology from a distance: Psychometric properties and clinical utility of remote neurocognitive tests. J Clin Exp Neuropsychol 2022; 43:767-773. [PMID: 35133240 DOI: 10.1080/13803395.2021.2021645] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Ryan Van Patten
  - Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, RI, USA
  - Providence VA Medical Center, Providence, RI, USA

15
Leong V, Raheel K, Sim JY, Kacker K, Karlaftis VM, Vassiliu C, Kalaivanan K, Chen SHA, Robbins TW, Sahakian BJ, Kourtzi Z. A New Remote Guided Method for Supervised Web-Based Cognitive Testing to Ensure High-Quality Data: Development and Usability Study. J Med Internet Res 2022; 24:e28368. [PMID: 34989691 PMCID: PMC8778570 DOI: 10.2196/28368] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Revised: 06/25/2021] [Accepted: 07/27/2021] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND The global COVID-19 pandemic has triggered a fundamental reexamination of how human psychological research can be conducted safely and robustly in a new era of digital working and physical distancing. Online web-based testing has risen to the forefront as a promising solution for the rapid mass collection of cognitive data without requiring human contact. However, a long-standing debate exists over the data quality and validity of web-based studies. This study examines the opportunities and challenges afforded by the societal shift toward web-based testing and highlights an urgent need to establish a standard data quality assurance framework for online studies. OBJECTIVE This study aims to develop and validate a new supervised online testing methodology, remote guided testing (RGT). METHODS A total of 85 healthy young adults were tested on 10 cognitive tasks assessing executive functioning (flexibility, memory, and inhibition) and learning. Tasks were administered either face-to-face in the laboratory (n=41) or online using remote guided testing (n=44) and delivered using identical web-based platforms (Cambridge Neuropsychological Test Automated Battery, Inquisit, and i-ABC). Data quality was assessed using detailed trial-level measures (missed trials, outlying and excluded responses, and response times) and overall task performance measures. RESULTS The results indicated that, across all data quality and performance measures, RGT data were statistically equivalent to in-person data collected in the lab (P>.40 for all comparisons). Moreover, RGT participants outperformed the lab group on measured verbal intelligence (P<.001), which could reflect test environment differences, including possible effects of mask-wearing on communication.
CONCLUSIONS These data suggest that the RGT methodology could help ameliorate concerns regarding online data quality, particularly for studies involving high-risk or rare cohorts, and offer an alternative for collecting high-quality human cognitive data without requiring in-person physical attendance.
Affiliation(s)
- Victoria Leong
  - Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
  - Centre for Research and Development in Learning, Nanyang Technological University, Singapore, Singapore
  - Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Kausar Raheel
  - Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Jia Yi Sim
  - Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Kriti Kacker
  - Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Vasilis M Karlaftis
  - Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Chrysoula Vassiliu
  - Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Kastoori Kalaivanan
  - Centre for Research and Development in Learning, Nanyang Technological University, Singapore, Singapore
- S H Annabel Chen
  - Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
  - Centre for Research and Development in Learning, Nanyang Technological University, Singapore, Singapore
  - Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
  - National Institute of Education, Nanyang Technological University, Singapore, Singapore
- Trevor W Robbins
  - Department of Psychology, University of Cambridge, Cambridge, United Kingdom
  - Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, United Kingdom
- Barbara J Sahakian
  - Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, United Kingdom
  - Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- Zoe Kourtzi
  - Department of Psychology, University of Cambridge, Cambridge, United Kingdom

16
Wilson S, Milanini B, Javandel S, Nyamayaro P, Valcour V. Validity of Digital Assessments in Screening for HIV-Related Cognitive Impairment: a Review. Curr HIV/AIDS Rep 2021; 18:581-592. [PMID: 34820750 PMCID: PMC8612826 DOI: 10.1007/s11904-021-00585-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/13/2021] [Indexed: 10/31/2022]
Abstract
PURPOSE OF REVIEW While traditional neuropsychological tests are the gold standard in screening for HIV-related cognitive impairment, computerized neuropsychological assessment devices (CNADs) offer an alternative to these time- and resource-intensive batteries and may prove to be particularly useful for remote assessments or longitudinal monitoring. This review seeks to describe the benefits, limitations, and validity of CNADs in the evaluation of HIV-associated neurocognitive disorder (HAND). RECENT FINDINGS We identified eight CNADs that have undergone validity testing for cognitive impairment in the setting of HIV. Included among these are batteries that have been modeled after the traditional neuropsychological exam, as well as others that implement new technologies, such as simulated reality and daily ecological assessments in their testing. Currently, these digital batteries do not yet have the ability to supplant gold standard neuropsychological tests in screening for HAND. However, many have the potential to become effective clinical screening tools.
Affiliation(s)
- Samuel Wilson
  - Memory and Aging Center, Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, Suite 190, San Francisco, CA, 94158, USA
- Benedetta Milanini
  - Memory and Aging Center, Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, Suite 190, San Francisco, CA, 94158, USA
- Shireen Javandel
  - Memory and Aging Center, Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, Suite 190, San Francisco, CA, 94158, USA
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, CA, USA
- Primrose Nyamayaro
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, CA, USA
  - Faculty of Medicine and Health Sciences, University of Zimbabwe, Harare, Zimbabwe
- Victor Valcour
  - Memory and Aging Center, Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, Suite 190, San Francisco, CA, 94158, USA
  - Global Brain Health Institute, University of California, San Francisco, San Francisco, CA, USA

17
Wang T, Thielen H, De Preter E, Vangkilde S, Gillebert CR. Encouraging Digital Technology in Neuropsychology: The Theory of Visual Attention on Tablet Devices. Arch Clin Neuropsychol 2021; 36:1450–1464. [PMID: 33621327 DOI: 10.1093/arclin/acab007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/28/2021] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVE Visual attention helps us to selectively process relevant information and is crucial in our everyday interactions with the environment. Not surprisingly, it is one of the cognitive domains that is most frequently affected by acquired brain injury. Reliable assessment of attention deficits is pivotal to neuropsychological examination and helps to optimize individual rehabilitation plans. Compared with conventional pen-and-paper tests, computerized tasks borrowed from the field of experimental psychology bring many benefits, but lab-based experimental setups cannot be easily incorporated in clinical practice. Light-weight and portable mobile tablet devices may facilitate the translation of computerized tasks to clinical settings. One such task is based on the Theory of Visual Attention (TVA), a mathematical model of visual attention. TVA-based paradigms have been widely used to investigate several aspects of visual attention in both fundamental and clinical research, and include measures for general processing capacity as well as stimulus-specific attentional parameters. METHODS This article discusses the benefits of TVA-based assessments compared with frequently used neuropsychological tests of visual attention, and examines the reliability of a tablet-based TVA-based assessment in 59 neurologically healthy participants. RESULTS Pearson's correlations indicate that the tablet-based TVA assessment and the conventional lab-based TVA assessment have a comparable parallel-form (range: .67-.93), test-retest (range: .61-.78), and internal reliability (range: .56-.97). CONCLUSION Our results suggest that tablet-based TVA assessment may be a promising tool to acquire clinical measures of visual attention at low cost at the bedside of the patient.
Affiliation(s)
- Tianlu Wang
  - Department of Brain and Cognition, KU Leuven, Leuven, Belgium
- Hella Thielen
  - Department of Brain and Cognition, KU Leuven, Leuven, Belgium
- Erik De Preter
  - Department of Brain and Cognition, KU Leuven, Leuven, Belgium
- Signe Vangkilde
  - Department of Psychology, University of Copenhagen, Copenhagen, Denmark

18
Pitteri M, Dapor C, Ziccardi S, Guandalini M, Meggiato R, Calabrese M. A Videogame-Based Approach to Measuring Information Processing Speed in Multiple Sclerosis Patients. Games Health J 2021; 10:115-120. [PMID: 33818136 DOI: 10.1089/g4h.2020.0069] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Objective: Slowed information processing speed (IPS) is a biomarker of neuronal damage in patients with multiple sclerosis (pwMS). A focus on IPS might be the ideal solution for promptly detecting cognitive changes over time. We developed a tablet-based, homemade videogame to test the sensitivity of this device in measuring subclinical IPS in pwMS. Materials and Methods: Forty-three pwMS without cognitive impairment and 20 healthy controls (HCs) were administered the videogame task with a tablet. Response times (RTs) and accuracy were recorded. Results: PwMS (mean RTs = 505.5 ± 73.9 ms) were significantly slower than HCs (mean RTs = 462.3 ± 40.3 ms, P = 0.014) on the videogame task. A moderate but significant correlation (r = -0.35, P = 0.03) between mean RTs and the Symbol Digit Modalities Test was observed. Conclusion: Our videogame showed good sensitivity in measuring IPS in apparently cognitively normal pwMS. Computerized testing might be useful in screening for initial cognitive dysfunction that should be monitored as a marker of underlying disease progression. The IRB approval number is 2332CESC.
Affiliation(s)
- Marco Pitteri
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Caterina Dapor
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Stefano Ziccardi
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Maddalena Guandalini
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Riccardo Meggiato
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Massimiliano Calabrese
  - Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy

19
Aarsland D, Batzu L, Halliday GM, Geurtsen GJ, Ballard C, Ray Chaudhuri K, Weintraub D. Parkinson disease-associated cognitive impairment. Nat Rev Dis Primers 2021; 7:47. [PMID: 34210995 DOI: 10.1038/s41572-021-00280-3] [Citation(s) in RCA: 555] [Impact Index Per Article: 138.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 05/27/2021] [Indexed: 02/08/2023]
Abstract
Parkinson disease (PD) is the second most common neurodegenerative disorder, affecting >1% of the population ≥65 years of age and with a prevalence set to double by 2030. In addition to the defining motor symptoms of PD, multiple non-motor symptoms occur; among them, cognitive impairment is common and can potentially occur at any disease stage. Cognitive decline is usually slow and insidious, but rapid in some cases. Recently, the focus has been on the early cognitive changes, where executive and visuospatial impairments are typical and can be accompanied by memory impairment, increasing the risk for early progression to dementia. Other risk factors for early progression to dementia include visual hallucinations, older age and biomarker changes such as cortical atrophy, as well as Alzheimer-type changes on functional imaging and in cerebrospinal fluid, and slowing and frequency variation on EEG. However, the mechanisms underlying cognitive decline in PD remain largely unclear. Cortical involvement of Lewy body and Alzheimer-type pathologies are key features, but multiple mechanisms are likely involved. Cholinesterase inhibition is the only high-level evidence-based treatment available, but other pharmacological and non-pharmacological strategies are being tested. Challenges include the identification of disease-modifying therapies as well as finding biomarkers to better predict cognitive decline and identify patients at high risk for early and rapid cognitive impairment.
Affiliation(s)
- Dag Aarsland
  - Department of Old Age Psychiatry, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
  - Centre for Age-Related Medicine, Stavanger University Hospital, Stavanger, Norway
- Lucia Batzu
  - Parkinson's Foundation Centre of Excellence, King's College Hospital and Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Glenda M Halliday
  - Brain and Mind Centre and Faculty of Medicine and Health School of Medical Sciences, University of Sydney, Sydney, New South Wales, Australia
- Gert J Geurtsen
  - Amsterdam UMC, University of Amsterdam, Department of Medical Psychology, Amsterdam Neuroscience, Amsterdam, The Netherlands
- K Ray Chaudhuri
  - Parkinson's Foundation Centre of Excellence, King's College Hospital and Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Daniel Weintraub
  - Departments of Psychiatry and Neurology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
  - Parkinson's Disease Research, Education and Clinical Center (PADRECC), Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, PA, USA

20
Cyr AA, Romero K, Galin-Corini L. Web-Based Cognitive Testing of Older Adults in Person Versus at Home: Within-Subjects Comparison Study. JMIR Aging 2021; 4:e23384. [PMID: 33522972 PMCID: PMC8081157 DOI: 10.2196/23384] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 12/11/2020] [Accepted: 12/12/2020] [Indexed: 01/30/2023] Open
Abstract
Background Web-based research allows cognitive psychologists to collect high-quality data from a diverse pool of participants with fewer resources. However, web-based testing presents unique challenges for researchers and clinicians working with aging populations. Older adults may be less familiar with computer usage than their younger peers, leading to differences in performance when completing web-based tasks in their home versus in the laboratory under the supervision of an experimenter. Objective This study aimed to use a within-subjects design to compare the performance of healthy older adults on computerized cognitive tasks completed at home and in the laboratory. Familiarity and attitudes surrounding computer use were also examined. Methods In total, 32 community-dwelling healthy adults aged above 65 years completed computerized versions of the word-color Stroop task, paired associates learning, and verbal and matrix reasoning in 2 testing environments: at home (unsupervised) and in the laboratory (supervised). The paper-and-pencil neuropsychological versions of these tasks were also administered, along with questionnaires examining computer attitudes and familiarity. The order of testing environments was counterbalanced across participants. Results Analyses of variance conducted on scores from the computerized cognitive tasks revealed no significant effect of the testing environment and no correlation with computer familiarity or attitudes. These null effects were confirmed with follow-up Bayesian analyses. Moreover, performance on the computerized tasks correlated positively with performance on their paper-and-pencil equivalents. Conclusions Our findings show comparable performance on computerized cognitive tasks in at-home and laboratory testing environments. These findings have implications for researchers and clinicians wishing to harness web-based testing to collect meaningful data from older adult populations.
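The Bayesian follow-up analyses described above, which confirmed the null effect of testing environment, are typically run in dedicated software; a rough, commonly used shortcut is the BIC approximation to the Bayes factor. The sketch below applies it to made-up paired home-minus-lab score differences; the data and the specific comparison are illustrative assumptions, not this study's own computation.

```python
import math

def bic_bayes_factor_null(diffs):
    """Approximate Bayes factor favoring the null (zero mean difference)
    for paired differences, via the BIC approximation
    BF01 ~ exp((BIC_alt - BIC_null) / 2).
    """
    n = len(diffs)
    mean = sum(diffs) / n
    rss_null = sum(d ** 2 for d in diffs)          # model fixing the mean at 0
    rss_alt = sum((d - mean) ** 2 for d in diffs)  # model with a free mean
    bic_null = n * math.log(rss_null / n)
    bic_alt = n * math.log(rss_alt / n) + math.log(n)  # penalty: 1 extra parameter
    return math.exp((bic_alt - bic_null) / 2)

# Made-up home-minus-lab paired differences averaging exactly zero
diffs = [0.5, -0.5] * 16
bf01 = bic_bayes_factor_null(diffs)
print(round(bf01, 2))  # → 5.66, i.e. the data favor "no environment effect"
```

A BF01 above 3 is conventionally read as moderate evidence for the null, which is the kind of confirmation the abstract reports.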
Affiliation(s)
- Andrée-Ann Cyr
  - Department of Psychology, Glendon Campus, York University, Toronto, ON, Canada
- Kristoffer Romero
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laura Galin-Corini
  - Department of Psychology, Glendon Campus, York University, Toronto, ON, Canada

21
Young SR. Format Effects of iPad Administration of Wechsler Adult Intelligence Scale-Fourth Edition: Cross-Sectional Evidence for Score Equivalency in Routine Clinical Practice. Arch Clin Neuropsychol 2020; 35:1283-1287. [PMID: 32613223 DOI: 10.1093/arclin/acaa040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 05/18/2020] [Accepted: 06/10/2020] [Indexed: 11/12/2022] Open
Abstract
OBJECTIVE The literature lacks independent investigations of the influence of tablet administration of cognitive assessments in applied clinical settings. The present study examined the influence of iPad administration on Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) core subtest scores in a university-based clinic. METHOD Record review was conducted for a convenience sample (N = 66) of university students who were administered the WAIS-IV via iPad or traditional format. Bayesian difference testing was used to evaluate the strength of the evidence for subtest score equivalence across groups. RESULTS Evidence supported score equivalency for the 10 core subtests across administration groups (BF > 3). The one exception was the digit span-forward condition, for which evidence favored equivalence (BF = 2.44) but did not meet the cut-off criterion. CONCLUSIONS iPad administration of the WAIS-IV is unlikely to influence subtest scores in routine clinical practice with healthy young adults. Further independent research in diverse clinical populations is recommended.
Affiliation(s)
- Stephanie Ruth Young
  - Department of Educational Psychology, The University of Texas at Austin, Austin, TX, USA

22
Backx R, Skirrow C, Dente P, Barnett JH, Cormack FK. Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study. J Med Internet Res 2020; 22:e16792. [PMID: 32749999 PMCID: PMC7435628 DOI: 10.2196/16792] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Revised: 05/15/2020] [Accepted: 06/11/2020] [Indexed: 12/22/2022] Open
Abstract
Background Computerized assessments are already used to derive accurate and reliable measures of cognitive function. Web-based cognitive assessment could improve the accessibility and flexibility of research and clinical assessment, widen participation, and promote research recruitment while simultaneously reducing costs. However, differences in context may influence task performance. Objective This study aims to determine the comparability of an unsupervised, web-based administration of the Cambridge Neuropsychological Test Automated Battery (CANTAB) against a typical in-person lab-based assessment, using a within-subjects counterbalanced design. The study aims to test (1) reliability, quantifying the relationship between measurements across settings using correlational approaches; (2) equivalence, the extent to which test results in different settings produce similar overall results; and (3) agreement, quantifying acceptable limits to bias and differences between measurement environments. Methods A total of 51 healthy adults (32 women and 19 men; mean age 36.8, SD 15.6 years) completed 2 testing sessions, on average 1 week apart (SD 4.5 days). Assessments included equivalent tests of emotion recognition (emotion recognition task [ERT]), visual recognition (pattern recognition memory [PRM]), episodic memory (paired associate learning [PAL]), working memory and spatial planning (spatial working memory [SWM] and One Touch Stockings of Cambridge), and sustained attention (rapid visual information processing [RVP]). Participants were randomly allocated to one of two groups, either assessed in person in the laboratory first (n=33) or with unsupervised web-based assessments on their personal computing systems first (n=18). Performance indices (errors, correct trials, and response sensitivity) and median reaction times were extracted. Intraclass and bivariate correlations examined intersetting reliability, linear mixed models and Bayesian paired sample t tests tested for equivalence, and Bland-Altman plots examined agreement. Results Intraclass correlation (ICC) coefficients ranged from ρ=0.23-0.67, with high correlations for 3 performance indices (from the PAL, SWM, and RVP tasks; ρ≥0.60). High ICC values were also seen for reaction time measures from 2 tasks (PRM and ERT; ρ≥0.60). However, reaction times were slower during web-based assessments, which undermined both equivalence and agreement for reaction time measures. Performance indices did not differ between assessment settings and generally showed satisfactory agreement. Conclusions Our findings support the comparability of CANTAB performance indices (errors, correct trials, and response sensitivity) in unsupervised, web-based assessments with in-person laboratory tests. Reaction times are not as easily translatable from in-person to web-based testing, likely due to variations in computer hardware. The results underline the importance of examining more than one index to ascertain comparability, as high correlations can present in the context of systematic differences that are a product of differences between measurement environments. Further work is now needed to examine web-based assessments in clinical populations and in larger samples to improve sensitivity for detecting subtler differences between test settings.
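The Bland-Altman agreement analysis named in this abstract reduces to a few lines of arithmetic: compute the bias (mean difference between web and lab scores) and the 95% limits of agreement around it. The data below are simulated with a small built-in bias purely for illustration; they are not CANTAB measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two measurement settings.

    Returns the bias (mean of a - b) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences).
    """
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Simulated scores for 51 participants; web scores carry a small bias
rng = np.random.default_rng(1)
lab = rng.normal(20, 4, size=51)
web = lab + rng.normal(0.5, 1.0, size=51)
bias, lower, upper = bland_altman(web, lab)
print(bias, lower, upper)
```

If most paired differences fall inside the limits and the bias is acceptably small for the measure at hand, the two settings are judged to agree; systematic differences (like the slower web reaction times reported above) show up as a bias shifted away from zero.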
Affiliation(s)
- Rosa Backx
- Cambridge Cognition Ltd, Cambridge, United Kingdom
- Caroline Skirrow
- Cambridge Cognition Ltd, Cambridge, United Kingdom; School of Psychological Science, University of Bristol, Bristol, United Kingdom
- Jennifer H Barnett
- Cambridge Cognition Ltd, Cambridge, United Kingdom; Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
23
Vermeent S, Dotsch R, Schmand B, Klaming L, Miller JB, van Elswijk G. Evidence of Validity for a Newly Developed Digital Cognitive Test Battery. Front Psychol 2020; 11:770. [PMID: 32390918 PMCID: PMC7194127 DOI: 10.3389/fpsyg.2020.00770] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Accepted: 03/30/2020] [Indexed: 01/11/2023] Open
Abstract
Clinical practice still relies heavily on traditional paper-and-pencil testing to assess a patient’s cognitive functions. Digital technology has the potential to be an efficient and powerful alternative, but for many of the existing digital tests and test batteries the psychometric properties have not been properly established. We validated a newly developed digital test battery consisting of digitized versions of conventional neuropsychological tests. Two confirmatory factor analysis models were specified: a model based on traditional neuropsychological theory and expert consensus and one based on the Cattell-Horn-Carroll (CHC) taxonomy. For both models, the outcome measures of the digital tests loaded on the cognitive domains in the same way as established in the neuropsychological literature. Interestingly, no clear distinction could be made between the CHC model and traditional neuropsychological model in terms of model fit. Taken together, these findings provide preliminary evidence for the structural validity of the digital cognitive test battery.
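Confirmatory factor analysis of the kind reported here is usually run in dedicated SEM software (e.g. lavaan or semopy). As a rough, exploratory stand-in (not the authors' analysis), the sketch below uses scikit-learn's FactorAnalysis with varimax rotation on simulated scores from six hypothetical digitized tests, checking that each test loads on its intended latent domain:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
memory = rng.normal(size=n)     # latent memory ability (hypothetical)
attention = rng.normal(size=n)  # latent attention ability (hypothetical)

# Six simulated digitized tests: three tapping each latent domain
X = np.column_stack(
    [memory + rng.normal(scale=0.5, size=n) for _ in range(3)]
    + [attention + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_.T                  # shape: (6 tests, 2 factors)
dominant = np.abs(loadings).argmax(axis=1)   # which factor each test loads on
```

In a confirmatory setting one would instead fix the zero cross-loadings in advance and compare fit indices across competing models, which is how the paper contrasts the traditional and CHC taxonomies.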
Affiliation(s)
- Stefan Vermeent
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, Netherlands
- Ron Dotsch
- Department of Brain, Behavior and Cognition, Philips Research, Eindhoven, Netherlands
- Ben Schmand
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, Netherlands
- Laura Klaming
- Department of Brain, Behavior and Cognition, Philips Research, Eindhoven, Netherlands
- Justin B Miller
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Gijs van Elswijk
- Digital Cognitive Diagnostics, Philips Healthcare, Eindhoven, Netherlands
24
Marasca AR, Yates DB, Schneider AMDA, Feijó LP, Bandeira DR. Avaliação psicológica online: considerações a partir da pandemia do novo coronavírus (COVID-19) para a prática e o ensino no contexto a distância. ESTUDOS DE PSICOLOGIA (CAMPINAS) 2020. [DOI: 10.1590/1982-0275202037e200085] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The restrictions imposed by the social distancing that followed the novel coronavirus pandemic required psychologists to adapt to a new working reality that favors remote activities. Teaching and practice in psychological assessment were among the affected areas, demanding that psychologists, the professional council, and scientific societies discuss guidelines for the pandemic context. In addition, although distance-learning courses have increased, restrictions on teaching psychological techniques in online environments have been noted. Given the changes in the work landscape and the need to adapt to the current situation, this study discusses the feasibility of online psychological assessment processes and points to directions for their improvement. It also presents possibilities for remote teaching and supervision. Scientific evidence and national and international regulations supporting these practices are discussed. Finally, the need to develop technologies that allow the process to be conducted ethically and securely is reinforced.
25
McKinney TL, Euler MJ, Butner JE. It’s about time: The role of temporal variability in improving assessment of executive functioning. Clin Neuropsychol 2019; 34:619-642. [DOI: 10.1080/13854046.2019.1704434] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Affiliation(s)
- Ty L. McKinney
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Matthew J. Euler
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
26
Basic processes as foundations of cognitive impairment in adult ADHD. J Neural Transm (Vienna) 2019; 126:1347-1362. [PMID: 31321549 PMCID: PMC6764934 DOI: 10.1007/s00702-019-02049-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2018] [Accepted: 07/11/2019] [Indexed: 12/23/2022]
Abstract
Attention deficit hyperactivity disorder (ADHD) in adulthood is associated with impairment of multiple aspects of cognition which adversely affect the individual's everyday functioning. However, little is known about how these impairments are intertwined. This study explores whether impairments in basic processes (processing speed and distractibility) in adults with ADHD explain impairments in higher order functions, namely executive functions, memory, and complex attention. Furthermore, it is explored whether pharmacological treatment with methylphenidate (MPH) affects basic processes and higher order functions. A between-subjects design compared patients with ADHD without stimulant drug treatment (N = 55) and patients with ADHD treated with MPH (N = 31) with a healthy control group (N = 80). A neuropsychological test battery assessing basic processes and higher order functions was administered. Hierarchical logistic regression analyses were performed to evaluate the contribution of basic processes to impairments in higher order functions. Patients with ADHD not treated with MPH showed impairments in basic processes and higher order functions compared to controls. The impairments in basic processes explained 41-43% of impairments in executive functions, 27-29% in memory, and 56-74% in complex attention. In patients with ADHD treated with MPH, basic processes were not impaired and did not contribute significantly to impairments of higher order functions. Basic processes may constitute part of the foundation of cognitive impairments in adult ADHD. MPH may improve cognitive performance, presumably through improving basic processes. Applying this information could optimize neuropsychological assessments and inform treatment strategies by targeting basic processes.
27
Marcopulos B, Łojek E. Introduction to the special issue: Are modern neuropsychological assessment methods really "modern"? Reflections on the current neuropsychological test armamentarium. Clin Neuropsychol 2019; 33:187-199. [PMID: 30760098 DOI: 10.1080/13854046.2018.1560502] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVE We introduce this special issue which focuses on how advances in neuroscience and technology can modernize and transform clinical neuropsychological assessment. METHOD We included both invited and solicited papers to reflect on the strengths and weaknesses of currently used, standardized neuropsychological tests and to explore how we might incorporate new technologies and neuroscientific advances to modernize neuropsychological assessment methods. RESULTS The papers are organized along the following themes: (1) A critique of the current clinical neuropsychological test armamentarium; (2) A description of new opportunities for collecting neurobehavioral data with technology; (3) Digital science, biomedical big data and the internet; (4) Integrating neuropsychological, neuroimaging, and neurophysiological assessments; (5) Modernization, globalization and culture. CONCLUSION The process of modernizing methods of assessment in clinical neuropsychology is laborious and requires a coordinated, sustained effort among clinicians, researchers, and the test industry. While embracing technology is necessary, we must also be aware of unintended consequences as we navigate this exciting new territory.
Collapse
Affiliation(s)
- Bernice Marcopulos
- Department of Graduate Psychology, James Madison University, Harrisonburg, VA, USA; Department of Psychiatry and Neurobehavioral Sciences, University of Virginia, Charlottesville, VA, USA
- Emilia Łojek
- Faculty of Psychology, University of Warsaw, Warsaw, Poland