1
Chriskos P, Neophytou K, Frantzidis CA, Gallegos J, Afthinos A, Onyike CU, Hillis A, Bamidis PD, Tsapkini K. The use of low-density EEG for the classification of PPA and MCI. Front Hum Neurosci 2025; 19:1526554. PMID: 39989721; PMCID: PMC11842309; DOI: 10.3389/fnhum.2025.1526554.
Abstract
Objective: Dissociating Primary Progressive Aphasia (PPA) from Mild Cognitive Impairment (MCI) is an important yet challenging task. Given the need for low-cost and time-efficient classification, we used low-density electroencephalography (EEG) recordings to automatically classify PPA, MCI, and healthy control (HC) individuals. To the best of our knowledge, this is the first attempt to classify individuals from these three populations at the same time.
Methods: We collected three-minute EEG recordings with an 8-channel system from eight MCI, fourteen PPA, and eight HC individuals. Using the Relative Wavelet Entropy method, we derived (i) functional connectivity and (ii) graph theory metrics, and extracted (iii) the energy of various EEG rhythms. Features from all three sources were used for classification with k-Nearest Neighbor and Support Vector Machine classifiers.
Results: Individual classification accuracy of 100% was achieved in the HC-MCI, HC-PPA, and MCI-PPA comparisons, and 77.78% in the HC-MCI-PPA comparison.
Conclusion: We showed for the first time that successful automatic classification between HC, MCI, and PPA is possible with short, low-density EEG recordings. Despite the methodological limitations of the current study, these results have important implications for clinical practice, since they show that fast, low-cost, and accurate diagnosis of these disorders is possible. Future studies need to establish the generalizability of these findings with larger sample sizes and the efficient use of this methodology in a clinical setting.
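For orientation, the classification stage described in this abstract (kNN and SVM applied to subject-level EEG features) can be illustrated with a minimal scikit-learn sketch. The feature matrix, labels, and classifier settings below are placeholder assumptions, not the authors' actual pipeline, which derives its features from Relative Wavelet Entropy connectivity, graph metrics, and band energies.

```python
# Minimal sketch: classifying subjects from precomputed EEG features
# (connectivity, graph metrics, band energies) with kNN and SVM.
# The feature matrix X and labels y are placeholders, not the study data.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 30, 64                  # e.g., 8 HC + 8 MCI + 14 PPA
X = rng.normal(size=(n_subjects, n_features))    # placeholder feature matrix
y = np.array([0] * 8 + [1] * 8 + [2] * 14)       # 0 = HC, 1 = MCI, 2 = PPA

loo = LeaveOneOut()                              # per-subject folds suit small samples
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=3)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(model, X, y, cv=loo).mean()
    print(f"{name} leave-one-out accuracy: {acc:.2f}")
```

Leave-one-out cross-validation is shown because, with roughly thirty participants, per-subject folds are a common way to estimate individual classification accuracy.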
Affiliation(s)
Panteleimon Chriskos
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
- Laboratory of Medical Physics and Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
Kyriaki Neophytou
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
Christos A. Frantzidis
- Laboratory of Medical Physics and Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- School of Engineering and Physical Sciences, College of Health and Science, University of Lincoln, Lincoln, United Kingdom
Jessica Gallegos
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
Chiadi U. Onyike
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, MD, United States
Argye Hillis
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
Panagiotis D. Bamidis
- Laboratory of Medical Physics and Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
Kyrana Tsapkini
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, United States
2
Riccardi N, Nelakuditi S, den Ouden DB, Rorden C, Fridriksson J, Desai RH. Discourse- and lesion-based aphasia quotient estimation using machine learning. Neuroimage Clin 2024; 42:103602. PMID: 38593534; PMCID: PMC11016805; DOI: 10.1016/j.nicl.2024.103602.
Abstract
Discourse is a fundamentally important aspect of communication, and discourse production provides a wealth of information about linguistic ability. Aphasia commonly affects the ability to produce discourse in multiple ways. Comprehensive aphasia assessments such as the Western Aphasia Battery-Revised (WAB-R) are time- and resource-intensive. We examined whether discourse measures can be used to estimate the WAB-R Aphasia Quotient (AQ) and thereby serve as an ecologically valid, less resource-intensive measure. We extracted features from discourse tasks based on three AphasiaBank prompts: expositional (picture description), story narrative, and procedural discourse. These features were used to train a machine learning model to predict the WAB-R AQ. We also compared and supplemented the model with lesion location information from structural neuroimaging. We found that discourse-based models estimated AQ well and outperformed models based on lesion features; adding lesion features to the discourse features did not substantially improve the performance of the discourse model. Inspection of the most informative discourse features revealed that different prompt types taxed different aspects of language. These findings suggest that discourse can be used to estimate aphasia severity, and they provide insight into the linguistic content elicited by different types of discourse prompts.
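As a rough illustration of the general approach (regressing a continuous WAB-R AQ score on discourse-derived features), here is a minimal sketch. The synthetic data, the example feature set, and the choice of a ridge regressor are assumptions for illustration, not the authors' model.

```python
# Minimal sketch: estimating WAB-R AQ (a continuous 0-100 score) from
# discourse-derived features with a regularized regression model.
# Data, feature count, and the ridge regressor are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_speakers, n_features = 100, 20                 # placeholder sample
X = rng.normal(size=(n_speakers, n_features))    # e.g., words/min, type-token ratio, ...
aq = np.clip(60 + 3 * (X @ rng.normal(size=n_features))
             + rng.normal(scale=5, size=n_speakers), 0, 100)  # synthetic AQ scores

model = Ridge(alpha=1.0)
pred = cross_val_predict(model, X, aq,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"cross-validated R^2: {r2_score(aq, pred):.2f}")
```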
Affiliation(s)
Nicholas Riccardi
- Department of Communication Sciences and Disorders, University of South Carolina, United States
Dirk B den Ouden
- Department of Communication Sciences and Disorders, University of South Carolina, United States
Chris Rorden
- Department of Psychology, University of South Carolina, United States
Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, United States
Rutvik H Desai
- Department of Psychology, University of South Carolina, United States
3
Metu J, Kotha V, Hillis AE. Evaluating Fluency in Aphasia: Fluency Scales, Trichotomous Judgements, or Machine Learning. Aphasiology 2023; 38:168-180. PMID: 38425350; PMCID: PMC10901507; DOI: 10.1080/02687038.2023.2171261.
Abstract
Background: Speech-language pathologists (SLPs) and other clinicians often use aphasia batteries, such as the Western Aphasia Battery-Revised (WAB-R), to evaluate both the severity and the classification of aphasia. However, the fluency scale on the WAB-R is not entirely objective and has been found to have less than ideal inter-rater reliability, owing to variability in weighing the importance of one dimension (e.g., articulatory effort or grammaticality) over another. This limitation has implications for aphasia classification. The subjectivity might be mitigated through the use of machine learning to identify fluent and non-fluent speech.
Aims: We hypothesized that two models consisting of convolutional and recurrent neural networks could identify fluent and non-fluent aphasia, as judged by SLPs, with greater reliability than the WAB-R fluency scale.
Methods & Procedures: The training and testing datasets for the networks were collected from the public domain, and the validation dataset was collected from participants in post-stroke aphasia studies. We used Kappa scores to evaluate inter-rater reliability among SLPs and between the networks and the SLPs.
Outcomes & Results: Using public domain samples, the model for detecting non-fluent aphasia achieved high accuracy on the training dataset after 10 epochs (i.e., passes through the entire training dataset) and 81% testing accuracy. The model for detecting fluent speech had high training accuracy and 83% testing accuracy. Across samples, using the WAB-R fluency scale, there was poor to perfect agreement among SLPs on the precise WAB-R fluency score, but substantial agreement on non-fluent (score 0-4) versus fluent (score 5-9). The agreement between the models and the SLPs was moderate for identifying non-fluent speech and substantial for identifying fluent speech. When SLPs were asked to identify each sample as fluent, non-fluent, or mixed (without using the fluency scale), the agreement between SLPs was almost perfect (Kappa 0.94). The agreement between the SLPs' trichotomous judgement and the models was fair for detecting non-fluent speech and substantial for detecting fluent speech.
Conclusions: The results indicate that neither the WAB-R fluency scale nor the machine learning models were as useful (reliable and valid) as a simple trichotomous judgement of fluent, non-fluent, or mixed by SLPs. These results, together with data from the literature, indicate that it is time to reconsider the use of the WAB-R fluency scale for the classification of aphasia. It is also premature, at present, to rely on machine learning to rate spoken-language fluency.
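The agreement figures above are Cohen's kappa values. A minimal sketch of how such agreement between two sets of fluency judgements (e.g., an SLP and a model, or two SLPs) can be computed is shown below; the ratings are invented for illustration, not study data.

```python
# Minimal sketch: Cohen's kappa for agreement between two sets of
# trichotomous fluency judgements. Ratings are invented examples.
from sklearn.metrics import cohen_kappa_score

# 0 = non-fluent, 1 = fluent, 2 = mixed
rater_a = [0, 1, 1, 2, 0, 1, 0, 2, 1, 1]   # e.g., SLP judgements
rater_b = [0, 1, 1, 2, 0, 1, 1, 2, 1, 0]   # e.g., model predictions

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.68 here; 0.61-0.80 is conventionally "substantial"
```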
Affiliation(s)
Jeet Metu
- Rock Ridge High School, Johns Hopkins University School of Medicine, and Cognitive Science, Johns Hopkins University, Baltimore, MD 21287
Vishal Kotha
- Thomas Jefferson High School for Science and Technology, Johns Hopkins University School of Medicine, and Cognitive Science, Johns Hopkins University, Baltimore, MD 21287
Argye E. Hillis
- Departments of Neurology and Physical Medicine & Rehabilitation, Johns Hopkins University School of Medicine, and Cognitive Science, Johns Hopkins University, Baltimore, MD 21287
4
Nunes M, Teles AS, Farias D, Diniz C, Bastos VH, Teixeira S. A Telemedicine Platform for Aphasia: Protocol for a Development and Usability Study. JMIR Res Protoc 2022; 11:e40603. PMID: 36422881; PMCID: PMC9732749; DOI: 10.2196/40603.
Abstract
BACKGROUND: Aphasia is a central disorder of the comprehension and expression of language that cannot be attributed to a peripheral sensory deficit or a peripheral motor disorder. The diagnosis and treatment of aphasia are complex. Interventions that facilitate this process can increase the number of patients assisted and improve the precision of the health professional's therapeutic choices.
OBJECTIVE: This paper describes a protocol for a study that aims to implement a computer-based solution (i.e., a telemedicine platform) that uses deep learning to classify vocal data from participants with aphasia and to develop serious games to treat aphasia. Additionally, this study aims to evaluate the usability and user experience of the proposed solution.
METHODS: Our interactive and smart platform will be developed to provide an alternative option for professionals and their patients with aphasia. We will design 2 serious games for aphasia rehabilitation and a deep learning-driven computational solution to aid diagnosis. A pilot evaluation of usability and user experience will reveal user satisfaction with the platform's features.
RESULTS: Data collection began in June 2022 and is currently ongoing. Results on system development as well as usability should be published by mid-2023.
CONCLUSIONS: This research will contribute to the treatment and diagnosis of aphasia by developing a telemedicine platform based on a co-design process, thereby providing an alternative way of delivering health care to patients with aphasia. Additionally, it will guide further studies with the same purpose.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/40603.
Affiliation(s)
Monara Nunes
- Federal University of Piauí, Regeneração, Brazil
Daniel Farias
- Federal University of Delta do Parnaíba, Parnaíba, Brazil
Claudia Diniz
- Federal University of Delta do Parnaíba, Parnaíba, Brazil
5
Herath HMDPM, Weraniyagoda WASA, Rajapaksha RTM, Wijesekara PADSN, Sudheera KLK, Chong PHJ. Automatic Assessment of Aphasic Speech Sensed by Audio Sensors for Classification into Aphasia Severity Levels to Recommend Speech Therapies. Sensors (Basel) 2022; 22:6966. PMID: 36146316; PMCID: PMC9501827; DOI: 10.3390/s22186966.
Abstract
Aphasia is a speech disorder that can cause speech defects in a person. Identifying the severity level of a patient with aphasia is critical for the rehabilitation process. In this research, we identify ten aphasia severity levels, motivated by specific speech therapies and based on the presence or absence of identified characteristics in aphasic speech, in order to give more specific treatment to the patient. For the severity level classification, we examine how different speech feature extraction techniques, input audio sample lengths, and machine learning classifiers affect classification performance. Aphasic speech is sensed by an audio sensor, recorded, divided into audio frames, and passed through an audio feature extractor before being fed into the machine learning classifier. According to the results, the mel frequency cepstral coefficient (MFCC) is the most suitable audio feature extraction method for classifying aphasic speech levels, as it outperformed mel-spectrogram, chroma, and zero-crossing-rate features by a large margin. Furthermore, classification performance is higher with 20 s audio samples than with 10 s chunks, although the gap is narrow. Finally, the deep neural network approach yielded the best classification performance, slightly better than the k-nearest neighbor (KNN) and random forest classifiers and significantly better than decision tree algorithms. The study therefore shows that aphasia level classification can achieve accuracy, precision, recall, and F1-score values of 0.99 using MFCC features from 20 s audio samples with a deep neural network, allowing the corresponding speech therapy to be recommended for the identified level. A web application was developed for English-speaking patients with aphasia to self-diagnose their severity level and engage in speech therapies.
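As a rough sketch of the front end of such a pipeline, the snippet below computes MFCC features from a fixed-length audio chunk with librosa and collapses them into a fixed-size vector for a downstream classifier. The file path, 20 s chunk length, and MFCC settings are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: turning a 20 s audio chunk into an MFCC-based feature
# vector for a severity-level classifier. Path and parameters are
# illustrative placeholders, not the authors' exact pipeline.
import numpy as np
import librosa

path = "speech_sample.wav"                             # hypothetical recording
y, sr = librosa.load(path, sr=16000, duration=20.0)    # first 20 s, mono

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape: (13, n_frames)
features = np.concatenate([mfcc.mean(axis=1),          # summarize frames into a
                           mfcc.std(axis=1)])          # fixed-length vector

print(features.shape)  # (26,) -> input to a DNN / kNN / random-forest classifier
```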
Affiliation(s)
Peter Han Joo Chong
- Department of Electrical and Electronic Engineering, Auckland University of Technology, Auckland 1010, New Zealand
6
Szklanny K, Wichrowski M, Wieczorkowska A. Prototyping Mobile Storytelling Applications for People with Aphasia. Sensors (Basel) 2021; 22(1):14. PMID: 35009557; PMCID: PMC8747090; DOI: 10.3390/s22010014.
Abstract
Aphasia is a partial or total loss of the ability to articulate ideas or comprehend spoken language, resulting from brain damage, in a person whose language skills were previously normal. Our goal was to find out how a storytelling app can help people with aphasia communicate and share daily experiences. For this purpose, the Aphasia Create app was created for tablets, along with Aphastory for the Google Glass device. These applications facilitate social participation and enhance quality of life through visual storytelling forms composed of photos, drawings, icons, etc., that can be saved and shared. We performed usability tests (supervised by a neuropsychologist) with six participants with aphasia who were able to communicate. Our work contributes (1) evidence that the functions implemented in the Aphasia Create tablet app suit the needs of target users, although older people are often unfamiliar with touch devices, (2) a report that the Google Glass device may be problematic for persons with right-hand paresis, and (3) a characterization of design guidelines for apps for people with aphasia. Both applications can be used in work with people with aphasia and can be developed further. Aphasia centers in which the apps were presented expressed interest in using them to work with patients. The Aphasia Create app won the Enactus Poland National Competition in 2015.