1
Zhao Z, Li Y, Peng Y, Camilleri K, Kong W. Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding. J Neurosci Methods 2025; 418:110413. PMID: 40058464. DOI: 10.1016/j.jneumeth.2025.110413.
Abstract
BACKGROUND Electroencephalogram (EEG)-based speech imagery is an emerging brain-computer interface paradigm that enables people with speech disabilities to communicate naturally and intuitively with external devices or other people. Decoding performance in current speech imagery research is limited, in part because there is still no consensus on which domain features are the most discriminative. NEW METHOD To adaptively capture the complementary information in different domain features, we treat each domain as a view and propose a multi-view graph fusion of self-weighted EEG feature representations (MVGSF) model, which learns a consensus graph from multi-view EEG features and decodes imagery intentions from that graph. Because the EEG features within each view differ in discriminative ability, a view-dependent feature importance exploration strategy is incorporated into MVGSF. RESULTS (1) MVGSF exhibits outstanding performance on two public speech imagery datasets. (2) The consensus graph learned from multi-view features effectively characterizes the relationships among EEG samples in a progressive manner. (3) Task-related insights are explored, including the feature importance-based identification of critical EEG channels and frequency bands in speech imagery decoding. COMPARISON WITH EXISTING METHODS We compared MVGSF with single-view counterparts, other multi-view models, and state-of-the-art models. MVGSF achieved the highest accuracy, with average accuracies of 78.93% on the 2020IBCIC3 dataset and 53.85% on the KaraOne dataset. CONCLUSIONS MVGSF effectively integrates features from multiple domains to enhance decoding capability. Furthermore, through the learned feature importance, MVGSF contributes to identifying the EEG spatial-frequency patterns underlying speech imagery decoding.
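The abstract does not spell out the fusion procedure, but the self-weighted multi-view idea can be illustrated with a minimal sketch: build one similarity graph per feature view, then alternate between averaging the graphs into a consensus and re-weighting each view by how closely its graph matches that consensus. Everything below (Gaussian-kernel graphs, the inverse-deviation weighting, the iteration count, the toy data) is an illustrative assumption, not the authors' MVGSF algorithm.

```python
import numpy as np

def view_graph(X, sigma=1.0):
    # Gaussian-kernel similarity graph for one feature view (n_samples x n_features)
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def consensus_fusion(views, n_iter=20):
    # Alternate between a consensus graph S and per-view weights w; a view's
    # weight shrinks as its graph deviates more from the current consensus.
    graphs = [view_graph(X) for X in views]
    w = np.full(len(graphs), 1.0 / len(graphs))
    for _ in range(n_iter):
        S = sum(wi * G for wi, G in zip(w, graphs))
        dev = np.array([np.linalg.norm(G - S) for G in graphs])
        w = 1.0 / (dev + 1e-12)
        w /= w.sum()
    return S, w

# toy example: three feature "domains" (views) for 30 EEG samples
views = [np.random.randn(30, d) for d in (16, 24, 8)]
S, weights = consensus_fusion(views)
```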
Affiliation(s)
- Zhenye Zhao
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang Province, China
- Yibing Li
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang Province, China
- Yong Peng
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang Province, China; Zhejiang Key Laboratory of Brain-Machine Collaborative Intelligence, Hangzhou, 310018, Zhejiang Province, China.
- Kenneth Camilleri
- Centre for Biomedical Cybernetics, University of Malta, Msida, 2080, Malta
- Wanzeng Kong
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang Province, China; Zhejiang Key Laboratory of Brain-Machine Collaborative Intelligence, Hangzhou, 310018, Zhejiang Province, China
2
Alzahrani S, Banjar H, Mirza R. Systematic Review of EEG-Based Imagined Speech Classification Methods. Sensors (Basel) 2024; 24:8168. PMID: 39771903. PMCID: PMC11679664. DOI: 10.3390/s24248168.
Abstract
This systematic review examines EEG-based imagined speech classification, emphasizing directional words, which are essential for brain-computer interface (BCI) development. The study employed a structured methodology to analyze approaches that use public datasets, ensuring systematic evaluation and validation of results. The review highlights the feature extraction techniques that are pivotal to classification performance, including deep learning, adaptive optimization, and frequency-specific decomposition, which enhance accuracy and robustness. Classification methods were explored by comparing traditional machine learning with deep learning, with emphasis on the role of brain lateralization in imagined speech for effective recognition and classification. The study discusses the challenges of generalizability and scalability in imagined speech recognition, focusing on subject-independent approaches and multiclass scalability. Performance benchmarking across various datasets and methodologies revealed varied classification accuracies, reflecting the complexity and variability of EEG signals. This review concludes that challenges remain despite progress, particularly in classifying directional words. Future research directions include improved signal processing techniques, advanced neural network architectures, and more personalized, adaptive BCI systems. This review is critical for future efforts to develop practical communication tools for individuals with speech and motor impairments using EEG-based BCIs.
Affiliation(s)
- Salwa Alzahrani
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3
Alharbi YF, Alotaibi YA. Decoding Imagined Speech from EEG Data: A Hybrid Deep Learning Approach to Capturing Spatial and Temporal Features. Life (Basel) 2024; 14:1501. PMID: 39598300. PMCID: PMC11595501. DOI: 10.3390/life14111501.
Abstract
Neuroimaging is revolutionizing our ability to investigate the brain's structural and functional properties, enabling us to visualize brain activity during diverse mental processes and actions. One of the most widely used neuroimaging techniques is electroencephalography (EEG), which records electrical activity from the brain using electrodes positioned on the scalp. EEG signals capture both spatial (brain region) and temporal (time-based) data. While a high temporal resolution is achievable with EEG, spatial resolution is comparatively limited. Consequently, capturing both spatial and temporal information from EEG data to recognize mental activities remains challenging. In this paper, we represent spatial and temporal information obtained from EEG signals by transforming EEG data into sequential topographic brain maps. We then apply hybrid deep learning models to capture the spatiotemporal features of the EEG topographic images and classify imagined English words. The hybrid framework utilizes a sequential combination of three-dimensional convolutional neural networks (3DCNNs) and recurrent neural networks (RNNs). The experimental results reveal the effectiveness of the proposed approach, achieving an average accuracy of 77.8% in identifying imagined English speech.
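As a rough sketch of the kind of hybrid described above (not the authors' architecture or hyperparameters), a small 3D CNN can encode each stack of topographic maps and a recurrent layer can aggregate the resulting sequence; the layer sizes and input shape below are placeholders.

```python
import torch
import torch.nn as nn

class TopoCNNRNN(nn.Module):
    # Illustrative hybrid: a 3D CNN encodes short stacks of topographic EEG
    # maps, and a GRU aggregates the sequence of stack embeddings over time.
    def __init__(self, n_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.rnn = nn.GRU(16, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, seq, 1, depth, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)      # (b*s, 16)
        out, _ = self.rnn(feats.view(b, s, -1))
        return self.fc(out[:, -1])

# toy forward pass: 2 trials, 10 map stacks each, maps of size 4x32x32
model = TopoCNNRNN()
logits = model(torch.randn(2, 10, 1, 4, 32, 32))
```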
Affiliation(s)
- Yasser F. Alharbi
- Computer Engineering Department, King Saud University, Riyadh 11451, Saudi Arabia
4
Juyal R, Muthusamy H, Kumar N, Tiwari A. Resting state EEG assisted imagined vowel phonemes recognition by native and non-native speakers using brain connectivity measures. Phys Eng Sci Med 2024; 47:939-954. PMID: 38647635. DOI: 10.1007/s13246-024-01417-w.
Abstract
Communication is challenging for disabled individuals, but with the advancement of brain-computer interface (BCI) systems, alternative communication systems can be developed. Current BCI spellers, such as P300, SSVEP, and MI spellers, have drawbacks such as reliance on external stimuli or on mental tasks irrelevant to conversation. In contrast, imagined speech based BCI systems rely on directly decoding the vowels or words a user is thinking, making them more intuitive, more user friendly, and highly popular among BCI researchers. However, more research is needed on how subject-specific characteristics such as mental state, age, handedness, nativeness, and resting state activity affect the brain's output during imagined speech. In overt speech, it is evident that the brains of native and non-native speakers function differently. Therefore, this paper explores how nativeness to a language affects EEG signals during imagination of vowel phonemes, using brain-map and scalogram analyses, and also investigates combining features extracted from resting state EEG with those from imagined state EEG. The Fourteen-channel EEG for Imagined Speech (FEIS) dataset was used to analyse the EEG signals recorded while imagining vowel phonemes for 16 subjects (nine native English and seven non-native Chinese speakers). For the classification of vowel phonemes, connectivity measures such as covariance, coherence, and the Phase Synchronization Index (PSI) were extracted and analysed using a statistics-based Multivariate Analysis of Variance (MANOVA) approach. Different fusion strategies (difference, concatenation, Common Spatial Patterns (CSP), and Canonical Correlation Analysis (CCA)) were applied to combine resting state EEG connectivity measures with imagined state connectivity measures and enhance the accuracy of imagined vowel phoneme recognition. Simulation results revealed that concatenating imagined state and resting state covariance and PSI features provided the maximum accuracy of 92.78% for native speakers and 94.07% for non-native speakers.
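A minimal sketch of one of the simpler pipelines mentioned above: covariance connectivity features from resting and imagined segments combined by concatenation or difference fusion, then fed to a classifier. The segment shapes, sampling, toy data, and SVM classifier are placeholder assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def covariance_features(epoch):
    # epoch: (n_channels, n_samples); upper-triangular covariance, flattened
    c = np.cov(epoch)
    return c[np.triu_indices_from(c)]

def fuse(rest_epoch, imagined_epoch, strategy="concat"):
    r, m = covariance_features(rest_epoch), covariance_features(imagined_epoch)
    if strategy == "concat":          # concatenation fusion
        return np.concatenate([r, m])
    if strategy == "difference":      # difference fusion
        return m - r
    raise ValueError(strategy)

# toy data: 40 trials of 14-channel rest/imagined segments, binary phoneme labels
rest = np.random.randn(40, 14, 256)
imag = np.random.randn(40, 14, 256)
X = np.array([fuse(r, m) for r, m in zip(rest, imag)])
y = np.arange(40) % 2
clf = SVC(kernel="rbf").fit(X, y)
```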
Affiliation(s)
- Ruchi Juyal
- Department of Electronics Engineering, National Institute of Technology Uttarakhand, Srinagar Garhwal, 246174, Uttarakhand, India
- Hariharan Muthusamy
- Department of Electronics Engineering, National Institute of Technology Uttarakhand, Srinagar Garhwal, 246174, Uttarakhand, India.
- Niraj Kumar
- Department of Neurology, All India Institute of Medical Sciences Bibinagar (Hyderabad Metropolitan Region), Bibinagar, 508126, Telangana, India
- Ashutosh Tiwari
- Department of Neurology, All India Institute of Medical Sciences Rishikesh, Rishikesh, 249203, Uttarakhand, India
5
Pan H, Wang Y, Li Z, Chu X, Teng B, Gao H. A Complete Scheme for Multi-Character Classification Using EEG Signals From Speech Imagery. IEEE Trans Biomed Eng 2024; 71:2454-2462. PMID: 38470574. DOI: 10.1109/tbme.2024.3376603.
Abstract
Some classification studies of brain-computer interfaces (BCIs) based on speech imagery show potential for improving communication in patients with amyotrophic lateral sclerosis (ALS). However, current research on speech imagery is limited in scope and primarily focuses on vowels or a few selected words. In this paper, we propose a complete research scheme for multi-character classification based on EEG signals derived from speech imagery. Firstly, we record 31 speech imagery items, comprising the 26 letters of the alphabet and five commonly used punctuation marks, from seven subjects using a 32-channel electroencephalogram (EEG) device. Secondly, we introduce the wavelet scattering transform (WST), which bears a structural resemblance to convolutional neural networks (CNNs), for feature extraction. The WST is a knowledge-driven technique that preserves high-frequency information and maintains the deformation stability of EEG signals. To reduce the dimensionality of the wavelet scattering coefficient features, we employ kernel principal component analysis (KPCA). Finally, the reduced features are fed into an extreme gradient boosting (XGBoost) classifier within a multi-classification framework. The XGBoost classifier is optimized through hyperparameter tuning using grid search and 10-fold cross-validation, resulting in an average accuracy of 78.73% for the multi-character classification task. We use t-distributed stochastic neighbor embedding (t-SNE) to visualize the low-dimensional representation of multi-character speech imagery; this visualization clearly shows the clustering of similar characters. The experimental results demonstrate the effectiveness of the proposed multi-character classification scheme. Furthermore, our classification categories and accuracy exceed those reported in existing research.
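A compact sketch of the described pipeline using off-the-shelf libraries: Kymatio for a 1D wavelet scattering transform, scikit-learn's KernelPCA, and XGBoost. The toy data, scattering parameters (J, Q), component count, and booster settings are placeholders, not the paper's tuned values.

```python
import numpy as np
from kymatio.numpy import Scattering1D          # wavelet scattering transform
from sklearn.decomposition import KernelPCA
from xgboost import XGBClassifier

def wst_features(epochs, J=5, Q=8):
    # epochs: (n_trials, n_channels, n_samples) -> flattened scattering coefficients
    n_trials, _, T = epochs.shape
    scattering = Scattering1D(J, T, Q)
    return np.array([scattering(epochs[i]).ravel() for i in range(n_trials)])

# toy data: 60 trials, 32 channels, labels covering all 31 imagined characters
epochs = np.random.randn(60, 32, 512)
labels = np.arange(60) % 31

X = wst_features(epochs)
X = KernelPCA(n_components=30, kernel="rbf").fit_transform(X)   # dimensionality reduction
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, labels)
```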
6
Carvalho VR, Mendes EMAM, Fallah A, Sejnowski TJ, Comstock L, Lainscsek C. Decoding imagined speech with delay differential analysis. Front Hum Neurosci 2024; 18:1398065. PMID: 38826617. PMCID: PMC11140152. DOI: 10.3389/fnhum.2024.1398065.
Abstract
Speech decoding from non-invasive EEG signals can achieve relatively high accuracy (70-80%) for strictly delimited classification tasks, but for more complex tasks non-invasive speech decoding typically yields 20-50% classification accuracy. However, decoder generalization, or how well algorithms perform objectively across datasets, is complicated by the small size and heterogeneity of existing EEG datasets. Furthermore, the limited availability of open-access code hampers comparison between methods. This study explores the application of a novel non-linear signal processing method, delay differential analysis (DDA), to speech decoding. We provide a systematic evaluation of its performance on two public imagined speech decoding datasets relative to all publicly available deep learning methods. The results support DDA as a compelling alternative or complementary approach to deep learning methods for speech decoding. DDA is a fast, efficient, open-source time-domain method that fits data using only a few strong features and does not require extensive preprocessing.
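DDA fits a small, fixed delay differential model to the raw signal and uses the fitted coefficients (and fit error) as features. The sketch below uses a common three-term DDA form with arbitrary example delays; the exact model, delays, and embedding used in the paper may differ.

```python
import numpy as np

def dda_features(x, tau1=7, tau2=12, dt=1.0):
    # Least-squares fit of an illustrative delay differential model
    #   x'(t) ~ a1*x(t - tau1) + a2*x(t - tau2) + a3*x(t - tau1)**2
    # The fitted coefficients and the residual error serve as features.
    d = np.gradient(x, dt)
    t0 = max(tau1, tau2)
    A = np.column_stack([
        x[t0 - tau1:len(x) - tau1],
        x[t0 - tau2:len(x) - tau2],
        x[t0 - tau1:len(x) - tau1] ** 2,
    ])
    y = d[t0:]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
    return np.append(coef, rmse)

# toy single-channel EEG segment -> 3 coefficients + fit error
features = dda_features(np.random.randn(1024))
```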
Affiliation(s)
- Vinícius Rezende Carvalho
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Postgraduate Program in Electrical Engineering, Universidade Federal de Minas Gerais, Belo Horizonte, MG, Brazil
- Aria Fallah
- Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, United States
- Terrence J. Sejnowski
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, United States
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Neurobiology, University of California, San Diego, La Jolla, CA, United States
- Lindy Comstock
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, CA, United States
- Claudia Lainscsek
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, United States
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
7
Tang X, Qi Y, Zhang J, Liu K, Tian Y, Gao X. Enhancing EEG and sEMG Fusion Decoding Using a Multi-Scale Parallel Convolutional Network With Attention Mechanism. IEEE Trans Neural Syst Rehabil Eng 2024; 32:212-222. PMID: 38147424. DOI: 10.1109/tnsre.2023.3347579.
Abstract
Electroencephalography (EEG) and surface electromyography (sEMG) have been widely used in the rehabilitation training of motor function. However, EEG signals have poor user adaptability and low classification accuracy in practical applications, and sEMG signals are susceptible to abnormalities such as muscle fatigue and weakness, resulting in reduced stability. To improve the accuracy and stability of interactive training recognition systems, we propose a novel approach, the Attention Mechanism-based Multi-Scale Parallel Convolutional Network (AM-PCNet), for recognizing and decoding fused EEG and sEMG signals. Firstly, we design an experimental scheme for the synchronous collection of EEG and sEMG signals and propose an ERP-WTC analysis method for channel screening of EEG signals. Then, the AM-PCNet network is designed to extract the time-domain, frequency-domain, and mixed-domain information of the fused EEG-sEMG spectrogram images, with an attention mechanism introduced to extract finer-grained multi-scale feature information from the EEG and sEMG signals. Experiments on datasets obtained in the laboratory show that the average accuracy of EEG and sEMG fusion decoding is 96.62%, a significant improvement over the classification performance of single-modality signals. When the muscle fatigue level reaches 50% and 90%, the accuracy is 92.84% and 85.29%, respectively. This study indicates that using this model to fuse EEG and sEMG signals can improve the accuracy and stability of hand rehabilitation training for patients.
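The abstract does not detail the attention module; as a generic illustration of channel attention over fused feature maps (not the AM-PCNet design), a squeeze-and-excitation style block looks like this. The reduction ratio and shapes are placeholders.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention: globally pool each feature
    # map, learn a per-channel weight, and rescale the maps accordingly.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> weights
        return x * w[:, :, None, None]

# toy usage on a batch of fused spectrogram feature maps
att = ChannelAttention(32)
out = att(torch.randn(4, 32, 16, 16))
```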
8
Tang X, Yang C, Sun X, Zou M, Wang H. Motor Imagery EEG Decoding Based on Multi-Scale Hybrid Networks and Feature Enhancement. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1208-1218. PMID: 37022411. DOI: 10.1109/tnsre.2023.3242280.
Abstract
Motor imagery (MI) based on electroencephalography (EEG), a typical brain-computer interface (BCI) paradigm, can communicate with external devices according to the brain's intentions. Convolutional neural networks (CNNs) are increasingly used for EEG classification tasks and have achieved satisfactory performance. However, most CNN-based methods employ a single convolution mode and a single convolution kernel size, which cannot extract multi-scale advanced temporal and spatial features efficiently, and this hinders further improvement of the classification accuracy of MI-EEG signals. This paper proposes a novel multi-scale hybrid convolutional neural network (MSHCNN) for MI-EEG signal decoding to improve classification performance. Two-dimensional convolution is used to extract temporal and spatial features of EEG signals, and one-dimensional convolution is used to extract advanced temporal features. In addition, a channel coding method is proposed to improve the expression capacity of the spatiotemporal characteristics of EEG signals. We evaluate the performance of the proposed method on a dataset collected in our laboratory and on the BCI Competition IV 2b and 2a datasets, obtaining average accuracies of 96.87%, 85.25%, and 84.86%, respectively. Compared with other advanced methods, the proposed method achieves higher classification accuracy. We then use the proposed method in an online experiment and design an intelligent artificial limb control system. The proposed method effectively extracts the advanced temporal and spatial features of EEG signals, and the online recognition system contributes to the further development of BCI systems.
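As a generic sketch of the multi-scale idea (not the MSHCNN architecture itself), parallel temporal convolutions with different kernel sizes can be run over the same input and concatenated; the kernel sizes and channel counts below are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalBlock(nn.Module):
    # Parallel 1D temporal convolutions with different kernel sizes, concatenated
    # along the channel axis to capture features at several temporal scales.
    def __init__(self, in_ch=22, out_ch=16, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                       # x: (batch, eeg_channels, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# toy forward pass: 8 trials of 22-channel, 1000-sample MI-EEG
block = MultiScaleTemporalBlock()
out = block(torch.randn(8, 22, 1000))           # -> (8, 48, 1000)
```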
9
Shah U, Alzubaidi M, Mohsen F, Abd-Alrazaq A, Alam T, Househ M. The Role of Artificial Intelligence in Decoding Speech from EEG Signals: A Scoping Review. Sensors (Basel) 2022; 22:6975. PMID: 36146323. PMCID: PMC9505262. DOI: 10.3390/s22186975.
Abstract
Background: Brain traumas, mental disorders, and vocal abuse can result in permanent or temporary speech impairment, significantly reducing quality of life and occasionally resulting in social isolation. Brain-computer interfaces (BCIs) can support people who have speech impairments or who are paralyzed in communicating with their surroundings via brain signals. EEG signal-based BCIs have therefore received significant attention over the last two decades for several reasons: (i) clinical research has yielded detailed knowledge of EEG signals, (ii) EEG devices are inexpensive, and (iii) the technology has applications in medical and social fields. Objective: This study explores the existing literature and summarizes EEG data acquisition, feature extraction, and artificial intelligence (AI) techniques for decoding speech from brain signals. Method: We followed the PRISMA-ScR guidelines to conduct this scoping review. We searched six electronic databases: PubMed, IEEE Xplore, the ACM Digital Library, Scopus, arXiv, and Google Scholar. We carefully selected search terms based on the target intervention (i.e., imagined speech and AI) and target data (EEG signals); some of the search terms were derived from previous reviews. The study selection process was carried out in three phases: study identification, study selection, and data extraction. Two reviewers independently carried out study selection and data extraction, and a narrative approach was adopted to synthesize the extracted data. Results: A total of 263 studies were evaluated, of which 34 met the eligibility criteria for inclusion in this review. We found 64-electrode EEG devices to be the most widely used in the included studies. The most common signal normalization and feature extraction methods were bandpass filtering and wavelet-based feature extraction. We categorized the studies by AI technique, namely machine learning (ML) and deep learning (DL); the most prominent ML algorithm was the support vector machine, and the most prominent DL algorithm was the convolutional neural network. Conclusions: EEG signal-based BCI is a viable technology that can enable people with severe or temporary voice impairment to communicate with the world directly from their brain. However, the development of BCI technology is still in its infancy.
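The review's most common pipeline (bandpass filtering, wavelet-based features, and an SVM) can be sketched with standard libraries as follows; the filter band, wavelet, sampling rate, and toy data are assumptions for illustration only.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def bandpass(x, lo=1.0, hi=40.0, fs=256.0, order=4):
    # Zero-phase Butterworth bandpass filter applied to one channel
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def wavelet_features(epoch, wavelet="db4", level=4):
    # epoch: (n_channels, n_samples); mean power of each wavelet sub-band per channel
    feats = []
    for ch in epoch:
        coeffs = pywt.wavedec(bandpass(ch), wavelet, level=level)
        feats.extend(float(np.mean(c ** 2)) for c in coeffs)
    return np.array(feats)

# toy data: 40 trials of 8-channel EEG, binary imagined-speech labels
X = np.array([wavelet_features(np.random.randn(8, 512)) for _ in range(40)])
y = np.arange(40) % 2
clf = SVC(kernel="rbf").fit(X, y)
```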
Affiliation(s)
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Farida Mohsen
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha P.O. Box 34110, Qatar
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
| |
10
A Class-Incremental Learning Method Based on Preserving the Learned Feature Space for EEG-Based Emotion Recognition. Mathematics 2022. DOI: 10.3390/math10040598.
Abstract
Deep learning-based models have proven to be one of the main active research topics in emotion recognition systems based on electroencephalogram (EEG) signals. However, a significant challenge is to effectively recognize new emotions that are incorporated sequentially, as current models must be retrained from scratch. In this paper, we propose a class-incremental learning (CIL) method, named Incremental Learning preserving the Learned Feature Space (IL2FS), to enable deep learning models to incorporate new emotions (classes) alongside those already known. IL2FS performs weight aligning to correct the bias on new classes, and incorporates a margin ranking loss and a triplet loss to preserve inter-class separation and feature space alignment on known classes. We evaluated IL2FS on two public emotion recognition datasets (DREAMER and DEAP) and compared it with other recent and popular CIL methods reported in computer vision. Experimental results show that IL2FS outperforms the other CIL methods, obtaining average accuracies of 59.08 ± 8.26% and 79.36 ± 4.68% on DREAMER and DEAP while recognizing data from new emotions that are incorporated sequentially.
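Weight aligning itself is a small post-hoc correction: the classifier rows for newly added classes are rescaled so their average norm matches that of the old-class rows. The sketch below shows that step in isolation (PyTorch's nn.TripletMarginLoss and nn.MarginRankingLoss cover the other two losses mentioned); it is not the full IL2FS procedure, and the toy sizes are placeholders.

```python
import torch

def weight_align(fc_weight: torch.Tensor, n_old: int) -> torch.Tensor:
    # Rescale the rows (class vectors) of the final linear layer belonging to
    # new classes so their mean norm matches the mean norm of old-class rows,
    # reducing the prediction bias toward newly learned classes.
    norms = fc_weight.norm(dim=1)
    gamma = norms[:n_old].mean() / norms[n_old:].mean()
    aligned = fc_weight.clone()
    aligned[n_old:] *= gamma
    return aligned

# toy example: 10-class head (6 old + 4 new emotion classes), 128-dim features
w = torch.randn(10, 128)
w_aligned = weight_align(w, n_old=6)
```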
11
EEG-Based Multiword Imagined Speech Classification for Persian Words. Biomed Res Int 2022; 2022:8333084. PMID: 35097127. PMCID: PMC8791746. DOI: 10.1155/2022/8333084.
Abstract
This study focuses on providing a simple, extensible, multiclass classifier for imagined words using EEG signals. Six Persian words, along with silence (the idle state), were selected as input classes. The words can be used to control mouse/robot movement or to fill in a simple computer form. The dataset consisted of 10 recordings from five participants collected in five sessions, with each recording containing 20 repetitions of every word and the silence class. Feature sets consisted of the normalized, 1 Hz resolution frequency spectra of 19 EEG channels in the 1-32 Hz band. Majority voting over a bank of binary SVM classifiers was used to determine the class of a feature set, and the mean accuracy and confusion matrix of the classifiers were estimated by Monte Carlo cross-validation. Three classification modes were defined according to the recording-time difference between inter- and intraclass samples. In the long-time mode, where all instances of a word in the whole database are involved, average accuracies were about 58% for Word-Silence, 60% for Word-Word, 40% for Word-Word-Silence, and 32% for the seven-class problem (6 words + silence). In the short-time mode, where only instances from the same recording are used, the accuracies were 96, 75, 79, and 55%, respectively. Finally, in the mixed-time mode, where the samples of every class are taken from different recordings, the highest performance was achieved, with average accuracies of about 97, 97, 92, and 62%, respectively. These results, even in the worst case of the long-time mode, are meaningfully better than chance and are comparable with the best results reported in previous studies in this area.
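The feature/classifier combination described above maps naturally onto standard tools: Welch spectra at roughly 1 Hz resolution for the features and scikit-learn's one-vs-one SVC wrapper, which is exactly a bank of pairwise binary SVMs combined by majority vote. The sampling rate, epoch length, and toy data below are illustrative assumptions, not the paper's recording parameters.

```python
import numpy as np
from scipy.signal import welch
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

def spectrum_features(epoch, fs=128):
    # epoch: (19 channels, n_samples); ~1 Hz-resolution power spectrum restricted
    # to 1-32 Hz, normalized per channel, then flattened into one feature vector.
    f, p = welch(epoch, fs=fs, nperseg=fs, axis=-1)       # ~1 Hz frequency bins
    band = p[:, (f >= 1) & (f <= 32)]
    band /= band.sum(axis=1, keepdims=True)
    return band.ravel()

# toy data: 70 trials, 19 channels, 5 s at 128 Hz; 7 classes (6 words + silence)
X = np.array([spectrum_features(np.random.randn(19, 640)) for _ in range(70)])
y = np.arange(70) % 7
clf = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)     # majority-vote SVM bank
```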