1
Ahmad I, Zhu M, Li G, Javeed D, Kumar P, Chen S. A Secure and Interpretable AI for Smart Healthcare System: A Case Study on Epilepsy Diagnosis Using EEG Signals. IEEE J Biomed Health Inform 2024;28:3236-3247. [PMID: 38507373] [DOI: 10.1109/jbhi.2024.3366341]
Abstract
Developing an efficient, patient-independent, and interpretable framework for electroencephalogram (EEG) epileptic seizure detection (ESD) is challenging due to the complex, nonstationary nature of EEG signals. Automated detection of epileptic seizures is crucial, and Explainable Artificial Intelligence (XAI) is urgently needed to justify model decisions in clinical applications. Therefore, this study implements an XAI-based computer-aided epileptic seizure detection system (XAI-CAESD) comprising three major modules in a smart healthcare system: a feature engineering module, a seizure detection module, and an explainable decision-making module. To ensure the privacy and security of biomedical EEG data, blockchain technology is employed. Initially, a Butterworth filter eliminates various artifacts, and the Dual-Tree Complex Wavelet Transform (DTCWT) decomposes the EEG signals, from which real and imaginary eigenvalue features are extracted together with frequency-domain and time-domain linear features and Fractal Dimension (FD) non-linear features. The best features are selected using Correlation Coefficients (CC) and Distance Correlation (DC), and the selected features are fed into Stacking Ensemble Classifiers (SEC) for EEG seizure detection. Further, the Shapley Additive Explanations (SHAP) method of XAI is implemented to facilitate the interpretation of the predictions made by the proposed approach, enabling medical experts to make accurate and understandable decisions. The proposed Stacking Ensemble Classifiers in XAI-CAESD demonstrated the best average accuracy, recall, specificity, and F1-score, with an improvement of about 2%, on the University of California, Irvine, Bonn University, and Boston Children's Hospital-MIT EEG data sets. The proposed framework enhances decision-making and the diagnosis process using biomedical EEG signals and ensures data security in smart healthcare systems.
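The stacking step described above can be illustrated with a generic two-level ensemble. This is a minimal sketch assuming scikit-learn and synthetic data; it does not reproduce the authors' EEG features, base models, or the SHAP and blockchain components.

```python
# Minimal stacking-ensemble sketch (generic, not the paper's exact setup):
# base learners produce predictions that a meta-learner then combines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for extracted EEG features (assumption, not real data).
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner over base outputs
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 2))
```

The meta-learner sees the cross-validated outputs of the base models, which is what lets a stack outperform any single base classifier.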
2
Xia S, Zheng S, Wang G, Gao X, Wang B. Granular Ball Sampling for Noisy Label Classification or Imbalanced Classification. IEEE Trans Neural Netw Learn Syst 2023;34:2144-2155. [PMID: 34460405] [DOI: 10.1109/tnnls.2021.3105984]
Abstract
This article presents a general sampling method, called granular-ball sampling (GBS), for classification problems, introducing the idea of granular computing. The GBS method uses adaptively generated hyperballs to cover the data space, and the points on the hyperballs constitute the sampled data. GBS is the first sampling method that not only reduces the data size but also improves the data quality in noisy label classification. In addition, because the GBS method can describe the classification boundary exactly, it can obtain almost the same classification accuracy as the results on the original datasets, and a markedly higher classification accuracy than random sampling. Therefore, for the data-reduction classification task, GBS is a general method that is not restricted to any specific classifier or dataset. Moreover, GBS can be effectively used as an undersampling method for imbalanced classification. It has a time complexity close to O(N), so it can accelerate most classifiers. These advantages make GBS powerful for improving the performance of classifiers. All code has been released in the open-source GBS library at http://www.cquptshuyinxia.com/GBS.html.
3
Lingaraj V, Kaliannan K, Rohini VA, Thevasigamani RK, Chinnasamy K, Durairaj VB, Periasamy K. Design of expert active kNN classifier algorithm using Flow Stroop Colour Word Test to assess flow state. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-212504]
Abstract
Flow state assessment is essential to understand an individual's involvement in a particular assigned task. Without involvement in the assigned task, the individual may in due course develop psychological or physiological illness. National Crime Records Bureau (NCRB) statistics show that non-involvement in a task drives the individual to a state of depression and subsequently to suicide attempts. Therefore, it is essential to detect a decrease in flow level at an early stage and take remedial steps. There are many invasive methods to determine flow state, which are not preferred; the commonly used non-invasive methods are questionnaires and interviews, which are subjective and retrospective and hence easy to fake. Hence, the main objective of this work is to design an efficient flow-level measurement system that measures flow objectively and performs real-time flow classification. Accurate classification is achieved by designing an Expert Active k-Nearest Neighbour (EAkNN) classifier that can classify an individual's flow state towards the assigned task into nine states using non-invasive physiological Electrocardiogram (ECG) signals. The ECG parameters are obtained during performance of the Flow Stroop Colour Word Test (FSCWT). This work thus combines psychological theory, physiological signals, and machine learning concepts. The classifier is designed with a modified voting rule instead of the default majority voting rule, in which the contribution probability of the nearest points to the new data is considered. The dataset is divided into a training set (75%) and a testing set (25%). The classifier is trained and tested with this dataset, and the classification efficiency is 95%.
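The modified voting rule described above, where each neighbour's contribution is weighted rather than counted equally, can be sketched as a distance-weighted kNN. This is a minimal illustration of the general idea; the EAkNN specifics (ECG features, nine flow states) are not reproduced, and the inverse-distance weight is an assumed stand-in for the paper's contribution probability.

```python
# Distance-weighted kNN voting sketch: nearer neighbours get larger votes,
# replacing the default majority-voting rule.
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Vote weight of each neighbour is proportional to inverse distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                 # indices of the k nearest points
    w = 1.0 / (d[idx] + 1e-9)               # contribution weight per neighbour
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 1])
print(weighted_knn_predict(X, y, np.array([0.05, 0.0])))
```

With plain majority voting and k=3, a distant neighbour counts as much as a close one; the weighting above suppresses that effect.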
Affiliation(s)
- Vanitha Lingaraj
- Department of Electronics and Communication Engineering, Prathyusha Engineering College, Thiruvallur, Tamilnadu, India
- Kalaiselvi Kaliannan
- Department of Networking and Communications, School of Computing, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamilnadu, India
- Rajesh Kumar Thevasigamani
- Department of CSE, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamilnadu, India
- Karthikeyan Chinnasamy
- Department of CSE, Koneru Lakshmaiah Education Foundation, Deemed to be University, Vaddeswaram, Andhrapradesh, India
- Vijendra Babu Durairaj
- Department of Electronics and Communication Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission's Research Foundation, Paiyanoor, Tamilnadu, India
- Keerthika Periasamy
- Department of CSE, Kongu Engineering College, Perundurai, Erode, Tamilnadu, India
4
Silva RA, Britto Jr ADS, Enembreck F, Sabourin R, Oliveira LES. CSBF: A static ensemble fusion method based on the centrality score of complex networks. Comput Intell 2020. [DOI: 10.1111/coin.12249]
Affiliation(s)
- Ronan Assumpção Silva
- Postgraduate Program in Informatics (PPGIA), Pontifical Catholic University of Parana (PUCPR), Parana, Brazil
- Department of Informatics, Federal Institute of Parana (IFPR), Parana, Brazil
- Alceu de Souza Britto Jr
- Postgraduate Program in Informatics (PPGIA), Pontifical Catholic University of Parana (PUCPR), Parana, Brazil
- Department of Informatics, State University of Ponta Grossa (UEPG), Parana, Brazil
- Fabricio Enembreck
- Postgraduate Program in Informatics (PPGIA), Pontifical Catholic University of Parana (PUCPR), Parana, Brazil
- Robert Sabourin
- Laboratoire d'Imagerie, de Vision et d'Intelligence Artificielle, École de Technologie Supérieure (ÉTS), Montreal, Canada
5
Li W, Li M, Qiao J, Guo X. A feature clustering-based adaptive modular neural network for nonlinear system modeling. ISA Trans 2020;100:185-197. [PMID: 31767196] [DOI: 10.1016/j.isatra.2019.11.015]
Abstract
To improve the performance of nonlinear system modeling, this study proposes a feature clustering-based adaptive modular neural network (FC-AMNN) that simulates the information-processing mechanism of the human brain, in which different information is processed by different modules in parallel. Firstly, features are clustered using an adaptive feature clustering algorithm, and the number of modules in FC-AMNN is determined automatically by the number of feature clusters. The features in each cluster are then allocated to the corresponding module of FC-AMNN. Next, a self-constructive RBF neural network based on the Error Correction algorithm is adopted as the subnetwork to learn from the allocated features. All modules work in parallel and are finally integrated using a Bayesian method to obtain the output. To demonstrate the effectiveness of the proposed model, FC-AMNN is tested on several UCI benchmark problems as well as a practical problem in the wastewater treatment process. The experimental results show that FC-AMNN achieves better generalization performance and more accurate results for nonlinear system modeling than other modular neural networks.
Affiliation(s)
- Wenjing Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Meng Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Junfei Qiao
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Xin Guo
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
6
Liu Z, Pan Q, Dezert J, Han JW, He Y. Classifier Fusion With Contextual Reliability Evaluation. IEEE Trans Cybern 2018;48:1605-1618. [PMID: 28613193] [DOI: 10.1109/tcyb.2017.2710205]
Abstract
Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to be combined can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for achieving the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on the concepts of inner reliability and relative reliability. The inner reliability, represented by a matrix, characterizes the probability of an object belonging to one class when it is classified to another class; the elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure that reduces the level of conflict between the classifiers by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of classification reliability. The discounted classification results are combined with Dempster-Shafer's rule to support the final class decision. The performance of CF-CRE has been evaluated and compared with that of the main classical fusion methods using real data sets. The experimental results show that CF-CRE generally produces substantially higher accuracy than other fusion methods. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
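The final combination step above relies on Dempster's rule. The sketch below shows that rule for two simple mass functions over a two-class frame {a, b}; the paper's reliability matrices and discounting steps are not reproduced, and the example masses are made up for illustration.

```python
# Dempster's combination rule sketch: multiply masses of intersecting focal
# sets, accumulate the conflicting (empty-intersection) mass, and renormalise.
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset focal elements to mass values."""
    fused, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2          # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

# Two classifiers' (hypothetical) belief masses over classes {a, b}.
m1 = {frozenset("a"): 0.8, frozenset("ab"): 0.2}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
print(dempster_combine(m1, m2))
```

Here 0.4 of the product mass is conflicting (m1's belief in {a} times m2's belief in {b}) and is renormalised away, which is exactly the conflict that the paper's discounting steps aim to reduce before combination.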
7
Hou J, Gao H, Xia Q, Qi N. Feature Combination and the kNN Framework in Object Classification. IEEE Trans Neural Netw Learn Syst 2016;27:1368-1378. [PMID: 26316223] [DOI: 10.1109/tnnls.2015.2461552]
Abstract
In object classification, feature combination can usually be used to combine the strength of multiple complementary features and produce better classification results than any single one. While multiple kernel learning (MKL) is a popular approach to feature combination in object classification, it does not always perform well in practical applications. On one hand, the optimization process in MKL usually involves a huge consumption of computation and memory space. On the other hand, in some cases, MKL is found to perform no better than the baseline combination methods. This observation motivates us to investigate the underlying mechanism of feature combination with average combination and weighted average combination. As a result, we empirically find that in average combination, it is better to use a sample of the most powerful features instead of all, whereas in one type of weighted average combination, the best classification accuracy comes from a nearly sparse combination. We integrate these observations into the k-nearest neighbors (kNNs) framework, based on which we further discuss some issues related to sparse solution and MKL. Finally, by making use of the kNN framework, we present a new weighted average combination method, which is shown to perform better than MKL in both accuracy and efficiency in experiments. We believe that the work in this paper is helpful in exploring the mechanism underlying feature combination.
8
Ensemble classifier for epileptic seizure detection for imperfect EEG data. ScientificWorldJournal 2015;2015:945689. [PMID: 25759863] [PMCID: PMC4334942] [DOI: 10.1155/2015/945689]
Abstract
Brain status information is captured by physiological electroencephalogram (EEG) signals, which are extensively used to study different brain activities. This study investigates the use of a new ensemble classifier to detect an epileptic seizure from compressed and noisy EEG signals. This noise-aware signal combination (NSC) ensemble classifier combines four classification models based on their individual performance. The main objective of the proposed classifier is to enhance the classification accuracy in the presence of noisy and incomplete information while preserving a reasonable amount of complexity. The experimental results show the effectiveness of the NSC technique, which yields higher accuracies of 90% for noiseless data compared with 85%, 85.9%, and 89.5% in other experiments. The accuracy for the proposed method is 80% when SNR=1 dB, 84% when SNR=5 dB, and 88% when SNR=10 dB, while the compression ratio (CR) is 85.35% for all of the datasets mentioned.
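The idea of combining models "based on their individual performance" can be sketched as a probability fusion whose weights are proportional to each model's validation accuracy. This is a generic illustration under assumed numbers; the four specific NSC models and the noise-aware handling are not reproduced.

```python
# Performance-weighted ensemble sketch: fuse per-classifier class
# probabilities with weights proportional to each classifier's accuracy.
import numpy as np

def performance_weighted_vote(probas, accuracies):
    """probas: list of (n_samples, n_classes) arrays, one per classifier."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                          # normalise accuracies into weights
    fused = sum(wi * p for wi, p in zip(w, probas))
    return fused.argmax(axis=1)              # fused class decision per sample

# Hypothetical outputs of two classifiers on two samples (two classes).
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.2, 0.8]])
print(performance_weighted_vote([p1, p2], accuracies=[0.90, 0.80]).tolist())
```

A more accurate classifier thus pulls the fused decision toward its own prediction, which is the behaviour a performance-based combiner is after.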
9
Paisitkriangkrai S, van den Hengel A. A scalable stagewise approach to large-margin multiclass loss-based boosting. IEEE Trans Neural Netw Learn Syst 2014;25:1002-1013. [PMID: 24808045] [DOI: 10.1109/tnnls.2013.2282369]
Abstract
We present a scalable and effective classification model for training multiclass boosting on multiclass classification problems. A direct formulation of multiclass boosting has been introduced previously, in the sense that it directly maximized the multiclass margin; the major problem of that approach is its high computational complexity during training, which hampers its application to real-world problems. In this paper, we propose a scalable and simple stagewise multiclass boosting method that also directly maximizes the multiclass margin. Our approach offers the following advantages: 1) it is simple and computationally efficient to train, and can speed up training by more than two orders of magnitude without sacrificing classification accuracy; and 2) like traditional AdaBoost, it is less sensitive to the choice of parameters and empirically demonstrates excellent generalization performance. Experimental results on challenging multiclass machine learning and vision tasks demonstrate that the proposed approach substantially improves the convergence rate and the accuracy of the final visual detector at no additional computational cost compared to existing multiclass boosting methods.
11
Paisitkriangkrai S, Shen C, Shi Q, van den Hengel A. RandomBoost: simplified multiclass boosting through randomization. IEEE Trans Neural Netw Learn Syst 2014;25:764-779. [PMID: 24807953] [DOI: 10.1109/tnnls.2013.2281214]
Abstract
We propose a novel boosting approach to multiclass classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multiclass classification. The result is a multiclass classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning, and visual recognition data sets demonstrate that our proposed methods could be compared favorably with existing multiclass boosting algorithms in terms of both the convergence rate and classification accuracy.
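The core primitive behind the first variant, projecting the original data through a fixed random matrix before learning, can be sketched in a few lines. This is only the projection step under assumed dimensions; the boosting machinery built around it in the paper is omitted.

```python
# Random-projection sketch: map 50-dimensional inputs to 10 dimensions with
# a fixed Gaussian matrix; weak learners would then be trained on Z.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))      # original high-dimensional data (assumed)
P = rng.normal(size=(50, 10))       # fixed random projection matrix
Z = X @ P                           # projected features for the weak learners
print(Z.shape)
```

Because P is drawn once and never trained, the projection adds essentially no training cost while reducing the dimensionality the downstream learners must handle.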
12
Rahman A, Verma B. Cluster based ensemble classifier generation by joint optimization of accuracy and diversity. Int J Comput Intell Appl 2013. [DOI: 10.1142/s1469026813400038]
Abstract
This paper presents an algorithm to generate an ensemble classifier by joint optimization of accuracy and diversity. For the ensemble classifier to be more accurate, the base classifiers in an ensemble are expected to be both accurate and diverse (i.e., complementary in terms of errors). We adopt a multi-objective evolutionary algorithm (MOEA) for joint optimization of accuracy and diversity on our recently developed nonuniform layered cluster oriented ensemble classifier (NULCOEC). In NULCOEC, the data set is partitioned into a variable number of clusters at different layers, and base classifiers are then trained on the clusters at each layer. The performance of NULCOEC is a function of the vector of the numbers of layers and clusters. The research presented in this paper investigates the implications of applying an MOEA to generate NULCOEC. Accuracy and diversity of the ensemble classifier are expressed as a function of layers and clusters, and the MOEA then searches for the combinations of layers and clusters that yield the nondominated set of (accuracy, diversity) pairs. We have obtained the results of single-objective optimization (i.e., optimizing either accuracy or diversity) and compared them with the results of the MOEA on sixteen UCI data sets. The results show that the MOEA can improve the performance of the ensemble classifier.
Collapse
Affiliation(s)
- Ashfaqur Rahman
- CSIRO Computational Informatics, Hobart, Tasmania 7001, Australia
- Brijesh Verma
- Central Queensland University, Rockhampton, QLD 4702, Australia
13
Yang J, Zeng X, Zhong S, Wu S. Effective neural network ensemble approach for improving generalization performance. IEEE Trans Neural Netw Learn Syst 2013;24:878-887. [PMID: 24808470] [DOI: 10.1109/tnnls.2013.2246578]
Abstract
Aiming to improve neural networks' generalization performance, this paper proposes an effective neural network ensemble approach with two novel ideas. One is to use a neural network's output sensitivity as a measure of output diversity at inputs near the training samples, so that diverse individuals can be selected from a pool of well-trained neural networks; the other is to employ a learning mechanism that assigns complementary weights for the combination of the selected individuals. Experimental results show that the proposed approach constructs a neural network ensemble with better generalization performance than that of any individual combined with all the other individuals, and better than that of ensembles with simply averaged weights.