1. Xu X, Liu J, Li E. Delayed self-feedback echo state network for long-term dynamics of hyperchaotic systems. Phys Rev E 2024; 109:064210. PMID: 39020943. DOI: 10.1103/physreve.109.064210.
Abstract
Analyzing the long-term behavior of hyperchaotic systems poses formidable challenges in nonlinear science. This paper proposes a data-driven model called the delayed self-feedback echo state network (self-ESN), designed specifically for predicting the evolution of hyperchaotic systems. The self-ESN incorporates a delayed self-feedback term into the dynamic equation of the reservoir to reflect the finite transmission speed of neuron signals. Delayed self-feedback establishes a connection between the current reservoir state and the state m time steps earlier, providing an effective means to capture the dynamic characteristics of the system and thereby significantly improving memory performance. In addition, the concept of a local echo state property (ESP) is introduced to relax the conventional ESP condition, and a theoretical analysis guides the selection of the feedback delay and gain so as to ensure the local ESP. Judicious selection of the feedback gain and delay in the self-ESN improves prediction accuracy and avoids the difficulty of finding optimal reservoir parameters in conventional ESN models. Numerical experiments assess the long-term prediction capabilities of the self-ESN across various scenarios, including a 4D hyperchaotic system, a hyperchaotic network, and an infinite-dimensional delayed chaotic system. The experiments involve reconstructing bifurcation diagrams, predicting chaotic synchronization, examining spatiotemporal evolution patterns, and uncovering hidden attractors. The results underscore the capability of the proposed self-ESN as a strategy for long-term prediction and analysis of complex systems.
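The reservoir update with a delayed self-feedback term can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reservoir size N, delay m, gain k, spectral-radius scaling, and the sinusoidal driving signal are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, k = 100, 5, 0.2                    # reservoir size, feedback delay, feedback gain
W_in = rng.uniform(-0.5, 0.5, N)         # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u, W, W_in, k, m):
    """Drive the reservoir with input u; the next state depends on the
    current state x(t) and the delayed state x(t - m)."""
    T, N = len(u), W.shape[0]
    X = np.zeros((T + 1, N))             # X[t] is the state after t inputs
    for t in range(T):
        delayed = X[t - m] if t >= m else np.zeros(N)
        X[t + 1] = np.tanh(W @ X[t] + W_in * u[t] + k * delayed)
    return X[1:]

u = np.sin(0.1 * np.arange(200))         # toy scalar driving signal
X = run_reservoir(u, W, W_in, k, m)
print(X.shape)   # (200, 100)
```

Setting k = 0 recovers the conventional ESN update, so the delayed term is a strict extension of the standard model.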
2. Chen A, Zhou X, Fan Y, Chen H. Underground Diagnosis Based on GPR and Learning in the Model Space. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024; 46:3832-3844. PMID: 38153824. DOI: 10.1109/tpami.2023.3347739.
Abstract
Ground Penetrating Radar (GPR) has been widely used in pipeline detection and underground diagnosis. In practical applications, the characteristics of the GPR data in the surveyed area and the likely underground anomalous structures are rarely known before the obtained GPR data are fully analyzed, which makes it challenging to identify underground structures or anomalies automatically. In this article, a GPR B-scan image diagnosis method based on learning in the model space is proposed. The idea of learning in the model space is to use models fitted on parts of the data as more stable and parsimonious representations of the data. For GPR images, a 2-Direction Echo State Network (2D-ESN) is proposed to fit image segments through next-item prediction. By building connections between the points of the image in both the horizontal and vertical directions, the 2D-ESN treats a GPR image segment as a whole and can effectively capture its dynamic characteristics. Semi-supervised and supervised learning methods can then be applied on top of the 2D-ESN models for underground diagnosis. Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
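The "learning in the model space" idea can be illustrated with a toy sketch: fit a small ridge model to each image segment (a next-column predictor standing in for the 2D-ESN) and compare segments by the distance between their fitted parameters. The segment sizes, noise levels, and ridge penalty below are arbitrary assumptions.

```python
import numpy as np

def segment_to_model(segment, alpha=1.0):
    """Fit a ridge 'next-column prediction' model to a 2D image segment
    and return its flattened weights as the segment's representation."""
    X, Y = segment[:, :-1].T, segment[:, 1:].T    # predict column t+1 from column t
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    return W.ravel()

rng = np.random.default_rng(1)
flat = rng.normal(0, 0.05, (8, 50))                # homogeneous background
flat2 = rng.normal(0, 0.05, (8, 50))               # same class, new noise draw
anomaly = flat + 2 * np.sin(0.5 * np.arange(50))   # structured disturbance

d_same = np.linalg.norm(segment_to_model(flat) - segment_to_model(flat2))
d_diff = np.linalg.norm(segment_to_model(flat) - segment_to_model(anomaly))
print(d_same < d_diff)   # similar segments lie closer in model space
```

Clustering or classifying these fitted-weight vectors, rather than the raw pixels, is the essence of diagnosis in the model space.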
3. Wang X, Jin Y, Du W, Wang J. Evolving Dual-Threshold Bienenstock-Cooper-Munro Learning Rules in Echo State Networks. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:1572-1583. PMID: 35763483. DOI: 10.1109/tnnls.2022.3184004.
Abstract
In the existing Bienenstock-Cooper-Munro (BCM) learning rule, the strengthening and weakening of synaptic strength are determined by a sliding long-term potentiation (LTP) modification threshold and the afferent synaptic activities. As a consequence, synaptic long-term depression (LTD) affects even low-activity synapses during the induction of synaptic plasticity, which may lead to information loss. Biological experiments have identified a second, LTD threshold that can induce potentiation, depression, or no change, even at activated synapses. In addition, existing BCM learning rules use a single set of fixed rule parameters, which is biologically implausible and practically inflexible for learning the structural information of input signals. In this article, an evolved dual-threshold BCM learning rule is proposed to regulate the internal reservoir connection weights of an echo state network (ESN); it alleviates information loss and enhances learning performance by introducing different optimal LTD thresholds for different postsynaptic neurons. Our experimental results show that the evolved dual-threshold BCM learning rule enables synergistic learning across different plasticity rules, effectively improving the learning performance of an ESN in comparison with existing neural plasticity learning rules and several state-of-the-art ESN variants on three widely used benchmark tasks and the prediction of an esterification process.
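A hedged sketch of a dual-threshold BCM-style update follows; the exact functional form, threshold values, and evolutionary tuning in the paper may differ. The point illustrated is that synapses whose postsynaptic activity falls below the LTD threshold are left unchanged, protecting weakly active synapses from depression.

```python
import numpy as np

def dual_threshold_bcm(w, x, y, theta_ltp, theta_ltd, eta=0.01):
    """Update synaptic weights w given presynaptic activity x and
    postsynaptic activity y, using two thresholds: above theta_ltp the
    synapse is potentiated, between theta_ltd and theta_ltp it is
    depressed, and below theta_ltd it is left unchanged."""
    if y > theta_ltp:
        dw = eta * y * (y - theta_ltp) * x        # LTP
    elif y > theta_ltd:
        dw = -eta * y * (theta_ltp - y) * x       # LTD
    else:
        dw = np.zeros_like(w)                     # no change: low-active synapses protected
    return w + dw

w = np.array([0.5, 0.5])
x = np.array([1.0, 0.2])
print(dual_threshold_bcm(w, x, y=0.9, theta_ltp=0.6, theta_ltd=0.3))  # potentiated
print(dual_threshold_bcm(w, x, y=0.1, theta_ltp=0.6, theta_ltd=0.3))  # unchanged
```

In the paper, theta_ltd is optimized per postsynaptic neuron by an evolutionary algorithm; here it is simply a fixed argument.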
4. Jastrzebska A, Napoles G, Homenda W, Vanhoof K. Fuzzy Cognitive Map-Driven Comprehensive Time-Series Classification. IEEE Transactions on Cybernetics 2023; 53:1348-1359. PMID: 34936564. DOI: 10.1109/tcyb.2021.3133597.
Abstract
This article presents a comprehensive approach to time-series classification. The proposed model employs a fuzzy cognitive map (FCM) as its classification engine. Preprocessed input data feed the FCM, and the map's responses, after a postprocessing procedure, yield the final classification decision. The time-series data are staged using the moving-window technique to capture the flow of time in the training procedure. A backward error propagation algorithm trains the model. Four hyperparameters require tuning. Two are crucial for model construction: 1) the FCM size (number of concepts) and 2) the window size (for the moving-window technique). The other two are important for training: 1) the number of epochs and 2) the learning rate. Two distinguishing aspects of the proposed model are worth noting: 1) the separation of the classification engine from pre- and postprocessing and 2) the capture of time flow for data from the concept space. The proposed classifier combines the key advantage of the FCM model, its interpretability, with the superior classification performance attributed to the specially designed pre- and postprocessing stages. Experiments demonstrate that the proposed model performs well against a wide range of state-of-the-art time-series classification algorithms.
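The moving-window staging and a single FCM reasoning step can be sketched as follows; the map size, window size, and random weight matrix are placeholder assumptions, and the pre- and postprocessing stages are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fcm_responses(series, window, W):
    """Slide a window over the series; each window supplies the initial
    concept activations, and one FCM reasoning step gives the response."""
    out = []
    for start in range(len(series) - window + 1):
        a = series[start:start + window]      # window values activate the concepts
        out.append(sigmoid(W @ a))            # one step of FCM reasoning
    return np.array(out)

rng = np.random.default_rng(2)
W = rng.uniform(-1, 1, (4, 4))                # 4 concepts, one per window element
series = np.sin(np.linspace(0, 6, 20))
R = fcm_responses(series, 4, W)
print(R.shape)   # (17, 4): one map response per window position
```

In the full model, W is learned by backpropagation and the stacked responses R are postprocessed into a class decision.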
5. Multiscale Echo Self-Attention Memory Network for Multivariate Time Series Classification. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.11.066.
6. Broad fuzzy cognitive map systems for time series classification. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109458.
7. Wang X, Jin Y, Hao K. Computational Modeling of Structural Synaptic Plasticity in Echo State Networks. IEEE Transactions on Cybernetics 2022; 52:11254-11266. PMID: 33760748. DOI: 10.1109/tcyb.2021.3060466.
Abstract
Most existing studies on computational modeling of neural plasticity have focused on synaptic plasticity. However, regulating the internal weights of a reservoir based on synaptic plasticity alone often results in unstable learning dynamics. In this article, a structural synaptic plasticity learning rule is proposed that trains the weights and adds or removes neurons within the reservoir; it is shown to alleviate the instability of synaptic plasticity and to help increase the memory capacity of the network. Our experimental results also reveal that a few stronger connections may persist for a long period in a constantly changing network structure and are relatively resistant to decay or disruption during learning. These results are consistent with evidence observed in biological systems. Finally, we show that an echo state network (ESN) using the proposed structural plasticity rule outperforms an ESN using synaptic plasticity and three state-of-the-art ESNs on four benchmark tasks.
8. An adaptive particle swarm optimization-based hybrid long short-term memory model for stock price time series forecasting. Soft Comput 2022; 26:12115-12135. PMID: 36043118. PMCID: PMC9415266. DOI: 10.1007/s00500-022-07451-8.
Abstract
In this paper, we present a hybrid deep learning model based on a long short-term memory (LSTM) network and adaptive particle swarm optimization (PSO) for short- and long-term forecasting of three major stock indices: the Sensex, S&P 500, and Nifty 50. Although an LSTM can handle uncertain, sequential, and nonlinear data, the biggest challenge is optimizing its weights and biases. The backpropagation-through-time algorithm tends to overfit the data and become stuck in local minima. We therefore propose a PSO-based hybrid deep learning model that evolves the initial weights of the LSTM and the fully connected layer (FCL). Furthermore, we introduce an adaptive scheme that adjusts the inertia coefficient of PSO using the particles' velocities. The proposed method combines adaptive PSO with the Adam optimizer for training the LSTM: the adaptive PSO evolves the initial weights in the different layers of the LSTM network and the FCL. This research also compares the forecasting efficacy of the proposed method to a genetic algorithm (GA)-based hybrid LSTM model, the Elman neural network (ENN), and a standard LSTM. Experimental findings demonstrate that the proposed model succeeds in finding good initial weights and biases for the LSTM and FC layers and achieves superior forecasting accuracy.
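A velocity-driven adaptive inertia coefficient can be sketched as below. The specific adaptation rule (exponential decay in the swarm's mean particle speed) is an illustrative assumption, not necessarily the one used in the paper; the particle positions stand in for candidate initial weights of an LSTM layer.

```python
import numpy as np

def adaptive_inertia(velocities, w_min=0.4, w_max=0.9):
    """Scale inertia by the swarm's mean speed: a fast-moving swarm gets a
    smaller inertia (exploitation), a stalled swarm a larger one (exploration)."""
    speed = np.mean(np.linalg.norm(velocities, axis=1))
    return w_min + (w_max - w_min) * np.exp(-speed)

def pso_step(pos, vel, pbest, gbest, rng, c1=1.5, c2=1.5):
    """One standard PSO velocity/position update with the adaptive inertia."""
    w = adaptive_inertia(vel)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

rng = np.random.default_rng(3)
pos = rng.normal(size=(10, 5))        # 10 particles, each a 5-weight slice
vel = np.zeros_like(pos)
pbest, gbest = pos.copy(), pos[0]     # stand-in bests for one illustrative step
pos, vel = pso_step(pos, vel, pbest, gbest, rng)
print(pos.shape)   # (10, 5)
```

In the full method, the fitness evaluated for pbest/gbest would be the validation loss of an LSTM initialized from each particle, with Adam finishing the training.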
9. Shang R, Zhao K, Zhang W, Feng J, Li Y, Jiao L. Evolutionary multiobjective overlapping community detection based on similarity matrix and node correction. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109397.
10. Ma Q, Chen Z, Tian S, Ng WWY. Difference-Guided Representation Learning Network for Multivariate Time-Series Classification. IEEE Transactions on Cybernetics 2022; 52:4717-4727. PMID: 33270568. DOI: 10.1109/tcyb.2020.3034755.
Abstract
Multivariate time series (MTS) are widely found in many important application fields, for example, medicine, multimedia, manufacturing, action recognition, and speech recognition, and their accurate classification has become an important research topic. Traditional MTS classification methods do not explicitly model the temporal difference information of time series, which is important because it reflects the dynamics of their evolution. In this article, the difference-guided representation learning network (DGRL-Net) is proposed to guide representation learning of time series using dynamic evolution information. The DGRL-Net consists of a difference-guided layer and a multiscale convolutional layer. First, in the difference-guided layer, we propose a difference-gated LSTM that models the time dependency and dynamic evolution of the time series to obtain feature representations of both the raw and the difference series. These two representations are then used as the two input channels of the multiscale convolutional layer to extract multiscale information. Extensive experiments demonstrate that the proposed model outperforms state-of-the-art methods on 18 MTS benchmark datasets and achieves competitive results on two skeleton-based action recognition datasets. Furthermore, an ablation study and visualization analysis verify the effectiveness of the proposed model.
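Building the raw and first-difference views as two input channels can be sketched as follows; padding the difference channel with a zero row to preserve length is one reasonable choice, not necessarily the paper's.

```python
import numpy as np

def difference_channels(x):
    """Stack the raw series and its first difference as two input channels;
    the difference channel carries the dynamic-evolution information."""
    diff = np.vstack([np.zeros((1, x.shape[1])), np.diff(x, axis=0)])  # pad to length T
    return np.stack([x, diff])              # shape (2, T, D)

x = np.cumsum(np.ones((5, 3)), axis=0)      # each of 3 variables grows by 1 per step
channels = difference_channels(x)
print(channels.shape)      # (2, 5, 3)
print(channels[1, 1:])     # the difference channel is 1 everywhere after the pad
```

In DGRL-Net these two channels feed the multiscale convolutional layer; here they are simply stacked to show the data layout.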
11. Chen Z, Liu Y, Zhu J, Zhang Y, Jin R, He X, Tao J, Chen L. Time-frequency deep metric learning for multivariate time series classification. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.07.073.
12. Wu W, Zhang F, Wang C, Yuan C. Dynamical pattern recognition for sampling sequences based on deterministic learning and structural stability. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.06.001.
13. Ma Q, Li S, Zhuang W, Li S, Wang J, Zeng D. Self-Supervised Time Series Clustering With Model-Based Dynamics. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3942-3955. PMID: 32866103. DOI: 10.1109/tnnls.2020.3016291.
Abstract
Time series clustering is an essential unsupervised task when category information is unavailable, and it has a wide range of applications. However, existing time series clustering methods usually either ignore the temporal dynamics of time series or isolate feature extraction from the clustering task, without considering the interaction between the two. In this article, a time series clustering framework named the self-supervised time series clustering network (STCN) is proposed to optimize feature extraction and clustering simultaneously. In the feature extraction module, a recurrent neural network (RNN) performs one-step time series prediction, which acts as a reconstruction of the input data, capturing the temporal dynamics and preserving the local structure of the time series. The parameters of the RNN's output layer are treated as model-based dynamic features and fed into a self-supervised clustering module to obtain the predicted labels. To bridge the two modules, we employ spectral analysis to constrain similar features to share the same pseudoclass labels, and we align the predicted labels with those pseudolabels. STCN is trained by iteratively updating the model parameters and the pseudoclass labels. Experiments conducted on extensive time series datasets show that STCN achieves state-of-the-art performance, and visualization analysis further demonstrates the effectiveness of the proposed model.
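The idea of model-based dynamic features can be illustrated with a linear stand-in for the RNN: fit a one-step predictor to each series and use its coefficients as features, so that series governed by the same dynamics map to nearby points. The autoregressive order, ridge penalty, and sinusoidal test signals are illustrative assumptions.

```python
import numpy as np

def dynamic_features(series, order=3, alpha=0.1):
    """Fit a ridge one-step autoregressive predictor and return its
    coefficients as the series' model-based dynamic features."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    return np.linalg.solve(X.T @ X + alpha * np.eye(order), X.T @ y)

t = np.linspace(0, 8 * np.pi, 200)
feats = np.array([dynamic_features(np.sin(f * t)) for f in (1.0, 1.0, 3.0)])
d_same = np.linalg.norm(feats[0] - feats[1])   # identical dynamics
d_diff = np.linalg.norm(feats[0] - feats[2])   # different frequency
print(d_same < d_diff)   # True: same-dynamics series share features
```

STCN learns such features with an RNN and couples them to a self-supervised clustering module; any standard clustering algorithm run on `feats` gives the uncoupled, simplified version of the idea.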
14. Chen L, Chen D, Yang F, Sun J. A deep multi-task representation learning method for time series classification and retrieval. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2020.12.062.
15. Bianchi FM, Scardapane S, Lokse S, Jenssen R. Reservoir Computing Approaches for Representation and Classification of Multivariate Time Series. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:2169-2179. PMID: 32598284. DOI: 10.1109/tnnls.2020.3001377.
Abstract
Classification of multivariate time series (MTS) has been tackled with a large variety of methodologies and applied to a wide range of scenarios. Reservoir computing (RC) provides efficient tools to generate a vectorial, fixed-size representation of an MTS that can be further processed by standard classifiers. Despite their unrivaled training speed, MTS classifiers based on a standard RC architecture fail to achieve the same accuracy as fully trainable neural networks. In this article, we introduce the reservoir model space, an unsupervised approach based on RC for learning vectorial representations of MTS. Each MTS is encoded in the parameters of a linear model trained to predict a low-dimensional embedding of the reservoir dynamics. Compared with other RC methods, our model space yields better representations and attains comparable computational performance, thanks to an intermediate dimensionality-reduction procedure. As a second contribution, we propose a modular RC framework for MTS classification, together with an associated open-source Python library. The framework provides different modules for seamlessly implementing advanced RC architectures. These architectures are compared with other MTS classifiers, including deep learning models and time series kernels. Results obtained on benchmark and real-world MTS datasets show that RC classifiers are dramatically faster and, when implemented using our proposed representation, also achieve superior classification accuracy.
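A minimal sketch of the reservoir model space follows: run a fixed reservoir, project its states to a low-dimensional embedding (PCA here, as one possible choice), and encode the series as the ridge weights of a model predicting the next embedded state. All sizes and penalties are placeholder assumptions; the authors' open-source library implements the full pipeline.

```python
import numpy as np

def reservoir_states(u, W, W_in):
    """Drive a fixed random reservoir with the (scalar) input series u."""
    x, states = np.zeros(W.shape[0]), []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        states.append(x)
    return np.array(states)

def model_space_repr(u, W, W_in, dim=5, alpha=1.0):
    """Encode a series as the ridge weights of a model that predicts the
    next low-dimensional embedding of the reservoir state."""
    X = reservoir_states(u, W, W_in)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:dim].T                          # PCA embedding of the states
    A, B = Z[:-1], Z[1:]
    W_ro = np.linalg.solve(A.T @ A + alpha * np.eye(dim), A.T @ B)
    return W_ro.ravel()                          # fixed-size representation

rng = np.random.default_rng(4)
N = 50
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, N)
r = model_space_repr(np.sin(0.2 * np.arange(300)), W, W_in)
print(r.shape)   # (25,)
```

Any standard classifier can then be trained on these fixed-size vectors, one per series.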
16. Zhou Y, Yen GG, Yi Z. A Knee-Guided Evolutionary Algorithm for Compressing Deep Neural Networks. IEEE Transactions on Cybernetics 2021; 51:1626-1638. PMID: 31380778. DOI: 10.1109/tcyb.2019.2928174.
Abstract
Deep neural networks (DNNs) have become fundamental tools in many disciplines. At the same time, they are known for their large numbers of parameters, high redundancy in weights, and extensive computing-resource consumption, which poses a tremendous challenge for deployment in real-time applications or on resource-constrained devices. To cope with this issue, compressing DNNs to accelerate their inference has drawn extensive interest recently. The basic idea is to prune parameters that cause little performance degradation. However, the overparameterized nature of DNNs and the conflict between parameter reduction and performance maintenance make it prohibitive to search the pruning parameter space manually. In this paper, we formally cast filter pruning as a multiobjective optimization problem and propose a knee-guided evolutionary algorithm (KGEA) that automatically searches for solutions with a good tradeoff between the number of parameters and performance, optimizing both conflicting objectives simultaneously. In particular, by incorporating a minimum-Manhattan-distance approach, the search effort in the proposed KGEA is explicitly guided toward the knee area of the Pareto front, which greatly facilitates choosing a good tradeoff solution. Moreover, parameter importance is estimated directly from the performance loss, which robustly identifies redundancy. In addition to the knee solution, a performance-improved model can also be found without fine-tuning. Experiments on compressing fully convolutional LeNet and VGG-19 networks validate the superiority of the proposed algorithm over state-of-the-art competing methods.
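Knee selection by minimum Manhattan distance can be sketched in a few lines: normalize each (minimized) objective, then pick the front member closest in L1 distance to the ideal point. The toy front below is an invented example, not data from the paper.

```python
import numpy as np

def knee_point(front):
    """Return the index of the knee solution: after normalizing each
    minimized objective to [0, 1], pick the point with the smallest
    Manhattan distance to the ideal point (0, ..., 0)."""
    f = np.asarray(front, dtype=float)
    f = (f - f.min(axis=0)) / (f.max(axis=0) - f.min(axis=0))
    return int(np.argmin(f.sum(axis=1)))

# toy front of (parameter count, error) pairs; the middle one balances both
front = [(1.0, 0.50), (0.5, 0.05), (0.1, 0.45)]
print(knee_point(front))   # 1
```

In KGEA this measure steers the evolutionary search itself toward the knee region rather than being applied only once at the end.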
17. Luo J, Ma H, Zhou D. A pareto ensemble based spectral clustering framework. Complex Intell Syst 2021. DOI: 10.1007/s40747-020-00215-7.
Abstract
Similarity matrices have a significant effect on the performance of spectral clustering, and determining the neighborhoods in the similarity matrix effectively is one of its main difficulties. In this paper, a "divide and conquer" strategy is proposed to model the similarity-matrix construction task using a multiobjective evolutionary algorithm (MOEA). The procedure is divided into two phases: Phase I determines the nonzero entries of the similarity matrix, and Phase II determines the values of those nonzero entries. In Phase I, the main contribution is that we model the task as a biobjective dynamic optimization problem that optimizes diversity and similarity at the same time. Each individual determines one nonzero entry for each sample, so the encoding length decreases to O(N), in contrast with non-ensemble multiobjective spectral clustering. In addition, a specific initialization operator and a diversity-preservation strategy are proposed for this phase. In Phase II, three ensemble strategies are designed to determine the values of the nonzero entries of the similarity matrix. Furthermore, the Pareto ensemble framework is extended to semi-supervised clustering by transforming the semi-supervised information into constraints. In contrast with previous multiobjective evolutionary spectral clustering algorithms, the proposed Pareto ensemble-based framework balances time cost and clustering accuracy, as demonstrated in the experiments section.
18. Confidence-based early classification of multivariate time series with multiple interpretable rules. Pattern Anal Appl 2019. DOI: 10.1007/s10044-019-00782-7.