1. Qian G, Yu X, Mei J, Liu J, Wang S. A Class of Adaptive Filtering Algorithms Based on Improper Complex Correntropy. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.076
2. Na X, Han M, Ren W, Zhong K. Modified BBO-Based Multivariate Time-Series Prediction System With Feature Subset Selection and Model Parameter Optimization. IEEE Trans Cybern 2022; 52:2163-2173. PMID: 32639932. DOI: 10.1109/tcyb.2020.2977375
Abstract
Multivariate time-series prediction is a challenging research topic in the field of time-series analysis and modeling, and remains under active research. The echo state network (ESN), a type of efficient recurrent neural network, has been widely used in time-series prediction, but when using an ESN, two crucial problems must be confronted: 1) how to select the optimal subset of input features and 2) how to set suitable model parameters. To solve these problems, the modified biogeography-based optimization ESN (MBBO-ESN) system is proposed for system modeling and multivariate time-series prediction; it can simultaneously achieve feature subset selection and model parameter optimization. The proposed MBBO algorithm is an improved evolutionary algorithm based on biogeography-based optimization (BBO), which utilizes an S-type population migration rate model, a covariance matrix migration strategy, and a Lévy distribution mutation strategy to enhance rotation invariance and exploration ability. Furthermore, the MBBO algorithm can not only optimize the key parameters of the ESN model but also use a hybrid-metric feature selection method to remove redundancies and distinguish the importance of the input features. Compared with traditional methods, the proposed MBBO-ESN system can discover the relationship between the input features and the model parameters automatically and make the prediction more accurate. The experimental results on benchmark and real-world datasets demonstrate that MBBO outperforms other traditional evolutionary algorithms, and that the MBBO-ESN system is more competitive in multivariate time-series prediction than other classic machine-learning models.
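The model parameters mentioned in the abstract (reservoir size, spectral radius, input scaling, leak rate) are exactly what an outer optimizer such as MBBO would search over. As a rough, numpy-only sketch of the underlying ESN pipeline (not the paper's implementation; all names, sizes, and values here are illustrative), a reservoir can be built, run, and read out with ridge regression as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_esn(n_in, n_res, spectral_radius=0.9, input_scaling=0.5, leak=0.3):
    """Build a random reservoir. spectral_radius, input_scaling, and leak are
    the kind of hyperparameters an outer evolutionary search would tune."""
    W_in = input_scaling * rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # Rescale so the largest eigenvalue magnitude equals spectral_radius.
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W, leak

def run_reservoir(W_in, W, leak, U):
    """Collect echo states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in U:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead prediction task with a ridge-regression readout.
T, washout = 500, 100
u = np.sin(np.arange(T + 1) * 0.2)[:, None]
W_in, W, leak = make_esn(1, 100)
X = run_reservoir(W_in, W, leak, u[:-1])
Y = u[1:]
X_t, Y_t = X[washout:], Y[washout:]           # discard the transient
W_out = np.linalg.solve(X_t.T @ X_t + 1e-6 * np.eye(100), X_t.T @ Y_t)
pred = X_t @ W_out
```

A feature-selection wrapper in the spirit of the paper would additionally mask columns of the input before `W_in` is applied; only the readout `W_out` is trained, which is what makes the outer search affordable.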
3. Ma Q, Chen E, Lin Z, Yan J, Yu Z, Ng WWY. Convolutional Multitimescale Echo State Network. IEEE Trans Cybern 2021; 51:1613-1625. PMID: 31217137. DOI: 10.1109/tcyb.2019.2919648
Abstract
As efficient recurrent neural network (RNN) models, echo state networks (ESNs) have attracted widespread attention and have been applied in many application domains in the last decade. Although they have achieved great success in modeling time series, a single ESN may have difficulty in capturing the multitimescale structures that naturally exist in temporal data. In this paper, we propose the convolutional multitimescale ESN (ConvMESN), which is a novel training-efficient model for capturing multitimescale structures and multiscale temporal dependencies of temporal data. In particular, a multitimescale memory encoder is constructed with a multireservoir structure, in which different reservoirs have recurrent connections with different skip lengths (or time spans). By collecting all past echo states in each reservoir, this multireservoir structure encodes the history of a time series as nonlinear multitimescale echo state representations (MESRs). Our visualization analysis verifies that the MESRs provide better discriminative features for time series. Finally, multiscale temporal dependencies of MESRs are learned by a convolutional layer. By leveraging the multitimescale reservoirs followed by a convolutional learner, the ConvMESN has not only efficient memory encoding ability for temporal data with multitimescale structures but also strong learning ability for complex temporal dependencies. Furthermore, the training-free reservoirs and the single convolutional layer provide high computational efficiency for the ConvMESN to model complex temporal data. Extensive experiments on 18 multivariate time series (MTS) benchmark datasets and 3 skeleton-based action recognition datasets demonstrate that the ConvMESN captures multitimescale dynamics and outperforms existing methods.
4. Optimizing Deep Belief Echo State Network with a Sensitivity Analysis Input Scaling Auto-Encoder algorithm. Knowl Based Syst 2020. DOI: 10.1016/j.knosys.2019.105257
8. Optimizing simple deterministically constructed cycle reservoir network with a Redundant Unit Pruning Auto-Encoder algorithm. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.05.035
9. Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application. Neural Netw 2019; 115:50-64. DOI: 10.1016/j.neunet.2019.02.011
10. Ma Q, Zhuang W, Shen L, Cottrell GW. Time series classification with Echo Memory Networks. Neural Netw 2019; 117:225-239. PMID: 31176962. DOI: 10.1016/j.neunet.2019.05.008
Abstract
Echo state networks (ESNs) are randomly connected recurrent neural networks (RNNs) that can be used as a temporal kernel for modeling time series data, and have been successfully applied on time series prediction tasks. Recently, ESNs have been applied to time series classification (TSC) tasks. However, previous ESN-based classifiers involve either training the model by predicting the next item of a sequence, or predicting the class label at each time step. The former is essentially a predictive model adapted from time series prediction work, rather than a model designed specifically for the classification task. The latter approach only considers local patterns at each time step and then averages over the classifications. Hence, rather than selecting the most discriminating sections of the time series, this approach will incorporate non-discriminative information into the classification, reducing accuracy. In this paper, we propose a novel end-to-end framework called the Echo Memory Network (EMN) in which the time series dynamics and multi-scale discriminative features are efficiently learned from an unrolled echo memory using multi-scale convolution and max-over-time pooling. First, the time series data are projected into the high dimensional nonlinear space of the reservoir and the echo states are collected into the echo memory matrix, followed by a single multi-scale convolutional layer to extract multi-scale features from the echo memory matrix. Max-over-time pooling is used to maintain temporal invariance and select the most important local patterns. Finally, a fully-connected hidden layer feeds into a softmax layer for classification. This architecture is applied to both time series classification and human action recognition datasets. For the human action recognition datasets, we divide the action data into five different components of the human body, and propose two spatial information fusion strategies to integrate the spatial information over them. With one training-free recurrent layer and only one layer of convolution, the EMN is a very efficient end-to-end model, and ranks first in overall classification ability on 55 TSC benchmark datasets and four 3D skeleton-based human action recognition tasks.
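The multi-scale convolution and max-over-time pooling step described in the abstract can be illustrated in a few lines of numpy. This is a generic sketch of the idea (pooling several filter widths over an echo-state matrix into a fixed-length feature vector), not the EMN code; the sizes and filter widths are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_valid(states, kernel):
    """Valid 1-D convolution of an echo-state matrix (T, D) with a kernel
    (k, D), producing one feature value per time position."""
    k = kernel.shape[0]
    T = states.shape[0]
    return np.array([np.sum(states[t:t + k] * kernel) for t in range(T - k + 1)])

T, D = 50, 8                            # time steps, reservoir size
states = rng.standard_normal((T, D))    # stand-in for collected echo states

features = []
for k in (2, 3, 5):                     # several filter widths = several time scales
    kernel = rng.standard_normal((k, D))
    fmap = conv1d_valid(states, kernel)
    features.append(fmap.max())         # max-over-time keeps the strongest response
feature_vec = np.array(features)        # fixed length regardless of T
```

Because only the maximum response per filter survives, the feature vector depends on where the most discriminative local pattern occurs, not on the sequence length, which is what allows a single softmax layer to classify variable-length series.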
Affiliation(s)
- Qianli Ma, School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Wanqing Zhuang, School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Lifeng Shen, School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Garrison W Cottrell, Department of Computer Science and Engineering, University of California, San Diego, USA.
11. Shi G, Zhao B, Li C, Wei Q, Liu D. An echo state network based approach to room classification of office buildings. Neurocomputing 2019. DOI: 10.1016/j.neucom.2018.12.033
13. Adaptive lasso echo state network based on modified Bayesian information criterion for nonlinear system modeling. Neural Comput Appl 2018. DOI: 10.1007/s00521-018-3420-6
16. Qiao J, Li F, Han H, Li W. Growing Echo-State Network With Multiple Subreservoirs. IEEE Trans Neural Netw Learn Syst 2017; 28:391-404. PMID: 26800553. DOI: 10.1109/tnnls.2016.2514275
Abstract
An echo-state network (ESN) is an effective alternative to gradient methods for training recurrent neural networks. However, it is difficult to determine the structure (mainly the reservoir) of an ESN to match a given application. In this paper, a growing ESN (GESN) is proposed to design the size and topology of the reservoir automatically. First, the GESN makes use of block matrix theory to add hidden units to the existing reservoir group by group, which leads to a GESN with multiple subreservoirs. Second, every subreservoir weight matrix in the GESN is created with a predefined singular value spectrum, which ensures the echo-state property of the ESN without posterior scaling of the weights. Third, during the growth of the network, the output weights of the GESN are updated in an incremental way. Moreover, the convergence of the GESN is proved. Finally, the GESN is tested on some artificial and real-world time-series benchmarks. Simulation results show that the proposed GESN has better prediction performance and faster learning speed than some ESNs with fixed sizes and topologies.
18. Duan H, Wang X. Echo State Networks With Orthogonal Pigeon-Inspired Optimization for Image Restoration. IEEE Trans Neural Netw Learn Syst 2016; 27:2413-2425. PMID: 26529785. DOI: 10.1109/tnnls.2015.2479117
Abstract
In this paper, a neurodynamic approach for image restoration is proposed. Image restoration is a process of estimating original images from blurred and/or noisy images. It can be considered as a mapping problem that can be solved by neural networks. Echo state network (ESN) is a recurrent neural network with a simplified training process, which is adopted to estimate the original images in this paper. The parameter selection is important to the performance of the ESN. Thus, the pigeon-inspired optimization (PIO) approach is employed in the training process of the ESN to obtain desired parameters. Moreover, the orthogonal design strategy is utilized in the initialization of PIO to improve the diversity of individuals. The proposed method is tested on several deteriorated images with different sorts and levels of blur and/or noise. Results obtained by the improved ESN are compared with those obtained by several state-of-the-art methods. It is verified experimentally that better image restorations can be obtained for different blurred and/or noisy instances with the proposed neurodynamic method. In addition, the performance of the orthogonal PIO algorithm is compared with that of several existing bioinspired optimization algorithms to confirm its superiority.
19. Wang XZ, Wei Y, Stanimirović PS. Complex Neural Network Models for Time-Varying Drazin Inverse. Neural Comput 2016; 28:2790-2824. PMID: 27391685. DOI: 10.1162/neco_a_00866
Abstract
Two complex Zhang neural network (ZNN) models for computing the Drazin inverse of an arbitrary time-varying complex square matrix are presented. The design of these neural networks is based on corresponding matrix-valued error functions arising from the limit representations of the Drazin inverse. Two types of activation functions, appropriate for handling complex matrices, are exploited to develop each of these networks. Theoretical results of convergence analysis are presented to show the desirable properties of the proposed complex-valued ZNN models. Numerical results further demonstrate the effectiveness of the proposed models.
Affiliation(s)
- Xue-Zhong Wang, School of Mathematical Sciences, Fudan University, Shanghai, 200433, P.R.C.
- Yimin Wei, School of Mathematical Sciences and Key Laboratory of Mathematics for Nonlinear Sciences, Fudan University, Shanghai, 200433, P.R.C.
20. Xu D, Zhang H, Mandic DP. Convergence analysis of an augmented algorithm for fully complex-valued neural networks. Neural Netw 2015; 69:44-50. DOI: 10.1016/j.neunet.2015.05.003
21. Wang H, Yan X. Optimizing the echo state network with a binary particle swarm optimization algorithm. Knowl Based Syst 2015. DOI: 10.1016/j.knosys.2015.06.003
22. Xia Y, Jahanchahi C, Mandic DP. Quaternion-valued echo state networks. IEEE Trans Neural Netw Learn Syst 2015; 26:663-673. PMID: 25794374. DOI: 10.1109/tnnls.2014.2320715
Abstract
Quaternion-valued echo state networks (QESNs) are introduced to cater for 3-D and 4-D processes, such as those observed in the context of renewable energy (3-D wind modeling) and human centered computing (3-D inertial body sensors). The introduction of QESNs is made possible by the recent emergence of quaternion nonlinear activation functions with local analytic properties, required by nonlinear gradient descent training algorithms. To make QESNs second-order optimal for the generality of quaternion signals (both circular and noncircular), we employ augmented quaternion statistics to introduce widely linear QESNs. To that end, the standard widely linear model is modified so as to suit the properties of the dynamical reservoir, typically realized by recurrent neural networks. This allows for a full exploitation of second-order information in the data, contained both in the covariance and pseudocovariances, and a rigorous account of second-order noncircularity (improperness) and the corresponding power mismatch and coupling between the data components. Simulations in the prediction setting on both benchmark circular and noncircular signals and on noncircular real-world 3-D body motion data support the analysis.
23. Adaptive identifier for uncertain complex-valued discrete-time nonlinear systems based on recurrent neural networks. Neural Process Lett 2015. DOI: 10.1007/s11063-015-9407-8
24. Park C, Took CC, Mandic DP. Augmented Complex Common Spatial Patterns for Classification of Noncircular EEG From Motor Imagery Tasks. IEEE Trans Neural Syst Rehabil Eng 2014; 22:1-10. DOI: 10.1109/tnsre.2013.2294903
25. Dini DH, Mandic DP. Class of widely linear complex Kalman filters. IEEE Trans Neural Netw Learn Syst 2012; 23:775-786. PMID: 24806126. DOI: 10.1109/tnnls.2012.2189893
Abstract
Recently, a class of widely linear (augmented) complex-valued Kalman filters (KFs) that make use of augmented complex statistics has been proposed for sequential state-space estimation of the generality of complex signals. This was achieved in the context of neural network training, and has allowed for a unified treatment of both second-order circular and noncircular signals, that is, both those with rotation-invariant and those with rotation-dependent distributions. In this paper, we revisit the augmented complex KF, augmented complex extended KF, and augmented complex unscented KF in a more general context, and analyze their performance for different degrees of noncircularity of the input and of the state and measurement noises. For rigor, a theoretical bound on the performance advantage of widely linear KFs over their strictly linear counterparts is provided. The analysis also addresses the duality with bivariate real-valued KFs, together with several issues of implementation. Simulations using both synthetic and real-world proper and improper signals support the analysis.
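The distinction this line of work builds on, covariance versus pseudocovariance, is easy to demonstrate numerically: a proper (circular) signal has vanishing pseudocovariance, while an improper one does not, and only a widely linear model can exploit the latter. A minimal numpy sketch with purely illustrative signals (not the filters from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Proper (circular) signal: independent real/imag parts of equal power.
proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
# Improper (noncircular) signal: unequal real/imag powers.
improper = 2.0 * rng.standard_normal(n) + 1j * 0.5 * rng.standard_normal(n)

def cov(z):
    return np.mean(z * np.conj(z))   # covariance E[z z*]

def pcov(z):
    return np.mean(z * z)            # pseudocovariance E[z z]

# A widely linear estimate y = h z + g z* uses both statistics; for proper
# signals pcov(z) is (asymptotically) zero, so the strictly linear model
# y = h z loses nothing, while for improper signals it discards information.
```

For the improper signal above the theoretical pseudocovariance is 4 - 0.25 = 3.75, which is the second-order structure a strictly linear KF cannot represent.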
26. Hirose A, Yoshida S. Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. IEEE Trans Neural Netw Learn Syst 2012; 23:541-551. PMID: 24805038. DOI: 10.1109/tnnls.2012.2183613
Abstract
Applications of complex-valued neural networks (CVNNs) have expanded widely in recent years, in particular in radar and coherent imaging systems. In general, the most important merit of neural networks lies in their generalization ability. This paper compares the generalization characteristics of complex-valued and real-valued feedforward neural networks in terms of the coherence of the signals to be dealt with. We assume a task of function approximation, such as interpolation of temporal signals. Simulation and real-world experiments demonstrate that CVNNs with amplitude-phase-type activation functions show smaller generalization error than real-valued networks, such as bivariate and dual-univariate real-valued neural networks. Based on the results, we discuss how the generalization characteristics are influenced by the coherence of the signals, depending on the degree of freedom in the learning and on the circularity in the neural dynamics.
27. Zhang B, Miller DJ, Wang Y. Nonlinear system modeling with random matrices: echo state networks revisited. IEEE Trans Neural Netw Learn Syst 2012; 23:175-182. PMID: 24808467. PMCID: PMC4107715. DOI: 10.1109/tnnls.2011.2178562
Abstract
Echo state networks (ESNs) are a novel form of recurrent neural networks (RNNs) that provide an efficient and powerful computational model approximating nonlinear dynamical systems. A unique feature of an ESN is that a large number of neurons (the "reservoir") are used, whose synaptic connections are generated randomly, with only the connections from the reservoir to the output modified by learning. Why a large randomly generated fixed RNN gives such excellent performance in approximating nonlinear systems is still not well understood. In this brief, we apply random matrix theory to examine the properties of random reservoirs in ESNs under different topologies (sparse or fully connected) and connection weights (Bernoulli or Gaussian). We quantify the asymptotic gap between the scaling factor bounds for the necessary and sufficient conditions previously proposed for the echo state property. We then show that the state transition mapping is contractive with high probability when only the necessary condition is satisfied, which corroborates and thus analytically explains the observation that in practice one obtains echo states when the spectral radius of the reservoir weight matrix is smaller than 1.
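The closing observation, that echo states arise in practice when the reservoir's spectral radius is below 1, can be checked empirically: run two copies of the same reservoir from different initial states under a shared input and watch whether their trajectories converge. A small numpy sketch under assumed sizes and scalings (an illustration of the phenomenon, not the brief's random-matrix analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
W_raw = rng.standard_normal((n, n)) / np.sqrt(n)
rho = max(abs(np.linalg.eigvals(W_raw)))   # spectral radius of the raw matrix

def state_gap(scale, steps=200):
    """Distance between two reservoir trajectories started from different
    initial states but driven by the same input sequence; decay toward zero
    means the state forgets its initial condition (echo states)."""
    W = (scale / rho) * W_raw              # rescale to the desired spectral radius
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    for _ in range(steps):
        u = 0.1 * rng.standard_normal(n)   # shared input drive
        x = np.tanh(W @ x + u)
        y = np.tanh(W @ y + u)
    return float(np.linalg.norm(x - y))

gap_sub = state_gap(0.9)    # spectral radius below 1: trajectories merge
gap_super = state_gap(1.5)  # spectral radius above 1: typically they do not
```

Note that a spectral radius below 1 is only the necessary condition (the sufficient one bounds the largest singular value), which is exactly the gap the brief analyzes; the experiment shows the contraction that holds with high probability anyway.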
Affiliation(s)
- Bai Zhang, Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA
- David J. Miller, Department of Electrical Engineering, Pennsylvania State University, University Park, PA 16802, USA
- Yue Wang, Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA