1
Ebato Y, Nobukawa S, Sakemi Y, Nishimura H, Kanamaru T, Sviridova N, Aihara K. Impact of time-history terms on reservoir dynamics and prediction accuracy in echo state networks. Sci Rep 2024; 14:8631. [PMID: 38622178] [PMCID: PMC11018609] [DOI: 10.1038/s41598-024-59143-y]
Abstract
The echo state network (ESN) is an excellent machine learning model for processing time-series data. By utilising the response of a recurrent neural network, called a reservoir, to input signals, this model achieves high training efficiency. Introducing time-history terms into the neuron model of the reservoir is known to improve the time-series prediction performance of ESNs, yet the reasons for this improvement have not been quantitatively explained in terms of reservoir dynamics. Therefore, we hypothesised that the performance enhancement brought about by time-history terms could be explained by delay capacity, a recently proposed metric for assessing the memory performance of reservoirs. To test this hypothesis, we conducted comparative experiments using ESN models with time-history terms, namely leaky-integrator ESNs (LI-ESN) and chaotic echo state networks (ChESN). The results suggest that, compared with ESNs without time-history terms, the reservoir dynamics of LI-ESN and ChESN can maintain diversity and stability while possessing higher delay capacity, leading to their superior performance. Explaining ESN performance through dynamical metrics is crucial for evaluating, from a general perspective, the numerous ESN architectures recently proposed and for developing more sophisticated architectures; this study contributes to such efforts.
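The leaky-integrator update that adds a time-history term to the reservoir neurons can be sketched as follows (a minimal toy illustration, not the authors' implementation; all sizes, scalings, and the leak rate `alpha` are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 200                  # reservoir size, sequence length
alpha = 0.3                      # leak rate: weight given to the new activation
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9
W_in = rng.uniform(-1.0, 1.0, size=N)

u = np.sin(0.2 * np.arange(T))   # toy input signal
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    # a plain ESN would use x = tanh(W @ x + W_in * u[t]); the (1 - alpha) * x
    # term is the time-history term of the leaky integrator
    x = (1 - alpha) * x + alpha * np.tanh(W @ x + W_in * u[t])
    states[t] = x
```

With `alpha = 1` the update reduces to the standard ESN without a time-history term, which makes the comparison in the abstract easy to reproduce.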
Affiliation(s)
- Yudai Ebato
- Graduate School of Information and Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan.
- Sou Nobukawa
- Graduate School of Information and Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Department of Preventive Intervention for Psychiatric Disorders, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo, 187-8551, Japan
- Research Center for Mathematical Engineering, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Yusuke Sakemi
- Research Center for Mathematical Engineering, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
- Haruhiko Nishimura
- Faculty of Informatics, Yamato University, 2-5-1 Katayama-cho, Suita, Osaka, 564-0082, Japan
- Takashi Kanamaru
- Department of Mechanical Science and Engineering, School of Advanced Engineering, Kogakuin University, 2665-1 Nakano-machi, Hachioji, Tokyo, 192-0015, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
- Nina Sviridova
- Department of Intelligent Systems, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya, Tokyo, 158-8557, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
- Kazuyuki Aihara
- Research Center for Mathematical Engineering, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8654, Japan
2
Hart JD. Attractor reconstruction with reservoir computers: The effect of the reservoir's conditional Lyapunov exponents on faithful attractor reconstruction. Chaos 2024; 34:043123. [PMID: 38579149] [DOI: 10.1063/5.0196257]
Abstract
Reservoir computing is a machine learning framework that has been shown to be able to replicate the chaotic attractor, including the fractal dimension and the entire Lyapunov spectrum, of the dynamical system on which it is trained. We quantitatively relate the generalized synchronization dynamics of a driven reservoir during the training stage to the performance of the trained reservoir computer at the attractor reconstruction task. We show that, in order to obtain successful attractor reconstruction and Lyapunov spectrum estimation, the maximal conditional Lyapunov exponent of the driven reservoir must be significantly more negative than the most negative Lyapunov exponent of the target system. We also find that the maximal conditional Lyapunov exponent of the reservoir depends strongly on the spectral radius of the reservoir adjacency matrix; therefore, for attractor reconstruction and Lyapunov spectrum estimation, reservoir computers with a small spectral radius generally perform better. Our arguments are supported by numerical examples on well-known chaotic systems.
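The sign of the maximal conditional Lyapunov exponent can be probed numerically with a replica test: drive two copies of the same reservoir with an identical input from different initial states and fit the slope of the log-distance between them. A sketch under our own toy setup (hypothetical sizes and drive, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 80, 60
W = rng.normal(size=(N, N))
W *= 0.5 / max(abs(np.linalg.eigvals(W)))  # small spectral radius
W_in = rng.uniform(-1.0, 1.0, size=N)
u = rng.normal(size=T)           # arbitrary drive signal

xa = rng.normal(size=N)          # two replicas, different initial states
xb = rng.normal(size=N)
dist = []
for t in range(T):
    xa = np.tanh(W @ xa + W_in * u[t])
    xb = np.tanh(W @ xb + W_in * u[t])
    dist.append(np.linalg.norm(xa - xb))

# slope of the log-distance gives a rough estimate of the maximal conditional
# Lyapunov exponent; a negative value means the replicas synchronize
lam = np.polyfit(np.arange(T), np.log(np.array(dist) + 1e-300), 1)[0]
```

Repeating this for a range of spectral radii would trace out the dependence the abstract describes; the estimate here is only a finite-time approximation.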
Affiliation(s)
- Joseph D Hart
- U.S. Naval Research Laboratory, Code 5675, Washington, DC 20375, USA
3
Harding S, Leishman Q, Lunceford W, Passey DJ, Pool T, Webb B. Global forecasts in reservoir computers. Chaos 2024; 34:023136. [PMID: 38407397] [DOI: 10.1063/5.0181694]
Abstract
A reservoir computer is a machine learning model that can be used to predict the future state(s) of time-dependent processes, e.g., dynamical systems. In practice, data in the form of an input-signal are fed into the reservoir. The trained reservoir is then used to predict the future state of this signal. We develop a new method for predicting not only the future dynamics of the input-signal but also the future dynamics of the system starting at an arbitrary initial condition. The systems we consider are the Lorenz, Rössler, and Thomas systems restricted to their attractors. This method, which creates a global forecast, still uses only a single input-signal to train the reservoir but breaks the signal into many smaller windowed signals. We examine how well this windowed method is able to forecast the dynamics of a system starting at an arbitrary point on the system's attractor and compare this to the standard method without windows. We find that the standard method has almost no ability to forecast anything but the original input-signal, while the windowed method can capture the dynamics starting at most points on an attractor with significant accuracy.
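The windowing step described above amounts to slicing one long training signal into many shorter, overlapping segments; a minimal sketch (function name and parameters are our own, not the authors'):

```python
import numpy as np

def window_signal(signal, width, stride):
    """Break one long input-signal into many shorter windowed signals."""
    return np.array([signal[i : i + width]
                     for i in range(0, len(signal) - width + 1, stride)])

s = np.arange(10.0)              # stand-in for a trajectory on an attractor
w = window_signal(s, width=4, stride=3)
```

Each row of `w` is then treated as a separate short training signal, which is what lets the trained reservoir start a forecast from many different points on the attractor.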
Affiliation(s)
- S Harding
- Mathematics Department, Brigham Young University, Provo, Utah 84602, USA
- Q Leishman
- Mathematics Department, Brigham Young University, Provo, Utah 84602, USA
- W Lunceford
- Mathematics Department, Brigham Young University, Provo, Utah 84602, USA
- D J Passey
- Mathematics Department, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- T Pool
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15289, USA
- B Webb
- Mathematics Department, Brigham Young University, Provo, Utah 84602, USA
4
Wang X, Cichos F. Harnessing synthetic active particles for physical reservoir computing. Nat Commun 2024; 15:774. [PMID: 38287028] [PMCID: PMC10825170] [DOI: 10.1038/s41467-024-44856-5]
Abstract
The processing of information is an indispensable property of living systems, realized by networks of active processes with enormous complexity. They have inspired many variants of modern machine learning, one of them being reservoir computing, in which stimulating a network of nodes with fading memory enables computations and complex predictions. Reservoirs are implemented on computer hardware, but also on unconventional physical substrates such as mechanical oscillators, spins, or bacteria, often summarized as physical reservoir computing. Here we demonstrate physical reservoir computing with a synthetic active microparticle system that self-organizes from an active and a passive component into inherently noisy nonlinear dynamical units. The self-organization and dynamical response of the unit result from the delayed propulsion of the microswimmer toward a passive target. A reservoir of such units with self-coupling via the delayed response can perform predictive tasks despite the strong noise resulting from the Brownian motion of the microswimmers. To achieve efficient noise suppression, we introduce a special architecture that uses historical reservoir states for the output. Our results pave the way for the study of information processing in synthetic self-organized active particle systems.
Affiliation(s)
- Xiangzun Wang
- Peter Debye Institute for Soft Matter Physics, Leipzig University, 04103, Leipzig, Germany
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, 04105, Leipzig, Germany
- Frank Cichos
- Peter Debye Institute for Soft Matter Physics, Leipzig University, 04103, Leipzig, Germany.
5
Goldmann M, Fischer I, Mirasso CR, Soriano MC. Exploiting oscillatory dynamics of delay systems for reservoir computing. Chaos 2023; 33:093139. [PMID: 37748487] [DOI: 10.1063/5.0156494]
Abstract
Nonlinear dynamical systems exhibiting inherent memory can process temporal information by exploiting their responses to input drives. Reservoir computing is a prominent approach to leverage this ability for time-series forecasting. The computational capabilities of analog computing systems often depend on both the dynamical regime of the system and the input drive. Most studies have focused on systems exhibiting a stable fixed-point solution in the absence of input. Here, we go beyond that limitation, investigating the computational capabilities of a paradigmatic delay system in three different dynamical regimes. The system we chose has an Ikeda-type nonlinearity and exhibits fixed-point, bistable, and limit-cycle dynamics in the absence of input. When driving the system, new input-driven dynamics emerge from the autonomous ones, featuring characteristic properties. Here, we show that it is feasible to attain consistent responses across all three regimes, which is an essential prerequisite for the successful execution of the tasks. Furthermore, we demonstrate that we can exploit all three regimes in two time-series forecasting tasks, showcasing the versatility of this paradigmatic delay system in an analog computing context. In all tasks, the lowest prediction errors were obtained in the regime that exhibits limit-cycle dynamics in the undriven reservoir. To gain further insights, we analyzed the diverse time-distributed node responses generated in the three regimes of the undriven system. An increase in the effective dimensionality of the reservoir response is shown to affect the prediction error, as does fine-tuning of the distribution of nonlinear responses. Finally, we demonstrate that a trade-off between prediction accuracy and computational speed is possible in our continuous delay systems. Our results not only provide valuable insights into the computational capabilities of complex dynamical systems but also open a new perspective on enhancing the potential of analog computing systems implemented on various hardware platforms.
Affiliation(s)
- Mirko Goldmann
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
- Ingo Fischer
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
- Claudio R Mirasso
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
- Miguel C Soriano
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
6
Deep echo state networks in data marketplaces. Machine Learning with Applications 2023. [DOI: 10.1016/j.mlwa.2023.100456]
7
Carroll TL, Hart JD. Time shifts to reduce the size of reservoir computers. Chaos 2022; 32:083122. [PMID: 36049918] [DOI: 10.1063/5.0097850]
Abstract
A reservoir computer is a type of dynamical system arranged to do computation. Typically, a reservoir computer is constructed by connecting a large number of nonlinear nodes in a network that includes recurrent connections. In order to achieve accurate results, the reservoir usually contains hundreds to thousands of nodes. This high dimensionality makes it difficult to analyze the reservoir computer using tools from the dynamical systems theory. Additionally, the need to create and connect large numbers of nonlinear nodes makes it difficult to design and build analog reservoir computers that can be faster and consume less power than digital reservoir computers. We demonstrate here that a reservoir computer may be divided into two parts: a small set of nonlinear nodes (the reservoir) and a separate set of time-shifted reservoir output signals. The time-shifted output signals serve to increase the rank and memory of the reservoir computer, and the set of nonlinear nodes may create an embedding of the input dynamical system. We use this time-shifting technique to obtain excellent performance from an opto-electronic delay-based reservoir computer with only a small number of virtual nodes. Because only a few nonlinear nodes are required, construction of a reservoir computer becomes much easier, and delay-based reservoir computers can operate at much higher speeds.
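Augmenting a small reservoir with time-shifted copies of its own output signals can be sketched as follows (a hypothetical helper with our own names, not the authors' code; a shift of 0 keeps the unshifted states):

```python
import numpy as np

def add_time_shifts(states, shifts):
    """Augment reservoir output with time-shifted copies of itself.

    states : (T, N) array of reservoir output signals
    shifts : non-negative integer delays (0 = unshifted)
    Returns a (T - max(shifts)) x (N * len(shifts)) feature matrix.
    """
    m = max(shifts)
    cols = [states[m - s : len(states) - s] for s in shifts]
    return np.hstack(cols)

states = np.arange(12.0).reshape(6, 2)   # tiny stand-in for reservoir output
X = add_time_shifts(states, [0, 1, 2])
```

The augmented matrix `X` then feeds the usual linear readout; the shifted columns are what can raise the rank and effective memory of the feature set without adding nonlinear nodes.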
Affiliation(s)
- Joseph D Hart
- U.S. Naval Research Laboratory, Washington D.C. 20375, USA
8
Roy M, Senapati A, Poria S, Mishra A, Hens C. Role of assortativity in predicting burst synchronization using echo state network. Phys Rev E 2022; 105:064205. [PMID: 35854538] [DOI: 10.1103/physreve.105.064205]
Abstract
In this study, we use a reservoir computing based echo state network (ESN) to predict the collective burst synchronization of neurons. Specifically, we investigate the ability of the ESN to predict the burst synchronization of an ensemble of Rulkov neurons placed on a scale-free network. We show that a limited number of nodal dynamics used as input to the machine can capture the real trend of burst synchronization in this network. Further, we investigate the proper selection of nodal inputs in degree-degree (positively and negatively) correlated networks. We show that for a disassortative network, the degree-based selection of input nodes plays no significant role in the machine's prediction. However, in the case of an assortative network, training the machine with the information (i.e., time series) of low-degree nodes gives better results in predicting the burst synchronization. The results are found to be consistent with an investigation carried out with a continuous-time Hindmarsh-Rose neuron model. Furthermore, the role of ESN hyperparameters such as the spectral radius and the leaking parameter in the prediction process is examined. Finally, we explain the underlying mechanism responsible for the observed differences in prediction in degree-correlated networks.
Affiliation(s)
- Mousumi Roy
- Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata 700009, India
- Abhishek Senapati
- Center for Advanced Systems Understanding (CASUS), 02826 Görlitz, Germany
- Swarup Poria
- Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata 700009, India
- Arindam Mishra
- Division of Dynamics, Lodz University of Technology, Stefanowskiego 1/15, 90924 Lodz, Poland
- Chittaranjan Hens
- Physics and Applied Mathematics Unit, Indian Statistical Institute, Kolkata 700108, India
9
Jüngling T, Lymburn T, Small M. Consistency Hierarchy of Reservoir Computers. IEEE Trans Neural Netw Learn Syst 2022; 33:2586-2595. [PMID: 34695007] [DOI: 10.1109/tnnls.2021.3119548]
Abstract
We study the propagation and distribution of information-carrying signals injected in dynamical systems serving as reservoir computers. Through different combinations of repeated input signals, a multivariate correlation analysis reveals measures known as the consistency spectrum and consistency capacity. These are high-dimensional portraits of the nonlinear functional dependence between input and reservoir state. For multiple inputs, a hierarchy of capacities characterizes the interference of signals from each source. For an individual input, the time-resolved capacities form a profile of the reservoir's nonlinear fading memory. We illustrate this methodology for a range of echo state networks.
10
Thorne B, Jüngling T, Small M, Corrêa D, Zaitouny A. Reservoir time series analysis: Using the response of complex dynamical systems as a universal indicator of change. Chaos 2022; 32:033109. [PMID: 35364819] [DOI: 10.1063/5.0082122]
Abstract
We present the idea of reservoir time series analysis (RTSA), a method by which the state space representation generated by a reservoir computing (RC) model can be used for time series analysis. We discuss the motivation for this with reference to the characteristics of RC and present three ad hoc methods for generating representative features from the reservoir state space. We then develop and implement a hypothesis test to assess the capacity of these features to distinguish signals from systems with varying parameters. In comparison to a number of benchmark approaches (statistical, Fourier, phase space, and recurrence analysis), we are able to show significant, generalized accuracy across the proposed RTSA features that surpasses the benchmark methods. Finally, we briefly present an application for bearing fault distinction to motivate the use of RTSA in application.
Affiliation(s)
- Braden Thorne
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Jüngling
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Débora Corrêa
- ARC Centre for Transforming Maintenance Through Data Science, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Ayham Zaitouny
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
11
Carroll TL. Optimizing memory in reservoir computers. Chaos 2022; 32:023123. [PMID: 35232031] [DOI: 10.1063/5.0078151]
Abstract
A reservoir computer is a way of using a high dimensional dynamical system for computation. One way to construct a reservoir computer is by connecting a set of nonlinear nodes into a network. Because the network creates feedback between nodes, the reservoir computer has memory. If the reservoir computer is to respond to an input signal in a consistent way (a necessary condition for computation), the memory must be fading; that is, the influence of the initial conditions fades over time. How long this memory lasts is important for determining how well the reservoir computer can solve a particular problem. In this paper, I describe ways to vary the length of the fading memory in reservoir computers. Tuning the memory can be important to achieve optimal results in some problems; too much or too little memory degrades the accuracy of the computation.
Affiliation(s)
- T L Carroll
- US Naval Research Lab, Washington DC 20375, USA
12
Morales GB, Mirasso CR, Soriano MC. Unveiling the role of plasticity rules in reservoir computing. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.05.127]
13
Verzelli P, Alippi C, Livi L. Learn to synchronize, synchronize to learn. Chaos 2021; 31:083119. [PMID: 34470256] [DOI: 10.1063/5.0056425]
Abstract
In recent years, the artificial intelligence community has seen a continuous interest in research aimed at investigating dynamical aspects of both training procedures and machine learning models. Of particular interest among recurrent neural networks, we have the Reservoir Computing (RC) paradigm, characterized by conceptual simplicity and a fast training scheme. Yet, the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by Generalized Synchronization (GS) when training an RC to solve a generic task. In particular, we show how GS allows the reservoir to correctly encode the system generating the input signal into its dynamics. We also discuss necessary and sufficient conditions for the learning to be feasible in this approach. Moreover, we explore the role that ergodicity plays in this process, showing how its presence allows the learning outcome to apply to multiple input trajectories. Finally, we show that satisfaction of GS can be measured by means of the mutual false nearest neighbors index, which makes the theoretical derivations readily applicable for practitioners.
Affiliation(s)
- Pietro Verzelli
- Faculty of Informatics, Università della Svizzera Italiana, Lugano 6900, Switzerland
- Cesare Alippi
- Faculty of Informatics, Università della Svizzera Italiana, Lugano 6900, Switzerland
- Lorenzo Livi
- Department of Computer Science and Mathematics, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada
14
15
Han M, Li W, Feng S, Qiu T, Chen CLP. Maximum Information Exploitation Using Broad Learning System for Large-Scale Chaotic Time-Series Prediction. IEEE Trans Neural Netw Learn Syst 2021; 32:2320-2329. [PMID: 32697722] [DOI: 10.1109/tnnls.2020.3004253]
Abstract
How to make full use of the evolution information of chaotic systems for time-series prediction is a difficult issue in dynamical system modeling. In this article, we propose a maximum information exploitation broad learning system (MIE-BLS) for extreme information utilization in large-scale chaotic time-series modeling. An improved leaky integrator dynamical reservoir is introduced in order to capture the linear information of chaotic systems effectively. It can not only capture the information of the current state but also achieve a compromise with the historical states of the dynamical system. Furthermore, the feature is mapped to the enhancement layer by nonlinear random mapping to exploit nonlinear information. The cascading mechanism promotes information propagation and achieves feature reactivation in dynamical modeling. Discussions of maximum information exploitation and comparisons with ResNet, DenseNet, and HighwayNet are also presented. Simulation results on four large-scale data sets illustrate that MIE-BLS achieves better information-exploitation performance in large-scale dynamical system modeling.
16
Thorne B, Jüngling T, Small M, Hodkiewicz M. Parameter extraction with reservoir computing: Nonlinear time series analysis and application to industrial maintenance. Chaos 2021; 31:033122. [PMID: 33810743] [DOI: 10.1063/5.0039193]
Abstract
We study the task of determining parameters of dynamical systems from their time series using variations of reservoir computing. Averages of reservoir activations yield a static set of random features that allows us to separate different parameter values. We study such random feature models in the time and frequency domain. For the Lorenz and Rössler systems throughout stable and chaotic regimes, we achieve accurate and robust parameter extraction. For vibration data of centrifugal pumps, we find a significant ability to recover the operating regime. While the time domain models achieve higher performance for the numerical systems, the frequency domain models are superior in the application context.
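The static random-feature idea, i.e. time-averaged reservoir activations serving as a fingerprint of the driving signal, can be sketched as follows (toy sinusoids stand in for signals from two different parameter values; all names and scalings are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 60
W = rng.normal(size=(N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))  # echo-state-style scaling
W_in = rng.uniform(-1.0, 1.0, size=N)

def avg_features(u):
    """Drive the reservoir with signal u; return time-averaged activations."""
    x = np.zeros(N)
    acc = np.zeros(N)
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        acc += x
    return acc / len(u)

t = np.arange(500)
f_slow = avg_features(np.sin(0.05 * t))  # stand-in for one parameter value
f_fast = avg_features(np.sin(0.50 * t))  # stand-in for another
gap = np.linalg.norm(f_slow - f_fast)    # nonzero gap separates the two
```

Because the reservoir is fixed, the averaged activations form a deterministic static feature vector per signal, which a simple classifier or regressor can then map to the underlying parameter.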
Affiliation(s)
- Braden Thorne
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Jüngling
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Melinda Hodkiewicz
- ARC Centre for Transforming Maintenance Through Data Science, The University of Western Australia, Crawley, Western Australia 6009, Australia
17
Lymburn T, Algar SD, Small M, Jüngling T. Reservoir computing with swarms. Chaos 2021; 31:033121. [PMID: 33810760] [DOI: 10.1063/5.0039745]
Abstract
We study swarms as dynamical systems for reservoir computing (RC). By example of a modified Reynolds boids model, the specific symmetries and dynamical properties of a swarm are explored with respect to a nonlinear time-series prediction task. Specifically, we seek to extract meaningful information about a predator-like driving signal from the swarm's response to that signal. We find that the naïve implementation of a swarm for computation is very inefficient, as the permutation symmetry of the individual agents reduces the computational capacity. To circumvent this, we distinguish between the computational substrate of the swarm and a separate observation layer, in which the swarm's response is measured for use in the task. We demonstrate the implementation of a radial basis-localized observation layer for this task. The behavior of the swarm is characterized by order parameters and measures of consistency, and is related to the performance of the swarm as a reservoir. The relationship between RC performance and swarm behavior demonstrates that optimal computational properties are obtained near a phase-transition regime.
Affiliation(s)
- Thomas Lymburn
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Shannon D Algar
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Jüngling
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
18
Han X, Zhao Y, Small M. Revisiting the memory capacity in reservoir computing of directed acyclic network. Chaos 2021; 31:033106. [PMID: 33810761] [DOI: 10.1063/5.0040251]
Abstract
Reservoir computing (RC) is an attractive area of research by virtue of its potential for hardware implementation and low training cost. An intriguing research direction in this field is to interpret the underlying dynamics of an RC model by analyzing its short-term memory property, which can be quantified by a global index: memory capacity (MC). In this paper, the global MC of an RC whose reservoir network is specified as a directed acyclic network (DAN) is examined, and we first show that its global MC is theoretically bounded by the length of the longest path of the reservoir DAN. Since the global MC is technically influenced by the model hyperparameters, the dependency of the MC on the hyperparameters of this RC is then explored in detail. We further employ an improved network embedding method (struc2vec) to mine the underlying memory community in the reservoir DAN, which can be regarded as a cluster of reservoir nodes with the same memory profile. Experimental results demonstrate that such a memory community structure provides a concrete interpretation of the global MC of this RC. Finally, the clustered RC is proposed by exploiting the detected memory community structure of the DAN; its prediction performance is verified to be enhanced, at lower training cost, compared with other RC models on several chaotic time series benchmarks.
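Memory capacity itself is commonly estimated by training one linear readout per delay and summing the squared correlations between each delayed input and its reconstruction; a sketch with our own toy reservoir (a generic random network, not the DAN or clustered RC of the paper; the delay range and regularization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, washout = 50, 2000, 100
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

u = rng.uniform(-1.0, 1.0, size=T)       # i.i.d. input, the usual MC probe
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

def mc_k(k, reg=1e-6):
    """Squared correlation of a ridge readout trained to recall u(t - k)."""
    Xs, ys = X[washout:], u[washout - k : T - k]
    w = np.linalg.solve(Xs.T @ Xs + reg * np.eye(N), Xs.T @ ys)
    return np.corrcoef(Xs @ w, ys)[0, 1] ** 2

MC = sum(mc_k(k) for k in range(1, 30))  # truncated sum over 29 delays
```

Each term lies in [0, 1] and decays with the delay k, so the truncated sum underestimates the theoretical MC only when the reservoir's memory outlasts the largest delay probed.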
Affiliation(s)
- Xinyu Han
- Harbin Institute of Technology, Shenzhen, 518055 Guangdong, China
- Yi Zhao
- Harbin Institute of Technology, Shenzhen, 518055 Guangdong, China
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia

19
Guo Y, Zhang H, Wang L, Fan H, Xiao J, Wang X. Transfer learning of chaotic systems. CHAOS (WOODBURY, N.Y.) 2021; 31:011104. [PMID: 33754764 DOI: 10.1063/5.0033870] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Accepted: 12/24/2020] [Indexed: 06/12/2023]
Abstract
Can a neural network trained by the time series of system A be used to predict the evolution of system B? This problem, known as transfer learning in a broad sense, is of great importance in machine learning and data mining, yet has not been addressed for chaotic systems. Here, we investigate transfer learning of chaotic systems from the perspective of synchronization-based state inference, in which a reservoir computer trained on chaotic system A is used to infer the unmeasured variables of chaotic system B, where A differs from B in either parameters or dynamics. It is found that if systems A and B differ only in parameters, the reservoir computer can be well synchronized to system B. However, if systems A and B differ in dynamics, the reservoir computer generally fails to synchronize with system B. Knowledge transfer along a chain of coupled reservoir computers is also studied, and it is found that, although the reservoir computers are trained on different systems, the unmeasured variables of the driving system can be successfully inferred by the remote reservoir computer. Finally, through an experiment with a chaotic pendulum, we demonstrate that the knowledge learned from the modeling system can be transferred and used to predict the evolution of the experimental system.
Affiliation(s)
- Yali Guo
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China
- Han Zhang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China
- Liang Wang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China
- Huawei Fan
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China
- Jinghua Xiao
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xingang Wang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710062, China

20
20
|
Carroll TL. Path length statistics in reservoir computers. CHAOS (WOODBURY, N.Y.) 2020; 30:083130. [PMID: 32872832 DOI: 10.1063/5.0014643] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Accepted: 08/03/2020] [Indexed: 06/11/2023]
Abstract
Because reservoir computers are high-dimensional dynamical systems, designing a good reservoir computer is difficult. In many cases, the designer must search a large nonlinear parameter space, and each step of the search requires simulating the full reservoir computer. In this work, I show that a simple statistic based on the mean path length between nodes in the reservoir computer is correlated with better reservoir computer performance. The statistic predicts the diversity of signals produced by the reservoir computer, as measured by the covariance matrix of the reservoir computer. This statistic by itself is not sufficient to predict reservoir computer performance because the reservoir computer must not only produce a diverse set of signals but also be well matched to the training signals. Nevertheless, this path-length statistic allows the designer to eliminate some network configurations from consideration without having to actually simulate the reservoir computer, reducing the complexity of the design process.
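The appeal of a path-length statistic is that it can be computed from the adjacency matrix alone, without simulating the reservoir. A sketch of one plausible version (mean shortest-path length over all reachable ordered node pairs, via breadth-first search; the exact statistic used in the paper may differ):

```python
import numpy as np
from collections import deque

def mean_path_length(adj):
    """Mean shortest-path length over all reachable ordered pairs of a
    directed network, given a boolean adjacency matrix adj[i, j]
    (True if there is an edge from node i to node j)."""
    n = adj.shape[0]
    total, count = 0, 0
    for src in range(n):
        dist = np.full(n, -1)
        dist[src] = 0
        queue = deque([src])
        while queue:                          # breadth-first search from src
            v = queue.popleft()
            for w in np.nonzero(adj[v])[0]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        reachable = dist > 0                  # excludes src itself
        total += dist[reachable].sum()
        count += reachable.sum()
    return total / count if count else float("inf")

rng = np.random.default_rng(2)
adj = rng.random((50, 50)) < 0.1              # sparse random directed network
np.fill_diagonal(adj, False)
mpl = mean_path_length(adj)
```

Screening candidate topologies by such a statistic costs O(nodes × edges) per network, versus a full reservoir simulation and training run per candidate.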
Affiliation(s)
- T L Carroll
- U.S. Naval Research Lab, Washington, DC 20375, USA

21
21
|
Stelzer F, Röhm A, Lüdge K, Yanchuk S. Performance boost of time-delay reservoir computing by non-resonant clock cycle. Neural Netw 2020; 124:158-169. [PMID: 32006747 DOI: 10.1016/j.neunet.2020.01.010] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 01/09/2020] [Accepted: 01/09/2020] [Indexed: 11/25/2022]
Abstract
The time-delay-based reservoir computing setup has seen tremendous success in both experiment and simulation. It allows for the construction of large neuromorphic computing systems with only a few components. However, until now the interplay of the different timescales has not been investigated thoroughly. In this manuscript, we investigate the effects of a mismatch between the time delay and the clock cycle for a general model. Typically, these two time scales are considered to be equal. Here we show that the case of equal or resonant time delay and clock cycle can be actively detrimental, leading to an increase of the approximation error of the reservoir. In particular, we show that non-resonant ratios of these time scales yield maximal memory capacities. We achieve this by translating the periodically driven delay-dynamical system into an equivalent network. Networks that originate from a system with resonant delay times and clock cycles fail to utilize all of their degrees of freedom, which causes the degradation of their performance.
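In the delay-based setup, a single nonlinear node with delayed feedback emulates a network of virtual nodes: each input sample is held for one clock cycle, modulated by a fixed random mask, and the node state is sampled once per virtual-node interval. A toy discrete-time sketch (the parameters `eta` and `gamma` and the clock-cycle length are illustrative; setting the delay `tau` equal to the clock cycle gives the resonant case the paper warns about):

```python
import numpy as np

def delay_reservoir(u, tau, n_virtual, eta=0.5, gamma=0.5, seed=0):
    """Discrete-time sketch of a time-delay reservoir: one nonlinear node
    with feedback delayed by `tau` steps. Each input sample u[k] is held
    for a clock cycle of `n_virtual` steps and modulated by a fixed mask."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], n_virtual)
    clock = n_virtual                          # steps per input sample
    total = len(u) * clock
    x = np.zeros(total + tau)                  # x[:tau] is the initial history
    for t in range(total):
        drive = gamma * u[t // clock] * mask[t % clock]
        x[t + tau] = np.tanh(eta * x[t] + drive)
    # one n_virtual-dimensional state vector per input sample
    return x[tau:].reshape(len(u), n_virtual)

u = np.sin(np.linspace(0, 20, 400))
resonant = delay_reservoir(u, tau=25, n_virtual=25)      # tau equals clock cycle
non_resonant = delay_reservoir(u, tau=26, n_virtual=25)  # mismatched by one step
```

In the resonant case, virtual node i is fed back only its own past value, so the emulated network decouples into parallel chains; a one-step mismatch mixes neighbouring virtual nodes, which is the mechanism behind the memory-capacity boost reported in the paper.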
Affiliation(s)
- Florian Stelzer
- Institute of Mathematics, Technische Universität Berlin, D-10623, Germany; Department of Mathematics, Humboldt-Universität zu Berlin, D-12489, Germany.
- André Röhm
- Institute of Theoretical Physics, Technische Universität Berlin, D-10623, Germany; Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (CSIC-UIB), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain.
- Kathy Lüdge
- Institute of Theoretical Physics, Technische Universität Berlin, D-10623, Germany.
- Serhiy Yanchuk
- Institute of Mathematics, Technische Universität Berlin, D-10623, Germany.

22
22
|
Algar SD, Lymburn T, Stemler T, Small M, Jüngling T. Learned emergence in selfish collective motion. CHAOS (WOODBURY, N.Y.) 2019; 29:123101. [PMID: 31893659 DOI: 10.1063/1.5120776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 11/11/2019] [Indexed: 06/10/2023]
Abstract
To understand the collective motion of many individuals, we often rely on agent-based models with rules that may be computationally complex and involved. For biologically inspired systems in particular, this raises questions about whether the imposed rules are necessarily an accurate reflection of what is being followed. The basic premise of updating one's state according to some underlying motivation is well suited to the realm of reservoir computing; however, entire swarms of individuals are yet to be tasked with learning movement in this framework. This work focuses on the specific case of many selfish individuals simultaneously optimizing their domains in a manner conducive to reducing their personal risk of predation. Using an echo state network and data generated from the agent-based model, we show that, with an appropriate representation of input and output states, this selfish movement can be learned. This suggests that a more sophisticated neural network, such as a brain, could also learn this behavior and provides an avenue to further the search for realistic movement rules in systems of autonomous individuals.
Affiliation(s)
- Shannon D Algar
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Lymburn
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Stemler
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Jüngling
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia

23
23
|
Lymburn T, Walker DM, Small M, Jüngling T. The reservoir's perspective on generalized synchronization. CHAOS (WOODBURY, N.Y.) 2019; 29:093133. [PMID: 31575144 DOI: 10.1063/1.5120733] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/20/2019] [Accepted: 09/08/2019] [Indexed: 06/10/2023]
Abstract
We employ reservoir computing for a reconstruction task in coupled chaotic systems, across a range of dynamical relationships including generalized synchronization. For a drive-response setup, a temporal representation of the synchronized state is discussed as an alternative to the known instantaneous form. The reservoir has access to both representations through its fading memory property, each with advantages in different dynamical regimes. We also extract signatures of the maximal conditional Lyapunov exponent in the performance of variations of the reservoir topology. Moreover, the reservoir model reproduces different levels of consistency where there is no synchronization. In a bidirectional coupling setup, high reconstruction accuracy is achieved despite poor observability and independent of generalized synchronization.
Affiliation(s)
- Thomas Lymburn
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- David M Walker
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Michael Small
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia
- Thomas Jüngling
- Complex Systems Group, Department of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009, Australia

24
24
|
Carroll TL, Pecora LM. Network structure effects in reservoir computers. CHAOS (WOODBURY, N.Y.) 2019; 29:083130. [PMID: 31472504 DOI: 10.1063/1.5097686] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Accepted: 08/06/2019] [Indexed: 06/10/2023]
Abstract
A reservoir computer is a complex nonlinear dynamical system that has been shown to be useful for solving certain problems, such as prediction of chaotic signals, speech recognition, or control of robotic systems. Typically, a reservoir computer is constructed by connecting a large number of nonlinear nodes in a network, driving the nodes with an input signal and using the node outputs to fit a training signal. In this work, we set up reservoirs where the edges (or connections) between all the network nodes are either +1 or 0 and proceed to alter the network structure by flipping some of these edges from +1 to -1. We use this simple network because it turns out to be easy to characterize; we may use the fraction of edges flipped as a measure of how much we have altered the network. In some cases, the network can be rearranged in a finite number of ways without changing its structure; these rearrangements are symmetries of the network, and the number of symmetries is also useful for characterizing the network. We find that changing the number of edges flipped in the network changes the rank of the covariance matrix of the time series from the different nodes in the network and speculate that this rank is important for understanding the reservoir computer performance.
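The edge-flipping experiment can be sketched as follows: start from a network whose edges are all +1, flip a chosen fraction to -1, drive the reservoir with a signal, and measure the rank of the covariance matrix of the node time series. A minimal version (the network size, spectral scaling, and random drive signal are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def covariance_rank(frac_flipped, n=100, p=0.1, steps=1000, seed=3):
    """Rank of the covariance matrix of the node time series of a
    reservoir whose edges are +1 or 0, with a fraction flipped to -1."""
    rng = np.random.default_rng(seed)
    A = (rng.random((n, n)) < p).astype(float)          # edges are +1 or 0
    flips = (rng.random((n, n)) < frac_flipped) & (A != 0)
    A[flips] *= -1.0                                    # flip selected edges to -1
    A *= 0.5 / (max(abs(np.linalg.eigvals(A))) + 1e-12) # keep the dynamics stable
    w_in = rng.uniform(-1, 1, n)
    u = rng.uniform(-1, 1, steps)
    x = np.zeros(n)
    states = np.empty((steps, n))
    for t in range(steps):
        x = np.tanh(A @ x + w_in * u[t])
        states[t] = x
    # covariance across nodes, after discarding the transient
    return np.linalg.matrix_rank(np.cov(states[200:].T))

ranks = {f: covariance_rank(f) for f in (0.0, 0.25, 0.5)}
```

Comparing `ranks` across flip fractions mirrors the paper's observation that the covariance rank, a proxy for signal diversity, depends on how far the network has been altered.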
Affiliation(s)
- T L Carroll
- US Naval Research Laboratory, Washington, DC 20375, USA
- L M Pecora
- US Naval Research Laboratory, Washington, DC 20375, USA