2. Chen Y, Cheng Q, Cheng Y, Yang H, Yu H. Applications of Recurrent Neural Networks in Environmental Factor Forecasting: A Review. Neural Comput 2018;30:2855-2881. PMID: 30216144. DOI: 10.1162/neco_a_01134.
Abstract
Analysis and forecasting of sequential data, key problems in many domains of engineering and science, have attracted the attention of researchers from different communities. For predicting the future probability of events from time series, recurrent neural networks (RNNs) are an effective tool: they retain the learning ability of feedforward neural networks while extending their expressive power through dynamic equations, and they can model a variety of computational structures. Researchers have developed RNNs with many different architectures and topologies. To summarize this work and provide guidelines for modeling and novel applications in future studies, this review focuses on applications of RNNs to time series forecasting of environmental factors. We present the structure, processing flow, and advantages of RNNs; analyze how various RNNs have been applied to time series forecasting; discuss the limitations and challenges of RNN-based applications along with future research directions; and close with a summary of RNN applications in forecasting.
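The central object in this review is the RNN dynamic (state) equation. As a minimal structural sketch, not any specific model from the review, the NumPy toy below (all names hypothetical, all weights random and untrained) applies the recurrence h_t = tanh(W_x x_t + W_h h_{t-1} + b) to a scalar series and produces a multi-step forecast by feeding each one-step prediction back in as the next input; training, for example by backpropagation through time or the RTRL variants in entries 7 and 8 below, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 8

# Untrained, randomly initialized weights: this sketch shows only the
# recurrence structure, not a fitted forecaster.
W_x = rng.normal(scale=0.3, size=(n_hidden, 1))         # input -> hidden
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # hidden -> hidden
W_y = rng.normal(scale=0.3, size=(1, n_hidden))         # hidden -> output
b = np.zeros((n_hidden, 1))

def step(x_t, h_prev):
    """One application of the RNN dynamic equation."""
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev + b)
    y_t = W_y @ h_t                    # one-step-ahead estimate
    return h_t, y_t

def forecast(history, horizon):
    """Warm the hidden state up on observed data, then run the model
    free: each prediction becomes the next input."""
    h = np.zeros((n_hidden, 1))
    y = np.zeros((1, 1))
    for x in history:
        h, y = step(np.array([[x]]), h)
    preds = []
    for _ in range(horizon):
        h, y = step(y, h)
        preds.append(float(y[0, 0]))
    return preds

print(forecast(np.sin(np.linspace(0, 6, 60)), horizon=5))
```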
Affiliations
- Yingyi Chen, Qianqian Cheng, Yanjun Cheng, Hao Yang, and Huihui Yu: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture, Beijing 100125, China; and Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, Beijing 100083, China.
3. Song Q, Zhao X, Fan H, Wang D. Robust Recurrent Kernel Online Learning. IEEE Trans Neural Netw Learn Syst 2017;28:1068-1081. PMID: 26890925. DOI: 10.1109/tnnls.2016.2518223.
Abstract
We propose a robust recurrent kernel online learning (RRKOL) algorithm based on the celebrated real-time recurrent learning approach, exploiting the kernel trick in a recurrent online training manner. The RRKOL algorithm guarantees weight convergence with regularized risk management, using adaptive recurrent hyperparameters for superior generalization performance. Based on a new concept of the structure update error with a variable parameter length, we are the first to formulate the structure update error in detail, so that the weight convergence and robust stability proofs can be integrated with a kernel sparsification scheme on solid theoretical ground. The RRKOL algorithm automatically weighs the regularized term in the recurrent loss function, so that we not only minimize the estimation error but also improve generalization performance through sparsification, as supported by simulations.
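The RRKOL derivation itself is not reproduced here. As a hedged sketch of only the underlying idea it builds on, the kernel trick in online learning, the toy below keeps a growing dictionary of centers, predicts with a weighted sum of Gaussian kernels, and, as a crude stand-in for the paper's sparsification scheme, adds a center only when the error is large. The class name, threshold, and step size are all hypothetical; RRKOL's recurrent hyperparameters, regularized risk term, and convergence guarantees are not modeled.

```python
import numpy as np

def gauss_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

class OnlineKernelLearner:
    """Toy online kernel regressor with error-driven sparsification
    (hypothetical illustration, not the RRKOL algorithm)."""

    def __init__(self, eta=0.2, gamma=1.0, grow_threshold=0.05):
        self.eta = eta                      # step size (arbitrary)
        self.gamma = gamma                  # kernel width (arbitrary)
        self.grow_threshold = grow_threshold
        self.centers, self.alphas = [], []

    def predict(self, x):
        # Kernel trick: the function is a weighted sum of kernels
        # evaluated at stored centers; no explicit feature map needed.
        return sum(a * gauss_kernel(c, x, self.gamma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        err = y - self.predict(x)
        # Sparsification: grow the dictionary only when the current
        # sample is poorly explained by the existing centers.
        if not self.centers or abs(err) > self.grow_threshold:
            self.centers.append(np.asarray(x, dtype=float))
            self.alphas.append(self.eta * err)
        return err

# Toy usage: learn the next step of a sine wave online.
learner = OnlineKernelLearner()
for t in range(300):
    learner.update([np.sin(0.1 * t)], np.sin(0.1 * (t + 1)))
print(len(learner.centers), "centers retained")
```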
5. Qiao J, Li S, Li W. Mutual information based weight initialization method for sigmoidal feedforward neural networks. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.05.054.
7. Fan H, Song Q. A linear recurrent kernel online learning algorithm with sparse updates. Neural Netw 2013;50:142-153. PMID: 24300551. DOI: 10.1016/j.neunet.2013.11.011.
Abstract
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term into the estimation of the current output, which makes past information reusable for the algorithm's updates in the form of a recurrent gradient term. To ensure that reusing this recurrent gradient indeed accelerates convergence, a novel hybrid recurrent training scheme switches the learning of recurrent information on or off according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate that guarantees system weight convergence at each training iteration. The learning rate is set to zero when training violates the derived convergence conditions, which makes the update process sparse. Theoretical analyses of weight convergence are presented, and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy.
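As a hedged toy, not the paper's kernel algorithm and without its derived convergence conditions, the sketch below illustrates two mechanisms the abstract names: a linear recurrent term in the current-output estimate, and updates that are skipped entirely (a zero learning rate) when the training error is small. The function name, gating threshold, and step size are arbitrary placeholders for the paper's principled conditions.

```python
import numpy as np

def train_sparse_recurrent(X, y, eta=0.01, gate=0.1):
    """Toy online learner (hypothetical): linear features plus a linear
    recurrent term a * y_prev; a step is taken only when |error| > gate."""
    w = np.zeros(X.shape[1])  # feature weights
    a = 0.0                   # weight on the previous output (recurrent term)
    y_prev = 0.0
    skipped = 0
    for x_t, y_t in zip(X, y):
        y_hat = w @ x_t + a * y_prev  # past information reused in the estimate
        e_t = y_t - y_hat
        if abs(e_t) > gate:
            # Gradient step on the squared error.
            w += eta * e_t * x_t
            a += eta * e_t * y_prev
        else:
            skipped += 1              # sparse update: learning rate is zero
        y_prev = y_hat
    return w, a, skipped

# Toy usage on a synthetic linear target (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.05, size=500)
w, a, skipped = train_sparse_recurrent(X, y)
print(w, a, f"{skipped} of 500 updates skipped")
```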
Affiliations
- Haijin Fan and Qing Song: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore.
8. Chang LC, Chen PA, Chang FJ. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks. IEEE Trans Neural Netw Learn Syst 2012;23:1269-1278. PMID: 24807523. DOI: 10.1109/tnnls.2012.2200695.
Abstract
A reliable forecast of future events is of great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully for one-step-ahead forecasting of various time series. This paper proposes a reinforced RTRL algorithm for 2SA forecasts with RNNs and investigates its performance on two well-known benchmark time series and streamflow data from flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm adequately forecasts the benchmark (theoretical) time series, significantly improves the accuracy of flood forecasts, and effectively reduces time-lag effects.
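RTRL, the base algorithm here, propagates the sensitivities of the hidden state with respect to every weight forward in time, so a gradient is available online at each step without unrolling the network. Below is a minimal one-step-ahead RTRL sketch for a scalar series (NumPy, hypothetical names, tiny hidden state since the sensitivity tensor costs O(n^3) memory). The paper's reinforced 2SA adjustment is not reproduced; a plain 2SA forecast would simply feed the prediction for t+1 back in as the input used to produce the prediction for t+2.

```python
import numpy as np

def rtrl_train(series, n_hidden=4, eta=0.02, seed=0):
    """Minimal real-time recurrent learning (RTRL) for one-step-ahead
    forecasting of a scalar series. As usual for online RTRL, the
    sensitivities are carried across weight updates as an approximation."""
    rng = np.random.default_rng(seed)
    n = n_hidden
    W = rng.normal(scale=0.2, size=(n, n))  # recurrent weights
    U = rng.normal(scale=0.2, size=n)       # input weights (scalar input)
    b = np.zeros(n)
    V = rng.normal(scale=0.2, size=n)       # readout weights

    h = np.zeros(n)
    P_W = np.zeros((n, n, n))  # P_W[i, j, k] = d h_i / d W_jk
    P_U = np.zeros((n, n))     # P_U[i, j]    = d h_i / d U_j
    P_b = np.zeros((n, n))     # P_b[i, j]    = d h_i / d b_j
    I = np.eye(n)

    for t in range(len(series) - 1):
        x, target = series[t], series[t + 1]
        h_prev = h
        h = np.tanh(W @ h_prev + U * x + b)
        d = 1.0 - h ** 2                    # tanh'(net)

        # Forward-propagate the sensitivities (the core of RTRL):
        # direct effect of each weight plus the effect routed through h_prev.
        P_W = d[:, None, None] * (np.einsum('ij,k->ijk', I, h_prev)
                                  + np.einsum('il,ljk->ijk', W, P_W))
        P_U = d[:, None] * (I * x + W @ P_U)
        P_b = d[:, None] * (I + W @ P_b)

        y = V @ h
        e = y - target                      # one-step-ahead error
        dL_dh = e * V                       # back through the readout

        # Online gradient step using the stored sensitivities.
        V -= eta * e * h
        W -= eta * np.einsum('i,ijk->jk', dL_dh, P_W)
        U -= eta * dL_dh @ P_U
        b -= eta * dL_dh @ P_b
    return W, U, b, V

# Toy usage: fit one-step-ahead dynamics of a sine wave online.
series = np.sin(np.linspace(0.0, 25.0, 500))
W, U, b, V = rtrl_train(series)
```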