Reference Citation Analysis
For: Wei Q, Lewis FL, Sun Q, Yan P, Song R. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis. IEEE Trans Cybern 2017;47:1224-1237. [PMID: 27093714 DOI: 10.1109/tcyb.2016.2542923] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5]
Cited by Other Articles
1. Zhao M, Wang D, Qiao J. Neural-network-based accelerated safe Q-learning for optimal control of discrete-time nonlinear systems with state constraints. Neural Netw 2025;186:107249. [PMID: 39955957 DOI: 10.1016/j.neunet.2025.107249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
2. Xiang Z, Li P, Zou W, Ahn CK. Data-Based Optimal Switching and Control With Admissibility Guaranteed Q-Learning. IEEE Trans Neural Netw Learn Syst 2025;36:5963-5973. [PMID: 38837921 DOI: 10.1109/tnnls.2024.3405739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
3. Lieu UT, Yoshinaga N. Dynamic control of self-assembly of quasicrystalline structures through reinforcement learning. Soft Matter 2025;21:514-525. [PMID: 39744960 DOI: 10.1039/d4sm01038h] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
4. Song S, Gong D, Zhu M, Zhao Y, Huang C. Data-Driven Optimal Tracking Control for Discrete-Time Nonlinear Systems With Unknown Dynamics Using Deterministic ADP. IEEE Trans Neural Netw Learn Syst 2025;36:1184-1198. [PMID: 37847626 DOI: 10.1109/tnnls.2023.3323142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
5. Shen Z, Dong T, Huang T. Asynchronous iterative Q-learning based tracking control for nonlinear discrete-time multi-agent systems. Neural Netw 2024;180:106667. [PMID: 39216294 DOI: 10.1016/j.neunet.2024.106667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
6. Xiong K, Zhao Q, Yuan L. Calibration Method for Relativistic Navigation System Using Parallel Q-Learning Extended Kalman Filter. Sensors (Basel) 2024;24:6186. [PMID: 39409226 PMCID: PMC11478926 DOI: 10.3390/s24196186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
7. Yuan X, Wang Y, Liu J, Sun C. Action Mapping: A Reinforcement Learning Method for Constrained-Input Systems. IEEE Trans Neural Netw Learn Syst 2023;34:7145-7157. [PMID: 35025751 DOI: 10.1109/tnnls.2021.3138924] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
8. Xiong K, Zhou P, Wei C. Autonomous Navigation of Unmanned Aircraft Using Space Target LOS Measurements and QLEKF. Sensors (Basel) 2022;22:6992. [PMID: 36146339 PMCID: PMC9503636 DOI: 10.3390/s22186992] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
9. Liu C, Zhang H, Luo Y, Su H. Dual Heuristic Programming for Optimal Control of Continuous-Time Nonlinear Systems Using Single Echo State Network. IEEE Trans Cybern 2022;52:1701-1712. [PMID: 32396118 DOI: 10.1109/tcyb.2020.2984952] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
10. Optimal Reinforcement Learning-Based Control Algorithm for a Class of Nonlinear Macroeconomic Systems. Mathematics 2022. [DOI: 10.3390/math10030499] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
11. Wei Q, Zhu L, Song R, Zhang P, Liu D, Xiao J. Model-Free Adaptive Optimal Control for Unknown Nonlinear Multiplayer Nonzero-Sum Game. IEEE Trans Neural Netw Learn Syst 2022;33:879-892. [PMID: 33108297 DOI: 10.1109/tnnls.2020.3030127] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7]
12. Jiang WC, Narayanan V, Li JS. Model Learning and Knowledge Sharing for Cooperative Multiagent Systems in Stochastic Environment. IEEE Trans Cybern 2021;51:5717-5727. [PMID: 31944970 PMCID: PMC7338261 DOI: 10.1109/tcyb.2019.2958912] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
13. Liu C, Zhang H, Sun S, Ren H. Online H∞ control for continuous-time nonlinear large-scale systems via single echo state network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.017] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
14. Sun C, Li X, Sun Y. A Parallel Framework of Adaptive Dynamic Programming Algorithm With Off-Policy Learning. IEEE Trans Neural Netw Learn Syst 2021;32:3578-3587. [PMID: 32833647 DOI: 10.1109/tnnls.2020.3015767] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
15. Wei Q, Liao Z, Yang Z, Li B, Liu D. Continuous-Time Time-Varying Policy Iteration. IEEE Trans Cybern 2020;50:4958-4971. [PMID: 31329153 DOI: 10.1109/tcyb.2019.2926631] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
16. Yu J, Su Y, Liao Y. The Path Planning of Mobile Robot by Neural Networks and Hierarchical Reinforcement Learning. Front Neurorobot 2020;14:63. [PMID: 33132890 PMCID: PMC7561669 DOI: 10.3389/fnbot.2020.00063] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0]
17. Zhang J, Yang J, Zhang Y, Bevan MA. Controlling colloidal crystals via morphing energy landscapes and reinforcement learning. Sci Adv 2020;6(48):eabd6716. [PMID: 33239301 PMCID: PMC7688337 DOI: 10.1126/sciadv.abd6716] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2]
18. Wei Q, Song R, Liao Z, Li B, Lewis FL. Discrete-Time Impulsive Adaptive Dynamic Programming. IEEE Trans Cybern 2020;50:4293-4306. [PMID: 30990209 DOI: 10.1109/tcyb.2019.2906694] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
19. Ding D, Wang Z, Han QL. Neural-Network-Based Consensus Control for Multiagent Systems With Input Constraints: The Event-Triggered Case. IEEE Trans Cybern 2020;50:3719-3730. [PMID: 31329155 DOI: 10.1109/tcyb.2019.2927471] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8]
20. Yang L, Sun Q, Ma D, Wei Q. Nash Q-learning based equilibrium transfer for integrated energy management game with We-Energy. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.01.109] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6]
21. Wang H, Zou Y, Liu PX, Zhao X, Bao J, Zhou Y. Neural-network-based tracking control for a class of time-delay nonlinear systems with unmodeled dynamics. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.091] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2]
22. A Confrontation Decision-Making Method with Deep Reinforcement Learning and Knowledge Transfer for Multi-Agent System. Symmetry (Basel) 2020. [DOI: 10.3390/sym12040631] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0]
23. Zhang Y, Zhao B, Liu D. Deterministic policy gradient adaptive dynamic programming for model-free optimal control. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.032] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2]
24. Treesatayapun C. Knowledge-based reinforcement learning controller with fuzzy-rule network: experimental validation. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04509-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5]
25. Neural-network-based learning algorithms for cooperative games of discrete-time multi-player systems with control constraints via adaptive dynamic programming. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.02.107] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8]
26. Lingam G, Rout RR, Somayajulu DVLN. Adaptive deep Q-learning model for detecting social bots and influential users in online social networks. Appl Intell 2019. [DOI: 10.1007/s10489-019-01488-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5]
27. Nguyen T, Mukhopadhyay S, Babbar-Sebens M. Why the 'selfish' optimizing agents could solve the decentralized reinforcement learning problems. AI Commun 2019. [DOI: 10.3233/aic-180596] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
28. Song R, Zhu L. Stable value iteration for two-player zero-sum game of discrete-time nonlinear systems based on adaptive dynamic programming. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.002] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0]
29. Optimal Design of Wireless Charging Electric Bus System Based on Reinforcement Learning. Energies 2019. [DOI: 10.3390/en12071229] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2]
30. Path planning of a mobile robot in a free-space environment using Q-learning. Prog Artif Intell 2018. [DOI: 10.1007/s13748-018-00168-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4]
31. A data-driven online ADP control method for nonlinear system based on policy iteration and nonlinear MIMO decoupling ADRC. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.04.024] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6]
32. Jiang H, Zhang H, Han J, Zhang K. Iterative adaptive dynamic programming methods with neural network implementation for multi-player zero-sum games. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.04.005] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6]
33. Nguyen T, Mukhopadhyay S. Two-phase selective decentralization to improve reinforcement learning systems with MDP. AI Commun 2018. [DOI: 10.3233/aic-180766] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
34. Wei Q, Liu D, Lin Q, Song R. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games. IEEE Trans Neural Netw Learn Syst 2018;29:957-969. [PMID: 28141530 DOI: 10.1109/tnnls.2016.2638863] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7]
35. Wei Q, Li B, Song R. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors. IEEE Trans Neural Netw Learn Syst 2018;29:1226-1238. [PMID: 28362617 DOI: 10.1109/tnnls.2017.2661865] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9]
36. Li J, Kiumarsi B, Chai T, Lewis FL, Fan J. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes. IEEE Trans Cybern 2017;47:4547-4558. [PMID: 29125464 DOI: 10.1109/tcyb.2017.2761841] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]
37. Zhang H, Jiang H, Luo C, Xiao G. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms. IEEE Trans Cybern 2017;47:3331-3340. [PMID: 28113535 DOI: 10.1109/tcyb.2016.2611613] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5]
38. Wei Q, Liu D, Lin Q, Song R. Discrete-Time Optimal Control via Local Policy Iteration Adaptive Dynamic Programming. IEEE Trans Cybern 2017;47:3367-3379. [PMID: 27448382 DOI: 10.1109/tcyb.2016.2586082] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5]
39. Optimization of electricity consumption in office buildings based on adaptive dynamic programming. Soft Comput 2016. [DOI: 10.1007/s00500-016-2194-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]