Reference Citation Analysis
For: Rizvi SAA, Lin Z. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem. IEEE Trans Neural Netw Learn Syst 2019;30:1523-1536. [PMID: 30296242 DOI: 10.1109/tnnls.2018.2870075] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2]
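As general background for the problem named in the title above, the following is a minimal, illustrative Python sketch of model-free policy-iteration Q-learning for a discrete-time LQR problem. It uses full state feedback in the classic Bradtke style and is not the output-feedback algorithm of Rizvi and Lin (which works from input-output data); the plant matrices A and B, the cost weights, the exploration-noise level, and the iteration counts are all hypothetical and chosen only for illustration.

# Illustrative sketch: Q-learning policy iteration for a discrete-time LQR (state feedback).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical open-loop-stable second-order plant (for illustration only).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Qc = np.eye(2)          # state weight
Rc = np.eye(1)          # input weight
n, m = B.shape

def phi(z):
    # Quadratic feature vector for a symmetric H: upper-triangular products of z.
    outer = np.outer(z, z)
    idx = np.triu_indices(len(z))
    scale = np.where(idx[0] == idx[1], 1.0, 2.0)   # off-diagonal terms appear twice in z'Hz
    return scale * outer[idx]

def unvec_H(theta, d):
    # Rebuild the symmetric matrix H from its upper-triangular parameterization.
    H = np.zeros((d, d))
    H[np.triu_indices(d)] = theta
    return H + H.T - np.diag(np.diag(H))

K = np.zeros((m, n))    # initial stabilizing gain (A itself is stable here)
for it in range(6):     # a few policy-iteration steps
    Phi, c = [], []
    x = rng.standard_normal(n)
    for k in range(400):
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploration noise for identifiability
        x_next = A @ x + B @ u
        u_next = -K @ x_next                        # on-policy action at x_{k+1}
        z, z_next = np.concatenate([x, u]), np.concatenate([x_next, u_next])
        # Bellman equation: z'Hz - z_next'Hz_next = x'Qx + u'Ru, linear in the entries of H.
        Phi.append(phi(z) - phi(z_next))
        c.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
        if np.linalg.norm(x) > 50:                  # re-excite if the trajectory drifts
            x = rng.standard_normal(n)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unvec_H(theta, n + m)
    Huu, Hux = H[n:, n:], H[n:, :n]
    K = np.linalg.solve(Huu, Hux)                   # policy improvement: u = -K x

print("learned gain K :", K)

# Check against the Riccati solution obtained by simple fixed-point iteration.
P = np.eye(n)
for _ in range(500):
    P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
K_opt = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print("Riccati gain K*:", K_opt)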
Cited by other article(s):
1. Shen Z, Dong T, Huang T. Asynchronous iterative Q-learning based tracking control for nonlinear discrete-time multi-agent systems. Neural Netw 2024;180:106667. [PMID: 39216294 DOI: 10.1016/j.neunet.2024.106667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
2. Wang J, Wang W, Liang X. Finite-horizon optimal secure tracking control under denial-of-service attacks. ISA Transactions 2024;149:44-53. [PMID: 38692974 DOI: 10.1016/j.isatra.2024.04.025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
3. Wang J, Wu J, Shen H, Cao J, Rutkowski L. Fuzzy H∞ Control of Discrete-Time Nonlinear Markov Jump Systems via a Novel Hybrid Reinforcement Q-Learning Method. IEEE Transactions on Cybernetics 2023;53:7380-7391. [PMID: 36417712 DOI: 10.1109/tcyb.2022.3220537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
4. Perrusquía A, Guo W. Reward inference of discrete-time expert's controllers: A complementary learning approach. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.02.079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
5. Perrusquia A, Guo W. A Closed-Loop Output Error Approach for Physics-Informed Trajectory Inference Using Online Data. IEEE Transactions on Cybernetics 2023;53:1379-1391. [PMID: 36129867 DOI: 10.1109/tcyb.2022.3202864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
6. Wang D, Ren J, Ha M. Discounted linear Q-learning control with novel tracking cost and its stability. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
7. Zhang D, Ye Z, Feng G, Li H. Intelligent Event-Based Fuzzy Dynamic Positioning Control of Nonlinear Unmanned Marine Vehicles Under DoS Attack. IEEE Transactions on Cybernetics 2022;52:13486-13499. [PMID: 34860659 DOI: 10.1109/tcyb.2021.3128170] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
8. Rizvi SAA, Pertzborn AJ, Lin Z. Reinforcement Learning Based Optimal Tracking Control Under Unmeasurable Disturbances With Application to HVAC Systems. IEEE Transactions on Neural Networks and Learning Systems 2022;33:7523-7533. [PMID: 34129505 PMCID: PMC9703879 DOI: 10.1109/tnnls.2021.3085358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
9. Perrusquía A. Human-behavior learning: A new complementary learning perspective for optimal decision making controllers. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
10. Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.03.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
11. Long M, Su H, Zeng Z. Output-Feedback Global Consensus of Discrete-Time Multiagent Systems Subject to Input Saturation via Q-Learning Method. IEEE Transactions on Cybernetics 2022;52:1661-1670. [PMID: 32396125 DOI: 10.1109/tcyb.2020.2987385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
12. Integral reinforcement learning-based optimal output feedback control for linear continuous-time systems with input delay. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.073] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
13. Luo B, Yang Y, Liu D. Policy Iteration Q-Learning for Data-Based Two-Player Zero-Sum Game of Linear Discrete-Time Systems. IEEE Transactions on Cybernetics 2021;51:3630-3640. [PMID: 32092032 DOI: 10.1109/tcyb.2020.2970969] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
14. Calafiore GC, Possieri C. Output Feedback Q-Learning for Linear-Quadratic Discrete-Time Finite-Horizon Control Problems. IEEE Transactions on Neural Networks and Learning Systems 2021;32:3274-3281. [PMID: 32745011 DOI: 10.1109/tnnls.2020.3010304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
15. Intelligent adaptive optimal control using incremental model-based global dual heuristic programming subject to partial observability. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107153] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
16. Adaptive output-feedback optimal control for continuous-time linear systems based on adaptive dynamic programming approach. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.070] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5]