1
He X, Guo L, He D. Accelerated quadratic penalty dynamic approaches with applications to distributed optimization. Neural Netw 2025; 184:107032. [PMID: 39709646] [DOI: 10.1016/j.neunet.2024.107032]
Abstract
In this paper, we explore accelerated continuous-time dynamic approaches with vanishing damping α/t, driven by a quadratic penalty function designed for linearly constrained convex optimization problems. We replace the linear constraints with penalty terms incorporated into the objective function, where the penalty coefficient grows to +∞ as t tends to infinity. With appropriate penalty coefficients, we establish convergence rates of O(1/t^{min{2α/3, 2}}) for both the objective residual and the feasibility violation when α > 0, and demonstrate that these convergence rates are robust to external perturbations. Furthermore, we apply the proposed dynamic approach to three distributed optimization problems: a distributed constrained consensus problem, a distributed extended monotropic optimization problem, and a distributed optimization problem with separated equations, yielding three variant distributed dynamic approaches. Numerical examples show the effectiveness of the proposed quadratic penalty dynamic approaches.
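To make the dynamic concrete, here is a minimal numerical sketch, not the paper's exact system: a second-order ODE with vanishing damping α/t applied to the quadratically penalized objective F_t(x) = f(x) + (β(t)/2)||Ax - b||^2, with an assumed objective f(x) = ||x||^2/2 and an assumed penalty schedule β(t) = (1 + t)^2.

```python
# Minimal sketch (assumptions: f(x) = ||x||^2 / 2, beta(t) = (1 + t)^2; not
# the paper's exact system): inertial flow with vanishing damping alpha/t on
# the quadratically penalized objective F_t(x) = f(x) + beta(t)/2 * ||Ax - b||^2.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, m, alpha = 10, 4, 3.0
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)              # guarantees Ax = b is feasible

def beta(t):                                # penalty coefficient growing to +infinity
    return (1.0 + t) ** 2

def ode(t, z):
    x, v = z[:n], z[n:]
    grad = x + beta(t) * A.T @ (A @ x - b)  # gradient of the penalized objective
    return np.concatenate([v, -(alpha / t) * v - grad])

z0 = np.concatenate([rng.standard_normal(n), np.zeros(n)])
sol = solve_ivp(ode, (1.0, 20.0), z0, rtol=1e-8, atol=1e-8)  # start at t = 1 to avoid alpha/0
x_T = sol.y[:n, -1]
print("feasibility violation:", np.linalg.norm(A @ x_T - b))
```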
Affiliation(s)
- Xin He: School of Science, Xihua University, Chengdu, Sichuan 610039, China.
- Luyao Guo: School of Mathematics, Southeast University, Nanjing 210096, China.
- Dong He: Pittsburgh Institute, Sichuan University, Chengdu, Sichuan 610207, China.
2
Liu J, Liao X, Dong JS. A Recurrent Neural Network Approach for Constrained Distributed Fuzzy Convex Optimization. IEEE Trans Neural Netw Learn Syst 2024; 35:9743-9757. [PMID: 37022084] [DOI: 10.1109/tnnls.2023.3236607]
Abstract
This article investigates a class of constrained distributed fuzzy convex optimization problems in which the objective function is the sum of local fuzzy convex objective functions and the constraints comprise partial-order relations and closed convex set constraints. Over an undirected, connected communication network, each node knows only its own objective function and constraints, and the local objective function and partial-order relation functions may be nonsmooth. To solve this problem, a recurrent neural network approach based on a differential inclusion framework is proposed. The network model is constructed using the idea of a penalty function, which eliminates the need to estimate penalty parameters in advance. Theoretical analysis proves that the state solution of the network enters the feasible region in finite time, remains there, and finally reaches consensus at an optimal solution of the distributed fuzzy optimization problem. Furthermore, the stability and global convergence of the network do not depend on the choice of initial state. A numerical example and an intelligent ship output-power optimization problem illustrate the feasibility and effectiveness of the proposed approach.
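The penalty-based differential-inclusion idea can be caricatured in a few lines, with the fuzzy and partial-order ingredients of the paper omitted: each node follows a subgradient of an assumed nonsmooth local objective plus a nonsmooth consensus penalty, discretized by forward Euler. The graph, gains, and objectives below are illustrative assumptions.

```python
# Illustrative penalty-type subgradient flow (forward Euler); the paper's
# fuzzy objectives and partial-order constraints are omitted here.
import numpy as np

a = np.array([1.0, 2.0, 4.0, 7.0])        # assumed local data; f_i(x) = |x - a_i|
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)        # adjacency of a 4-node ring
x = np.array([0.0, 5.0, -3.0, 10.0])       # arbitrary initial states
sigma, h = 3.0, 1e-3                       # penalty gain and Euler step

for _ in range(20000):
    sub = np.sign(x - a)                   # a subgradient of |x - a_i|
    penalty = np.array([W[i] @ np.sign(x[i] - x) for i in range(4)])
    x = x - h * (sub + sigma * penalty)

print(np.round(x, 3))                      # consensus near x = 3, a minimizer of sum_i |x - a_i|
```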
3
Guo L, Korovin I, Gorbachev S, Shi X, Gorbacheva N, Cao J. Neurodynamic approaches for multi-agent distributed optimization. Neural Netw 2024; 169:673-684. [PMID: 37972511] [DOI: 10.1016/j.neunet.2023.11.025]
Abstract
This paper considers a class of multi-agent distributed convex optimization problems with a common constraint set and provides several continuous-time neurodynamic approaches. In the problem transformation, l1 and l2 penalty methods are used, respectively, to cast the linear consensus constraint into the objective function, which avoids introducing auxiliary variables and requires information exchange only among primal variables. For nonsmooth cost functions, two differential inclusions with a projection operator are proposed, and their asymptotic behavior and convergence properties are explored without assuming convexity of the differential inclusions. For smooth cost functions, finite- and fixed-time convergent algorithms are provided by harnessing the smoothness of the l2 penalty function via a specifically designed average consensus estimator. Finally, several numerical examples in a multi-agent simulation environment illustrate the effectiveness of the proposed neurodynamic approaches.
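The l2-penalty reformulation in particular is easy to visualize: the consensus constraint x_1 = ... = x_N is replaced by the smooth term (σ/2) xᵀLx, with L the graph Laplacian, and a plain gradient flow is integrated. The graph, objectives, and gain below are assumptions for illustration, not the paper's setup (and the finite/fixed-time estimator is omitted).

```python
# Sketch of the smooth l2-penalty reformulation with a plain gradient flow
# (assumed quadratic objectives; the finite/fixed-time machinery is omitted).
import numpy as np
from scipy.integrate import solve_ivp

L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], float)    # Laplacian of a 4-cycle
a = np.array([1.0, 2.0, 4.0, 7.0])         # f_i(x) = (x - a_i)^2 / 2
sigma = 50.0                               # large penalty gain

def flow(t, x):
    return -((x - a) + sigma * L @ x)      # gradient of sum_i f_i + (sigma/2) x'Lx

sol = solve_ivp(flow, (0.0, 20.0), np.zeros(4), rtol=1e-9, atol=1e-9)
print(np.round(sol.y[:, -1], 3))           # all states near mean(a) = 3.5, up to O(1/sigma) error
```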
Affiliation(s)
- Luyao Guo: School of Mathematics, Southeast University, Nanjing 210096, China.
- Iakov Korovin: Scientific Research Institute of Multiprocessor Computer Systems, Southern Federal University, Taganrog 347928, Russia.
- Xinli Shi: School of Cyber Science & Engineering, Southeast University, Nanjing 210096, China.
- Nadezhda Gorbacheva: Scientific Research Institute of Multiprocessor Computer Systems, Southern Federal University, Taganrog 347928, Russia.
- Jinde Cao: School of Mathematics, Southeast University, Nanjing 210096, China; Yonsei Frontier Lab, Yonsei University, Seoul 03722, South Korea.
4
Wang X, Yang S, Guo Z, Huang T. A Second-Order Projected Primal-Dual Dynamical System for Distributed Optimization and Learning. IEEE Trans Neural Netw Learn Syst 2023; 34:6568-6577. [PMID: 34818195] [DOI: 10.1109/tnnls.2021.3127883]
Abstract
This article develops distributed optimization strategies for a class of machine learning problems over a directed network of computing agents. In these problems, the global objective function is the sum of local objective functions, each of which is convex and known only to the corresponding agent. A second-order Nesterov accelerated dynamical system with a time-varying damping coefficient is developed to address such problems. To handle the constraints effectively, the projected primal-dual method is incorporated into the Nesterov accelerated system. By means of a cocoercive maximal monotone operator, it is shown that the trajectories of the Nesterov accelerated dynamical system reach consensus at the optimal solution, provided the damping coefficient and gains satisfy technical conditions. Finally, the theoretical results are validated on an email classification problem and a logistic regression problem in machine learning.
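As a flavor of such a system, the sketch below integrates an inertial primal flow with vanishing damping γ/t coupled to a dual integrator for the consensus problem min Σ_i f_i(x_i) subject to Lx = 0; the projection step is omitted, and the objectives and coefficients are assumptions, not the article's exact design.

```python
# Hedged sketch of a second-order primal-dual flow for consensus optimization
# (no projection; quadratic objectives and all coefficients are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], float)    # Laplacian of a 4-cycle
a = np.array([1.0, 2.0, 4.0, 7.0])         # f_i(x) = (x - a_i)^2 / 2
gamma = 5.0                                # vanishing damping gamma/t

def ode(t, z):
    x, v, lam = z[:4], z[4:8], z[8:]
    grad = (x - a) + L @ (lam + x)         # Lagrangian gradient plus an augmented term L x
    return np.concatenate([v, -(gamma / t) * v - grad, L @ x])

sol = solve_ivp(ode, (1.0, 300.0), np.zeros(12), rtol=1e-9, atol=1e-9)
print(np.round(sol.y[:4, -1], 2))          # states approach the consensus optimum 3.5
```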
5
Zhou W, Wu J, Liu A, Zhang WA, Yu L. Neurodynamics-based distributed model predictive control of a low-speed two-stroke marine main engine power system. ISA Trans 2023; 138:341-358. [PMID: 36935259] [DOI: 10.1016/j.isatra.2023.03.006]
Abstract
This article studies a steady-operation optimization problem for a low-speed two-stroke marine main engine (LTMME) power system comprising a cooling-water subsystem, a fuel-oil subsystem, and a main-engine subsystem with input and state constraints. First, a distributed model with coupled inputs and states is established for the LTMME power system according to the laws of thermodynamics and kinetics. An optimization problem is then formulated so that the system operates steadily, subject to the constraint conditions of the distributed model and the input and state bounds. The optimization problem is rewritten as a quadratic programming problem, and an iterative distributed model predictive control (DMPC) scheme based on a primal-dual neural network (PDNN) is used to obtain the optimal inputs within the constrained range. Finally, using actual data from the underway ocean vessel Mingzhou 501, which carries an LTMME power system, a group of simulations verifies the effectiveness of the proposed approach.
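The PDNN ingredient is easy to illustrate on a toy quadratic program of the kind such a DMPC scheme solves at each step. The network below is a textbook projection-type primal-dual network, given only to convey the idea; it is not the article's exact model, and the QP data are invented.

```python
# Textbook projection-type PDNN solving  min 1/2 u'Qu + c'u  s.t.  E u <= f
# (toy data; not the article's ship model or exact network).
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
E = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
f = np.array([3.0, 0.0, 0.0])              # u1 + u2 <= 3, u >= 0

def pdnn(t, z):
    u, lam = z[:2], z[2:]
    du = -(Q @ u + c + E.T @ lam)                     # primal descent
    dlam = np.maximum(lam + E @ u - f, 0.0) - lam     # projected dual ascent
    return np.concatenate([du, dlam])

sol = solve_ivp(pdnn, (0.0, 50.0), np.zeros(5), rtol=1e-9, atol=1e-9)
print(np.round(sol.y[:2, -1], 4))          # KKT solution u* = (1.6, 1.4)
```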
Affiliation(s)
- Wei Zhou: Department of Automation, Zhejiang University of Technology, Hangzhou 310023, PR China; College of Marine, Zhejiang Institute of Communications, Hangzhou 311112, PR China.
- Jinhui Wu: Department of Automation, Zhejiang University of Technology, Hangzhou 310023, PR China.
- Andong Liu: Department of Automation, Zhejiang University of Technology, Hangzhou 310023, PR China.
- Wen-An Zhang: Department of Automation, Zhejiang University of Technology, Hangzhou 310023, PR China.
- Li Yu: Department of Automation, Zhejiang University of Technology, Hangzhou 310023, PR China.
6
Zhou W, Zhang HT, Wang J. Sparse Bayesian Learning Based on Collaborative Neurodynamic Optimization. IEEE Trans Cybern 2022; 52:13669-13683. [PMID: 34260368] [DOI: 10.1109/tcyb.2021.3090204]
Abstract
Regression in the sparse Bayesian learning (SBL) framework is usually formulated as a global optimization problem with a nonconvex objective function and solved in a majorization-minimization framework, where solution quality and consistency depend heavily on the algorithm's initial values. To address these shortcomings, this article presents an SBL algorithm based on collaborative neurodynamic optimization (CNO) for finding global optimal solutions of the resulting global optimization problem. The CNO system consists of a population of recurrent neural networks (RNNs), each convergent to a local optimum of the global optimization problem. Reinitialized repeatedly via particle swarm optimization with exchanged local-optima information, the RNNs iteratively improve their search performance until reaching global convergence. The proposed CNO-based SBL algorithm is almost surely convergent to a global optimal solution of the formulated problem. Two applications, sparse signal reconstruction and partial differential equation identification, are presented with experimental results to substantiate the superiority and efficacy of the proposed method in terms of solution optimality and consistency.
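The CNO loop can be caricatured as follows, with the SBL cost replaced by a simple multimodal function and plain gradient flows standing in for the RNNs; the reinitialization rule is a bare-bones PSO-style perturbation, not the paper's exact scheme.

```python
# Toy CNO loop: several gradient flows settle at local minima of a multimodal
# function, then are reinitialized near the best optimum found so far.
# (The SBL cost and the exact PSO update are replaced by simple stand-ins.)
import numpy as np

rng = np.random.default_rng(1)
def f(x):  return np.sin(3 * x) + 0.1 * x ** 2     # stand-in multimodal objective
def df(x): return 3 * np.cos(3 * x) + 0.2 * x

swarm = rng.uniform(-6.0, 6.0, size=8)             # initial states of 8 "RNNs"
for r in range(5):
    for _ in range(2000):                          # each "RNN": Euler gradient flow
        swarm = swarm - 0.01 * df(swarm)
    best = swarm[np.argmin(f(swarm))]              # exchange local-optima information
    swarm = best + rng.normal(0.0, 1.5, size=8)    # PSO-style reinitialization near the best
    swarm[0] = best                                # keep the incumbent

print("estimated global minimizer:", round(float(best), 4))   # near x = -0.5
```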
7
Leung MF, Wang J, Li D. Decentralized Robust Portfolio Optimization Based on Cooperative-Competitive Multiagent Systems. IEEE Trans Cybern 2022; 52:12785-12794. [PMID: 34260366] [DOI: 10.1109/tcyb.2021.3088884]
Abstract
This article addresses decentralized robust portfolio optimization based on multiagent systems. Decentralized robust portfolio optimization is first formulated as two distributed minimax optimization problems in a Markowitz return-risk framework. Cooperative-competitive multiagent systems are then developed and applied to solve the formulated problems; they are shown to reach consensus on the expected stock prices and convergence of the investment allocations through both intergroup and intragroup interactions. Experimental results with stock data from four major markets substantiate the efficacy of the multiagent systems for decentralized robust portfolio optimization.
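For orientation, a generic robust Markowitz minimax formulation of this kind can be written as follows; this is illustrative notation only, and the article's decentralized two-problem formulation may differ.

```latex
% Generic robust Markowitz minimax portfolio problem (illustrative notation):
% w = portfolio weights, U = uncertainty set for expected returns mu,
% Sigma = return covariance, lambda > 0 a risk-aversion weight.
\min_{w \in \Delta_n} \max_{\mu \in U}
  \; \lambda\, w^{\top} \Sigma\, w - \mu^{\top} w,
\qquad
\Delta_n = \{\, w \in \mathbb{R}^n : w \ge 0,\ \mathbf{1}^{\top} w = 1 \,\}.
```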
8
A review of distributed optimization: Problems, models and algorithms. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.06.097]
9
Zhao Y, Liao X, He X. Novel projection neurodynamic approaches for constrained convex optimization. Neural Netw 2022; 150:336-349. [DOI: 10.1016/j.neunet.2022.03.011]
10
Jiang X, Qin S, Xue X, Liu X. A second-order accelerated neurodynamic approach for distributed convex optimization. Neural Netw 2021; 146:161-173. [PMID: 34864224] [DOI: 10.1016/j.neunet.2021.11.013]
Abstract
Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve distributed convex optimization problems with inequality and set constraints. Most existing approaches for such problems are first-order, and it is usually hard to analyze the convergence rate of their state solutions. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve faster convergence rates; however, existing second-order approaches are mostly designed for unconstrained distributed convex optimization and are not suitable for constrained problems. It is shown that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem, and that an error function measuring the performance of the approach exhibits superquadratic convergence. Several numerical examples demonstrate the effectiveness of the presented second-order accelerated neurodynamic approach.
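A minimal discrete-time caricature of an inertial, set-constrained neurodynamic scheme: a heavy-ball step on assumed quadratic local objectives plus a Laplacian consensus penalty, followed by projection onto an assumed box constraint. All parameters are illustrative, not the paper's design.

```python
# Projected heavy-ball iteration for a box-constrained consensus problem
# (assumed objectives, graph, and gains; not the paper's exact dynamics).
import numpy as np

L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], float)        # Laplacian of a 3-node path
a = np.array([1.0, 4.0, 8.0])              # f_i(x) = (x - a_i)^2 / 2
x = np.zeros(3); x_prev = x.copy()
h, mu, sigma = 0.05, 0.8, 5.0              # step size, momentum, consensus gain

for _ in range(4000):
    grad = (x - a) + sigma * L @ x
    x_next = np.clip(x + mu * (x - x_prev) - h * grad, 0.0, 5.0)  # project onto [0, 5]
    x_prev, x = x, x_next

print(np.round(x, 3))                      # near-consensus around mean(a) ~ 4.33, up to penalty error
```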
Affiliation(s)
- Xinrui Jiang: Department of Mathematics, Harbin Institute of Technology, Weihai 264209, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai 264209, China.
- Xiaoping Xue: Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China.
- Xinzhi Liu: Department of Applied Mathematics, University of Waterloo, Waterloo N2L 3G1, Canada.
11
Mo L, Zhang Z, Yu Y. Distributed discrete-time optimization of heterogeneous multi-agent networks with unbounded position constraints and nonconvex velocity constraints. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.09.042]
12
Liu B, Ding Z. Distributed Heuristic Adaptive Neural Networks With Variance Reduction in Switching Graphs. IEEE Trans Cybern 2021; 51:3836-3844. [PMID: 31880575] [DOI: 10.1109/tcyb.2019.2956291]
Abstract
This article proposes a distributed adaptive training method for neural networks over switching communication graphs to handle problems involving massive or privacy-sensitive data. First, the stochastic variance reduced gradient (SVRG) method is used for training the neural networks. The authors then propose a heuristic adaptive consensus algorithm for distributed training, which adaptively adjusts the weighted connectivity matrix based on the performance of each agent over the communication graph. It is proved that the proposed distributed heuristic adaptive neural networks ensure convergence of all agents to the optimum with a single communication among connected neighbors after every training step, even over switching graphs. Simulations verify this result, showing that fewer iterations are required for all agents to reach the optimum with the proposed heuristic adaptive consensus algorithm, and that SVRG greatly reduces the fluctuations caused by stochastic gradients and improves performance at only a small extra computational cost.
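The variance-reduction ingredient, in isolation: a single-node SVRG loop on a least-squares toy. The distributed consensus layer and the adaptive weighting are omitted, and all hyperparameters are illustrative.

```python
# Single-node SVRG on least squares (illustrative; the distributed layer of
# the article is omitted).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5)             # noiseless target, exactly realizable
w = np.zeros(5); lr = 0.02

def grad_i(w, i):                          # gradient of 1/2 (x_i'w - y_i)^2
    return (X[i] @ w - y[i]) * X[i]

for epoch in range(30):
    full = X.T @ (X @ w - y) / len(y)      # full gradient at the snapshot
    w_snap = w.copy()
    for _ in range(len(y)):
        i = rng.integers(len(y))
        w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full)   # variance-reduced step

print("residual:", np.linalg.norm(X @ w - y))   # shrinks toward 0
```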
13
Lu Q, Liao X, Xiang T, Li H, Huang T. Privacy Masking Stochastic Subgradient-Push Algorithm for Distributed Online Optimization. IEEE Trans Cybern 2021; 51:3224-3237. [PMID: 32149669] [DOI: 10.1109/tcyb.2020.2973221]
Abstract
This article investigates distributed online optimization for a group of units communicating over time-varying unbalanced directed networks. The units aim to cooperatively minimize the sum of all locally known convex cost functions (the global cost function) while keeping their local cost functions well masked. To address such problems in a collaborative and distributed fashion, a differentially private distributed stochastic subgradient-push algorithm, called DP-DSSP, is proposed, which lets units interact with in-neighbors and collectively optimize the global cost function. Unlike most existing distributed algorithms, which do not consider privacy, DP-DSSP uses a differential privacy strategy to mask the privacy of participating units, which is more practical in applications involving sensitive information, such as military affairs or medical treatment. An important feature of DP-DSSP is its ability to handle distributed online optimization over time-varying unbalanced directed networks. Theoretical analysis shows that DP-DSSP effectively preserves differential privacy and achieves sublinear regret, and a trade-off between privacy level and accuracy is revealed. Furthermore, DP-DSSP can handle arbitrarily large but uniformly bounded communication delays. Finally, simulation experiments confirm the practicality of DP-DSSP and the findings of this article.
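A loose conceptual toy of the two main ingredients, noise-masked information sharing and diminishing-step subgradient updates over a directed ring; the push-sum weighting for unbalanced digraphs, the online setting, and the formal privacy accounting are all omitted, and every constant below is an assumption.

```python
# Conceptual toy: noisy consensus plus diminishing-step subgradient updates
# on a directed ring (not DP-DSSP itself; push-sum weighting and the formal
# differential-privacy calibration are omitted).
import numpy as np

rng = np.random.default_rng(0)
N = 5
a = np.array([1.0, 3.0, 4.0, 6.0, 11.0])   # f_i(x) = |x - a_i|
x = np.zeros(N)
W = 0.5 * (np.eye(N) + np.roll(np.eye(N), 1, axis=1))   # row-stochastic ring mixing

for k in range(1, 5001):
    shared = x + rng.laplace(0.0, 0.2, N)  # each unit shares a Laplace-masked state
    theta, alpha = k ** -0.6, k ** -0.8    # diminishing mixing and step sizes
    x = x + theta * (W @ shared - x) - alpha * np.sign(x - a)

print(np.round(x, 2))                      # near the median of a (= 4), up to noise
```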
14
Jiang X, Qin S, Xue X. A penalty-like neurodynamic approach to constrained nonsmooth distributed convex optimization. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.050]