1. Chen D, Wang H, Hu D, Xian Q, Wu B. Q-learning improved golden jackal optimization algorithm and its application to reliability optimization of hydraulic system. Sci Rep 2024;14:24587. [PMID: 39426995] [PMCID: PMC11490539] [DOI: 10.1038/s41598-024-75374-5]
Abstract
To endow the prey with intelligent movement behavior and improve the performance of Golden Jackal Optimization (GJO), a Q-learning Improved Golden Jackal Optimization (QIGJO) algorithm is proposed. This paper introduces five update mechanisms and proposes a double-population Q-learning collaborative mechanism to select the appropriate update mechanism at each step. Additionally, a new convergence factor is incorporated to enhance the convergence capability of GJO. QIGJO demonstrates excellent performance across 23 benchmark functions, the CEC2022 test suite, and three classical engineering design problems, indicating high convergence accuracy and significantly enhanced global exploration capability. A reliability optimization model of the hydraulic system of concrete pump trucks was established based on a Continuous-time Multi-dimensional T-S dynamic Fault Tree (CM-TSdFT), considering the two-dimensional factors of operating time and number of impacts. Applying QIGJO to this model yielded excellent results, providing valuable methodological support for the reliability optimization of hydraulic systems.
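The selector at the heart of QIGJO is easy to sketch. The following is a minimal, illustrative Q-learning mechanism selector, assuming a single-state Q-table over the five update mechanisms and an epsilon-greedy policy; the constants and names are ours, and the paper's double-population collaborative scheme is richer than this.

```python
import random

# Minimal, illustrative Q-learning mechanism selector (a sketch; the paper's
# double-population collaborative scheme is more elaborate than this).
N_MECHANISMS = 5                   # the paper introduces five update mechanisms
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # assumed learning rate, discount, exploration

q = [0.0] * N_MECHANISMS           # single-state Q-table: Q(mechanism)

def select_mechanism():
    """Epsilon-greedy choice among the candidate update mechanisms."""
    if random.random() < EPS:
        return random.randrange(N_MECHANISMS)
    return max(range(N_MECHANISMS), key=lambda a: q[a])

def update_q(action, reward):
    """Standard Q-learning update; the reward could be the fitness gain
    obtained after applying the chosen update mechanism."""
    q[action] += ALPHA * (reward + GAMMA * max(q) - q[action])

# Inside the optimizer's main loop one would call, per iteration:
#   a = select_mechanism(); apply mechanism a; update_q(a, fitness_improvement)
```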
Affiliation(s)
- Dongning Chen
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China.
- Haowen Wang
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Dongbo Hu
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Qinggui Xian
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Bingyu Wu
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
2. Yuan X, Wang Y, Liu J, Sun C. Action Mapping: A Reinforcement Learning Method for Constrained-Input Systems. IEEE Trans Neural Netw Learn Syst 2023;34:7145-7157. [PMID: 35025751] [DOI: 10.1109/tnnls.2021.3138924]
Abstract
Existing approaches to constrained-input optimal control problems mainly focus on systems with input saturation, whereas other constraints, such as combined inequality constraints and state-dependent constraints, are seldom discussed. In this article, a reinforcement learning (RL)-based algorithm is developed for constrained-input optimal control of discrete-time (DT) systems. The deterministic policy gradient (DPG) is introduced to iteratively search for the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. To deal with input constraints, an action mapping (AM) mechanism is proposed. This mechanism transforms the exploration space from the subspace generated by the given inequality constraints to the standard Cartesian product space, which can be searched effectively by existing algorithms. With the proposed architecture, the learned policy outputs control signals satisfying the given constraints, and the original reward function can be kept unchanged. A convergence analysis is given, showing that the iterative algorithm converges to the optimal solution of the HJB equation. In addition, the continuity of the iteratively estimated Q-function is investigated. Two numerical examples are provided to demonstrate the effectiveness of the approach.
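One simple way such an action mapping can be realized is radial scaling between the standard box and a star-shaped constraint set. The sketch below assumes this construction and a toy l1-ball constraint; it is an illustration of the transformation idea, not the article's learned AM mechanism.

```python
import numpy as np

def action_map(v, set_radius, eps=1e-12):
    """Map v from the box [-1,1]^n onto a star-shaped constraint set by
    radial scaling: the fraction of the way v sits toward the box boundary
    is reproduced toward the constraint-set boundary along the same ray.
    `set_radius(d)` must return the largest s with s*d inside the set,
    for a unit direction d (an assumed interface, for illustration)."""
    norm = np.linalg.norm(v)
    if norm < eps:
        return np.zeros_like(v)
    d = v / norm
    s_box = 1.0 / np.max(np.abs(d))   # distance to the box boundary along d
    frac = norm / s_box               # in [0, 1] for v inside the box
    return frac * set_radius(d) * d

# Toy constraint set {u : |u1| + |u2| <= 1} (assumed, for demonstration):
l1_ball = lambda d: 1.0 / np.sum(np.abs(d))
u = action_map(np.array([0.8, -0.4]), l1_ball)   # u satisfies the constraint
```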
3. Talebi F, Nazemi A, Ataabadi AA. Mean-AVaR in credibilistic portfolio management via an artificial neural network scheme. J Exp Theor Artif Intell 2022. [DOI: 10.1080/0952813x.2022.2153271]
Affiliation(s)
- Fatemeh Talebi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Alireza Nazemi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Abdolmajid Abdolbaghi Ataabadi
- Department of Management, Faculty of Industrial Engineering and Management, Shahrood University of Technology, Shahrood, Iran
4. Sun M, Zhang Y, Wu Y, He X. On a Finitely Activated Terminal RNN Approach to Time-Variant Problem Solving. IEEE Trans Neural Netw Learn Syst 2022;33:7289-7302. [PMID: 34106866] [DOI: 10.1109/tnnls.2021.3084740]
Abstract
This article concerns terminal recurrent neural network (RNN) models for time-variant computing, featuring finite-valued activation functions (AFs) and finite-time convergence of the error variables. Terminal RNNs are models that admit terminal attractors, so the dynamics of each neuron retain finite-time convergence. By theoretically examining asymptotically convergent RNNs, a possible imperfection in solving time-variant problems is pointed out, for which finite-time-convergent models are most desirable. The existing AFs are surveyed, revealing a lack of AFs that take only finite values. A finitely valued terminal RNN is then considered, which involves only basic algebraic operations and the taking of roots. The proposed model is used to solve time-variant problems, including time-variant quadratic programming and motion planning of redundant manipulators. Numerical results demonstrate the effectiveness of the proposed neural network, whose convergence rate is comparable with that of the existing power-rate RNN.
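The finite-time property that distinguishes terminal RNNs from asymptotically convergent ones can be seen on a toy error dynamic. The sketch below assumes the textbook form de/dt = -k|e|^p sign(e) with 0 < p < 1, not the article's model.

```python
import numpy as np

# Toy illustration of the finite-time idea behind terminal attractors (an
# assumed textbook form, not the article's model): the error dynamic
#     de/dt = -k * |e|^p * sign(e),  0 < p < 1,
# reaches e = 0 in finite time t* = |e0|^(1-p) / (k*(1-p)), whereas p = 1
# (a linear law) only converges asymptotically.
k, p, dt = 2.0, 0.5, 1e-4
e, t = 1.0, 0.0
while abs(e) > 1e-6:              # coarse tolerance keeps forward Euler stable
    e -= dt * k * abs(e) ** p * np.sign(e)
    t += dt
print(f"numerical settling time {t:.3f} vs theoretical {1.0 / (k * (1 - p)):.3f}")
```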
5. Feizi A, Nazemi A. Classifying random variables based on support vector machine and a neural network scheme. J Exp Theor Artif Intell 2022. [DOI: 10.1080/0952813x.2022.2104385]
Affiliation(s)
- Amir Feizi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Alireza Nazemi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
6. Yuan X, Dong L, Sun C. Solver-Critic: A Reinforcement Learning Method for Discrete-Time-Constrained-Input Systems. IEEE Trans Cybern 2021;51:5619-5630. [PMID: 32203048] [DOI: 10.1109/tcyb.2020.2978088]
Abstract
In this article, a solver-critic (SC) architecture is developed for optimal control problems of discrete-time (DT) constrained-input systems. The proposed design consists of three parts: 1) a critic network; 2) an action solver; and 3) a target network. The critic network approximates the action-value function using a sum-of-squares (SOS) polynomial. The action solver then adopts SOS programming to obtain control inputs within the constraint set. The target network introduces a soft update mechanism into policy evaluation to stabilize the learning process. With the proposed architecture, the constrained-input control problem can be solved without adding nonquadratic functionals to the reward function. A theoretical analysis of the convergence property is presented, and the effects of different initial Q-functions and different discount factors are investigated. It is proven that the learned policy converges to the optimal solution of the Hamilton-Jacobi-Bellman equation. Four numerical examples are provided to validate the theoretical analysis and demonstrate the effectiveness of the approach.
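The soft target-network update mentioned here has a standard generic form; the sketch below shows it on a toy parameter vector. The blending rate tau and the flat parameter layout are assumptions, not the article's settings.

```python
# Generic soft target-network update used to stabilize policy evaluation:
#     theta_target <- tau * theta + (1 - tau) * theta_target,   0 < tau << 1
def soft_update(target_params, source_params, tau=0.01):
    """Blend learned parameters into the target copy a little at a time."""
    return [tau * s + (1.0 - tau) * t for t, s in zip(target_params, source_params)]

critic = [0.8, -0.3, 1.2]     # toy parameter vector of the learned critic
target = [0.0, 0.0, 0.0]      # its slowly tracking target copy
for _ in range(100):
    target = soft_update(target, critic)   # target drifts toward the critic
```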
7.
Affiliation(s)
- Sang Jun Moon
- Department of Statistics, University of Seoul, Seoul, South Korea
- Jong-June Jeon
- Department of Statistics, University of Seoul, Seoul, South Korea
- Yongdai Kim
- Department of Statistics, Seoul National University, Seoul, South Korea
8. Mohammadi S, Nazemi A. On portfolio management with value at risk and uncertain returns via an artificial neural network scheme. Cogn Syst Res 2020. [DOI: 10.1016/j.cogsys.2019.09.024]
9. Nazemi A, Sabeghi A. A new neural network framework for solving convex second-order cone constrained variational inequality problems with an application in multi-finger robot hands. J Exp Theor Artif Intell 2019. [DOI: 10.1080/0952813x.2019.1647559]
Affiliation(s)
- Alireza Nazemi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Atiye Sabeghi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
10. A new collaborate neuro-dynamic framework for solving convex second order cone programming problems with an application in multi-fingered robotic hands. Appl Intell 2019. [DOI: 10.1007/s10489-019-01462-z]
11. A new gradient-based neural dynamic framework for solving constrained min-max optimization problems with an application in portfolio selection models. Appl Intell 2019. [DOI: 10.1007/s10489-018-1268-1]
13. Feizi A, Nazemi A. An application of a practical neural network model for solving support vector regression problems. Intell Data Anal 2017. [DOI: 10.3233/ida-163145]
14. Nazemi A. A Capable Neural Network Framework for Solving Degenerate Quadratic Optimization Problems with an Application in Image Fusion. Neural Process Lett 2017. [DOI: 10.1007/s11063-017-9640-4]
15. Ebadi M, Hosseini A, Hosseini M. A projection type steepest descent neural network for solving a class of nonsmooth optimization problems. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.010]
16. Che H, Li C, He X, Huang T. A recurrent neural network for adaptive beamforming and array correction. Neural Netw 2016;80:110-7. [DOI: 10.1016/j.neunet.2016.04.010]
17. Miao P, Shen Y, Li Y, Bao L. Finite-time recurrent neural networks for solving nonlinear optimization problems and their application. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.11.014]
18. A Gradient-Based Neural Network Method for Solving Strictly Convex Quadratic Programming Problems. Cognit Comput 2014. [DOI: 10.1007/s12559-014-9249-0]
19. Liu Q, Guo Z, Wang J. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw 2012;26:99-109. [DOI: 10.1016/j.neunet.2011.09.001]
21. Zhang Y, Xu S, Zeng Z. Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2009.01.014]
22. Barbarosou M, Maratos N. A Nonfeasible Gradient Projection Recurrent Neural Network for Equality-Constrained Optimization Problems. IEEE Trans Neural Netw 2008;19:1665-77. [DOI: 10.1109/tnn.2008.2000993]
23. Xia Y, Feng G, Wang J. A Novel Recurrent Neural Network for Solving Nonlinear Optimization Problems With Inequality Constraints. IEEE Trans Neural Netw 2008;19:1340-53. [DOI: 10.1109/tnn.2008.2000273]
24.
Abstract
Continuous-time neural networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed and analyzed; the proposed networks are thus nonderivative optimizers. First, networks for optimizing objective functions of one variable are discussed. Then, an existing one-dimensional optimizer is analyzed, and a new line-search optimizer is proposed. It is shown that the proposed optimizer network is robust in the sense that it has a disturbance-rejection property. The network can be implemented easily in hardware using standard circuit elements. The one-dimensional net is used as a building block in multidimensional networks for optimizing objective functions of several variables. The multidimensional nets implement a continuous version of the coordinate descent method.
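A discrete-time stand-in for the coordinate-descent behavior described above, using only function evaluations and no gradients; this is an illustrative sketch, not the continuous-time circuit the paper proposes.

```python
import numpy as np

def coordinate_descent_nonderivative(f, x, step=0.5, shrink=0.5, iters=200):
    """Sketch of the nonderivative coordinate-descent idea: probe each
    coordinate in both directions using only function values, shrinking the
    probe size when no coordinate move improves the objective."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x, improved = trial, True
                    break
        if not improved:
            step *= shrink          # refine the line-search resolution
            if step < 1e-8:
                break
    return x

# Example: minimize a smooth convex function without any gradient information.
x_star = coordinate_descent_nonderivative(
    lambda z: (z[0] - 1) ** 2 + 2 * (z[1] + 2) ** 2, x=[0.0, 0.0])
```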
Affiliation(s)
- M M Teixeira
- Department of Electrical Engineering, FEIS/UNESP, 15385-000-Ilha Solteira-SP, Brazil
25. Xia Y, Wang J. A Recurrent Neural Network for Nonlinear Convex Optimization Subject to Nonlinear Inequality Constraints. IEEE Trans Circuits Syst I 2004. [DOI: 10.1109/tcsi.2004.830694]
26.
Abstract
Recently, a projection neural network has been shown to be a promising computational model for solving variational inequality problems with box constraints. This letter presents an extended projection neural network for solving monotone variational inequality problems with linear and nonlinear constraints. In particular, the proposed neural network can include the projection neural network as a special case. Compared with the modified projection-type methods for solving constrained monotone variational inequality problems, the proposed neural network has a lower complexity and is suitable for parallel implementation. Furthermore, the proposed neural network is theoretically proven to be exponentially convergent to an exact solution without a Lipschitz condition. Illustrative examples show that the extended projection neural network can be used to solve constrained monotone variational inequality problems.
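Networks of this family are typically written as projection dynamics. The sketch below Euler-integrates the generic model dx/dt = P_Omega(x - alpha*F(x)) - x on a toy box-constrained monotone problem (all data assumed); it omits the letter's extended handling of linear and nonlinear constraints.

```python
import numpy as np

# Euler-discretized projection dynamics (a generic sketch of this family of
# models, not the letter's exact network):
#     dx/dt = P_Omega(x - alpha * F(x)) - x
def project_box(x, lo, hi):
    """Projection onto the box constraint set Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projection_network(F, x0, lo, hi, alpha=0.1, dt=0.01, steps=20000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# Toy monotone map F(x) = M x + q with M positive definite (assumed data);
# the equilibrium solves the variational inequality VI(F, [0,1]^2).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
x_eq = projection_network(lambda x: M @ x + q, [0.0, 0.0], lo=0.0, hi=1.0)
```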
Affiliation(s)
- Youshen Xia
- Department of Applied Mathematics, Nanjing University of Posts and Telecommunications, China
27. Xia Y, Wang J. A General Projection Neural Network for Solving Monotone Variational Inequalities and Related Optimization Problems. IEEE Trans Neural Netw 2004;15:318-28. [PMID: 15384525] [DOI: 10.1109/tnn.2004.824252]
Abstract
Recently, a projection neural network for solving monotone variational inequalities and constrained optimization problems was developed. In this paper, we propose a general projection neural network for solving a wider class of variational inequalities and related optimization problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.
Affiliation(s)
- Youshen Xia
- Department of Applied Mathematics, Nanjing University of Posts and Telecommunications, Nanjing, China.
28. Heszberger Z, Bíró J. An optimization neural network model with time-dependent and lossy dynamics. Neurocomputing 2002. [DOI: 10.1016/s0925-2312(01)00656-7]
29. Mladenov V, Mastorakis N. Design of two-dimensional recursive filters by using neural networks. IEEE Trans Neural Netw 2001;12:585-90. [DOI: 10.1109/72.925560]
30. Leung Y, Chen KZ, Jiao YC, Gao XB, Leung KS. A new gradient-based neural network for solving linear and quadratic programming problems. IEEE Trans Neural Netw 2001;12:1074-83. [DOI: 10.1109/72.950137]
33. Chan HY, Zak SH. Real-time synthesis of sparsely interconnected neural associative memories. Neural Netw 1998;11:749-759. [PMID: 12662813] [DOI: 10.1016/s0893-6080(98)00015-x]
Abstract
The problem of implementing associative memories using a sparsely interconnected generalized Brain-State-in-a-Box (gBSB) network is addressed in this paper. In particular, a "designer" neural network that synthesizes the associative memories is proposed. An upper bound on the time required for the designer network to reach a solution is determined. A neighborhood criterion with toroidal geometry for the cellular gBSB network is analyzed, in which the number of adjacent cells is independent of the generic cell location. A design method for neural associative memories with prespecified interconnecting weights is presented. The effectiveness of the proposed synthesis method is demonstrated with numerical examples.
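The underlying Brain-State-in-a-Box recursion is compact. The sketch below shows the basic clipped-linear iteration on assumed toy data, leaving out the generalized/cellular sparsity structure the paper designs.

```python
import numpy as np

# Sketch of the basic Brain-State-in-a-Box recursion underlying gBSB models
# (the paper's generalized, sparsely interconnected variant is omitted):
#     x(k+1) = sat( x(k) + alpha * W x(k) ),  sat clips to the hypercube [-1,1]^n
def bsb_recall(W, x0, alpha=0.2, steps=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x + alpha * (W @ x), -1.0, 1.0)
    return x

# Toy weight matrix storing the pattern [1, -1] as an attractor (assumed data):
W = np.array([[0.5, -0.5], [-0.5, 0.5]])
print(bsb_recall(W, [0.4, -0.1]))   # the state converges to the stored vertex
```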
Affiliation(s)
- Hubert Y. Chan
- School of Electrical and Computer Engineering, Box 540, Purdue University, West Lafayette, USA
34. Xia Y, Wang J. A general methodology for designing globally convergent optimization neural networks. IEEE Trans Neural Netw 1998;9:1331-43. [DOI: 10.1109/72.728383]
36. Chan HY, Zak SH. On neural networks that design neural associative memories. IEEE Trans Neural Netw 1997;8:360-372. [PMID: 18255639] [DOI: 10.1109/72.557674]
Abstract
The design problem of generalized brain-state-in-a-box (GBSB) type associative memories is formulated as a constrained optimization program, and "designer" neural networks for solving the program in real time are proposed. The stability of the designer networks is analyzed using Barbalat's lemma. The analyzed and synthesized neural associative memories do not require symmetric weight matrices. Two types of the GBSB-based associative memories are analyzed: one in which the network trajectories are constrained to reside in the hypercube [-1, 1]^n, and the other in which the trajectories are confined to the hypercube [0, 1]^n. Numerical examples and simulations are presented to illustrate the results obtained.
Affiliation(s)
- H Y Chan
- Sch. of Electr. and Comput. Eng., Purdue Univ., West Lafayette, IN
37. Bhaya A, Kaszkurewicz E, Kozyakin VS. Existence and stability of a unique equilibrium in continuous-valued discrete-time asynchronous Hopfield neural networks. IEEE Trans Neural Netw 1996;7:620-628. [PMID: 18263459] [DOI: 10.1109/72.501720]
Abstract
It is shown that the assumption of D-stability of the interconnection matrix, together with the standard assumptions on the activation functions, guarantee the existence of a unique equilibrium under a synchronous mode of operation as well as a class of asynchronous modes. For the synchronous mode, these assumptions are also shown to imply local asymptotic stability of the equilibrium. For the asynchronous mode of operation, two results are derived. First, it is shown that symmetry and stability of the interconnection matrix guarantee local asymptotic stability of the equilibrium under a class of asynchronous modes-this is referred to as local absolute asymptotic stability. Second, it is shown that, under the standard assumptions, if the nonnegative matrix whose elements are the absolute values of the corresponding elements of the interconnection matrix is stable, then the equilibrium is globally absolutely asymptotically stable under a class of asynchronous modes. The results obtained are discussed from the points of view of their applications, robustness, and their relationship to earlier results.
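Our reading of the second result suggests a simple numerical check: the sketch below tests whether the matrix of absolute values of the interconnection weights is stable in the spectral-radius sense. Note the paper's precise condition also rests on the standard assumptions on the activation functions, which this sketch does not verify.

```python
import numpy as np

def absolute_stability_check(W):
    """Check whether |W| (entrywise absolute values) is stable for a
    discrete-time update, i.e. its spectral radius is below 1 -- the
    sufficient condition (as we read it) for global absolute asymptotic
    stability under the considered asynchronous modes."""
    rho = max(abs(np.linalg.eigvals(np.abs(W))))
    return rho < 1.0, rho

W = np.array([[0.3, -0.4], [0.2, 0.1]])   # assumed toy interconnection matrix
ok, rho = absolute_stability_check(W)
print(f"spectral radius of |W| = {rho:.3f} -> {'stable' if ok else 'inconclusive'}")
```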
Affiliation(s)
- A Bhaya
- Dept. of Electr. Eng., Univ. Federal do Rio de Janeiro
38. Perfetti R. Optimization neural network for solving flow problems. IEEE Trans Neural Netw 1995;6:1287-1291. [PMID: 18263420] [DOI: 10.1109/72.410376]
Abstract
This paper describes a neural network for solving flow problems, which are of interest in many areas of application, such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph, and the output units represent the branch variables. The network has a linear order of complexity, is easily programmable, and is suited for analog very-large-scale-integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem.
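The network itself is not specified here in reproducible detail. As a point of reference, the sketch below states the maximal-flow problem it targets as a linear program over assumed toy data, solved with scipy rather than the neural circuit.

```python
import numpy as np
from scipy.optimize import linprog

# Maximal flow as a linear program (assumed toy graph, not from the paper).
# Nodes: 0 = source, 1 and 2 = interior, 3 = sink.  Edges with capacities:
edges = [(0, 1, 3.0), (0, 2, 2.0), (1, 2, 1.0), (1, 3, 2.0), (2, 3, 3.0)]
n_edges = len(edges)

# Flow conservation at the interior nodes 1 and 2:  inflow - outflow = 0.
A_eq = np.zeros((2, n_edges))
for j, (u, v, _) in enumerate(edges):
    for row, node in enumerate((1, 2)):
        if v == node:
            A_eq[row, j] += 1.0
        if u == node:
            A_eq[row, j] -= 1.0
b_eq = np.zeros(2)

# Maximize the total flow leaving the source (edges 0 and 1 leave node 0);
# linprog minimizes, so negate the objective.
c = np.zeros(n_edges)
c[0] = c[1] = -1.0
bounds = [(0.0, cap) for _, _, cap in edges]   # branch variables within capacity
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("max flow:", -res.fun)                   # 5.0 for this toy graph
```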