1. Liu N, Jia W, Qin S. A smooth gradient approximation neural network for general constrained nonsmooth nonconvex optimization problems. Neural Netw 2025; 184:107121. PMID: 39798354. DOI: 10.1016/j.neunet.2024.107121.
Abstract
Nonsmooth nonconvex optimization problems are pivotal in engineering practice due to the inherent nonsmooth and nonconvex characteristics of many real-world complex systems and models. The nonsmoothness and nonconvexity of the objective and constraint functions pose great challenges to the design and convergence analysis of optimization algorithms. This paper presents a smooth gradient approximation neural network for such optimization problems, in which a smooth approximation technique with a time-varying control parameter is introduced to handle nonsmooth, nonregular objective functions. In addition, a hard comparator function is introduced to ensure that the state solution of the proposed neural network remains within the nonconvex inequality constraint sets. Any accumulation point of the state solution of the proposed neural network is proved to be a stationary point of the nonconvex optimization problem under consideration. Furthermore, the neural network is able to find optimal solutions of some generalized convex optimization problems. Compared with related neural networks, the constructed network has weaker convergence conditions and a simpler algorithm structure. Simulation results and an application to condition-number optimization verify the practical applicability of the presented algorithm.
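As a point of orientation for the smoothing idea described above: a common smooth approximation replaces |x| by sqrt(x^2 + mu^2) with a control parameter mu(t) that decays over time. The sketch below applies this to a toy one-dimensional objective with a forward-Euler gradient flow; the objective, the schedule mu = 1/(1+k), and the step size are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def smooth_abs_grad(x, mu):
    # Gradient of the smoothed |x| ~ sqrt(x^2 + mu^2);
    # well defined even at the kink x = 0, and -> sign(x) as mu -> 0.
    return x / np.sqrt(x ** 2 + mu ** 2)

# Forward-Euler gradient flow on the toy nonsmooth objective
# f(x) = |x - 1| + 0.5 * x^2, whose minimizer is x = 1.
x, dt = 5.0, 1e-2
for k in range(2000):
    mu = 1.0 / (1.0 + k)        # time-varying smoothing control parameter
    x -= dt * (smooth_abs_grad(x - 1.0, mu) + x)

print(x)  # close to the minimizer 1.0
```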
Affiliations
- Na Liu: School of Mathematical Sciences, Tianjin Normal University, Tianjin, China; Institute of Mathematics and Interdisciplinary Sciences, Tianjin Normal University, Tianjin, China.
- Wenwen Jia: Department of Mathematics, Southeast University, Nanjing, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
2. Xia Y, Ye T, Huang L. Analysis and Application of Matrix-Form Neural Networks for Fast Matrix-Variable Convex Optimization. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:2259-2273. PMID: 38157471. DOI: 10.1109/tnnls.2023.3340730.
Abstract
Matrix-variable optimization generalizes vector-variable optimization and has many important applications. To reduce computation time and storage requirements, this article presents two matrix-form recurrent neural networks (RNNs), one continuous-time and one discrete-time, for solving matrix-variable optimization problems with linear constraints. The two proposed matrix-form RNNs have low complexity and are suitable for parallel implementation in terms of the matrix state space. The proposed continuous-time matrix-form RNN significantly generalizes existing continuous-time vector-form RNNs. The proposed discrete-time matrix-form RNN can be effectively used in blind image restoration, where the storage requirement and computational cost are largely reduced. Theoretically, the two proposed matrix-form RNNs are guaranteed to be globally convergent to the optimal solution under mild conditions. Computed results show that the proposed matrix-form RNN-based algorithm is superior to related vector-form and matrix-form RNN-based algorithms in terms of computation time.
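The matrix state space referred to above keeps the decision variable as a matrix throughout, instead of vectorizing it. Purely as an illustration (toy objective and classic projection dynamics assumed, not the authors' exact models), here is an Euler discretization of dX/dt = P(X - grad f(X)) - X:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))     # data for the toy objective

def grad_f(X):
    # Gradient of f(X) = 0.5 * ||X - C||_F^2.
    return X - C

def project(X):
    # Projection onto the feasible set {X : X >= 0}, applied entrywise.
    return np.maximum(X, 0.0)

# Euler discretization of the projection dynamics
#   dX/dt = P(X - grad_f(X)) - X,
# with the state kept in matrix form throughout (no vectorization).
X, dt = np.zeros((4, 4)), 0.2
for _ in range(500):
    X += dt * (project(X - grad_f(X)) - X)

print(np.allclose(X, np.maximum(C, 0.0)))  # True: X converges to P(C)
```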
3. Zhang M, He X. A continuous-time neurodynamic approach in matrix form for rank minimization. Neural Netw 2024; 172:106128. PMID: 38242008. DOI: 10.1016/j.neunet.2024.106128.
Abstract
This article proposes a continuous-time neurodynamic approach for solving rank minimization under affine constraints. As opposed to the traditional neurodynamic approach, the proposed approach extends the variables from vector form to matrix form. First, a continuous-time neurodynamic approach with variables in matrix form is developed by combining the optimal rank-r projection with the gradient. Then, the optimality of the proposed approach is rigorously analyzed by demonstrating that the objective function satisfies the functional property called (2r,4r)-restricted strong convexity and smoothness ((2r,4r)-RSCS). Furthermore, the convergence and stability of the proposed approach are rigorously analyzed by establishing appropriate Lyapunov functions and considering the relevant restricted isometry property (RIP) condition associated with the affine transformation. Finally, experiments on low-rank matrix recovery under affine transformations and on low-rank real-image completion demonstrate the effectiveness of this approach, along with its superiority over the vector-based approach.
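The "optimal rank-r projection combined with the gradient" has a natural discrete-time analogue in singular value projection; the sketch below is that analogue on a toy matrix-completion instance, not the paper's continuous-time system (the step size and sampling rate are illustrative assumptions).

```python
import numpy as np

def rank_r_projection(X, r):
    # Best rank-r approximation of X via truncated SVD (Eckart-Young).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 truth
mask = rng.random(M.shape) < 0.6          # observed entries (affine sampling)

# Gradient step on the sampled residual, then the optimal rank-r projection.
X, eta = np.zeros_like(M), 1.0 / 0.6      # step ~ 1 / sampling rate
for _ in range(300):
    X = rank_r_projection(X - eta * mask * (X - M), r=3)

print(np.linalg.norm(X - M) / np.linalg.norm(M))  # small if recovery succeeds
```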
Affiliations
- Meng Zhang: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, 400715, Chongqing, China.
- Xing He: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, 400715, Chongqing, China.
4. Ma Y, Dai Y. Stability and Hopf bifurcation analysis of a fractional-order ring-hub structure neural network with delays under parameters delay feedback control. Mathematical Biosciences and Engineering 2023; 20:20093-20115. PMID: 38052638. DOI: 10.3934/mbe.2023890.
Abstract
In this paper, a fractional-order neural network with two delays and a ring-hub structure is investigated. First, the stability and the existence of a Hopf bifurcation of the proposed system are obtained by taking the sum of the two delays as the bifurcation parameter. Furthermore, a parameters delay feedback controller is introduced to successfully control the Hopf bifurcation. The novelty of this paper is that the characteristic equation corresponding to the system has two time delays and the parameters depend on one of them. Selecting the two time delays as bifurcation parameters simultaneously, stability switching curves in the $ (\tau_{1}, \tau_{2}) $ plane and the crossing directions are obtained. Sufficient criteria for the stability and the existence of a Hopf bifurcation of the controlled system are given. Finally, numerical simulations show that the parameters delay feedback controller can effectively control the Hopf bifurcation of the system.
Affiliations
- Yuan Ma: Department of System Science and Applied Mathematics, Kunming University of Science and Technology, Kunming 650500, China
- Yunxian Dai: Department of System Science and Applied Mathematics, Kunming University of Science and Technology, Kunming 650500, China
5. Xia Y, Wang J, Lu Z, Huang L. Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:6173-6185. PMID: 34986103. DOI: 10.1109/tnnls.2021.3133836.
Abstract
Because of the robustness and sparsity performance of least absolute deviation (LAD, or l1) optimization, developing effective solution methods has become an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l1-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l1-norm optimization problems with linear equality and inequality constraints. The RNNs are theoretically proven to be globally convergent to optimal solutions without any condition. With reduced model complexity, the two RNNs can significantly expedite constrained l1-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l1-norm optimization.
6. Chen J, Xiao M, Wan Y, Huang C, Xu F. Dynamical Bifurcation for a Class of Large-Scale Fractional Delayed Neural Networks With Complex Ring-Hub Structure and Hybrid Coupling. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:2659-2669. PMID: 34495847. DOI: 10.1109/tnnls.2021.3107330.
Abstract
Real neural networks are characterized by large scale and complex topology. However, current dynamical analysis is limited to low-dimensional models with simplified topology, so there is still a huge gap between neural network theory and its applications. This article proposes a class of large-scale neural networks with a ring-hub structure, where a hub node is connected to n peripheral nodes and these peripheral nodes are linked by a ring. In particular, there exists a hybrid coupling mode in the network topology. The mathematical model of such systems is described by fractional-order delayed differential equations. The aim of this article is to investigate the local stability and Hopf bifurcation of this high-dimensional neural network. First, the Coates flow graph is employed to obtain the characteristic equation of the linearized high-dimensional neural network model, which is a transcendental equation including multiple exponential terms. Then, sufficient conditions ensuring the stability of the equilibrium and the existence of a Hopf bifurcation are obtained by taking the time delay as a bifurcation parameter. Finally, some numerical examples are given to support the theoretical results. It is revealed that increasing the time delay can effectively induce the occurrence of periodic oscillation. Moreover, the fractional order, the self-feedback coefficient, and the number of neurons also affect the onset of the Hopf bifurcation.
7. Liu J, Liao X. A Projection Neural Network to Nonsmooth Constrained Pseudoconvex Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:2001-2015. PMID: 34464277. DOI: 10.1109/tnnls.2021.3105732.
Abstract
In this article, a single-layer projection neural network based on a penalty function and differential inclusion is proposed to solve nonsmooth pseudoconvex optimization problems with linear equality and convex inequality constraints; bound constraints in the inequality set, such as box and sphere types, are handled by a projection operator. By introducing a Tikhonov-like regularization method, the proposed neural network no longer needs to calculate exact penalty parameters. Under mild assumptions, it is proved by nonsmooth analysis that the state solution of the proposed neural network exists globally and is always bounded, enters the constrained feasible region in finite time, and never escapes from this region again. Finally, the state solution converges to an optimal solution of the considered optimization problem. Compared with some existing neural networks based on subgradients, this algorithm eliminates the dependence on the selection of the initial point, and the model has a simple structure and low computational load. Three numerical experiments and two application examples illustrate the global convergence and effectiveness of the proposed neural network.
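The box- and sphere-type projections mentioned above have simple closed forms, which is what makes handling bound constraints by a projection operator cheap. For reference, the standard formulas (not code from the paper):

```python
import numpy as np

def project_box(x, lo, hi):
    # Projection onto the box {x : lo <= x <= hi}: clip each coordinate.
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    # Projection onto the Euclidean ball {x : ||x - c|| <= r}:
    # points outside are pulled radially back to the surface.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

x = np.array([2.0, -3.0, 0.5])
print(project_box(x, -1.0, 1.0))          # [ 1.  -1.   0.5]
print(project_ball(x, np.zeros(3), 1.0))  # x / ||x||, since ||x|| > 1
```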
8. Liu J, Liao X, Dong JS, Mansoori A. A neurodynamic approach for nonsmooth optimal power consumption of intelligent and connected vehicles. Neural Netw 2023; 161:693-707. PMID: 36848825. DOI: 10.1016/j.neunet.2023.02.011.
Abstract
This paper investigates a class of power consumption minimization and equalization problems for a cooperative system of intelligent and connected vehicles. Accordingly, a distributed optimization model relating the power consumption and data rate of intelligent and connected vehicles is presented, where the power consumption cost function of each vehicle may be nonsmooth, and the corresponding control variable is subject to constraints generated by data acquisition, compression coding, transmission, and reception. We propose a distributed subgradient-based neurodynamic approach with a projection operator to achieve the optimal power consumption of intelligent and connected vehicles. By differential inclusion and nonsmooth analysis, it is confirmed that the state solution of the neurodynamic system converges to the optimal solution of the distributed optimization problem. With the help of the algorithm, all intelligent and connected vehicles asymptotically reach a consensus on an optimal power consumption. Simulation results show that the proposed neurodynamic approach effectively solves the optimal power consumption control problem for the cooperative system.
Affiliations
- Jingxin Liu: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Xiaofeng Liao: Key Laboratory of Dependable Services Computing in Cyber-Physical Society (Chongqing) Ministry of Education, College of Computer, Chongqing University, Chongqing 400044, China.
- Jin-Song Dong: School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Amin Mansoori: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran; International UNESCO Center for Health Related Basic Sciences and Human Nutrition, Mashhad University of Medical Sciences, Mashhad 9177948974, Iran.
9. Yang Y, Wu Y, Hou M, Luo J, Xie X. Solving Emden–Fowler Equations Using Improved Extreme Learning Machine Algorithm Based on Block Legendre Basis Neural Network. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11254-9.
10. Liu J, Liao X, Dong JS, Mansoori A. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization. Neural Netw 2023; 160:259-273. PMID: 36709530. DOI: 10.1016/j.neunet.2023.01.012.
Abstract
In this paper, a subgradient-based neurodynamic algorithm is presented to solve the nonsmooth nonconvex interval-valued optimization problem with both partial-order and linear equality constraints, where the interval-valued objective function is nonconvex and the interval-valued partial-order constraint functions are convex. The designed neurodynamic system is constructed by a differential inclusion with an upper semicontinuous right-hand side, whose computational load is reduced by avoiding penalty-parameter estimation and complex matrix inversion. Based on nonsmooth analysis and the extension theorem for solutions of differential inclusions, the global existence and boundedness of the state solution of the neurodynamic system are obtained, as well as its asymptotic convergence to the feasible region and to the set of LU-critical points of the interval-valued nonconvex optimization problem. Several numerical experiments and applications to emergency supplies distribution and to nondeterministic fractional continuous static games illustrate the applicability of the proposed neurodynamic algorithm.
Affiliations
- Jingxin Liu: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Xiaofeng Liao: Key Laboratory of Dependable Services Computing in Cyber-Physical Society (Chongqing) Ministry of Education, College of Computer, Chongqing University, Chongqing 400044, China.
- Jin-Song Dong: School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Amin Mansoori: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran; International UNESCO Center for Health Related Basic Sciences and Human Nutrition, Mashhad University of Medical Sciences, Mashhad 9177948974, Iran.
11. Mohammadi M, Atashin AA, Tamburri DA. From ℓ1 subgradient to projection: A compact neural network for ℓ1-regularized logistic regression. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.021.
12. Vural NM, Ilhan F, Yilmaz SF, Ergut S, Kozat SS. Achieving Online Regression Performance of LSTMs With Simple RNNs. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7632-7643. PMID: 34138720. DOI: 10.1109/tnnls.2021.3086029.
Abstract
Recurrent neural networks (RNNs) are widely used for online regression due to their ability to generalize nonlinear temporal dependencies. As an RNN model, long short-term memory networks (LSTMs) are commonly preferred in practice, as these networks are capable of learning long-term dependencies while avoiding the vanishing gradient problem. However, due to their large number of parameters, training LSTMs requires considerably longer training time compared to simple RNNs (SRNNs). In this article, we achieve the online regression performance of LSTMs with SRNNs efficiently. To this end, we introduce a first-order training algorithm with a linear time complexity in the number of parameters. We show that when SRNNs are trained with our algorithm, they provide very similar regression performance to LSTMs in two to three times shorter training time. We provide strong theoretical analysis to support our experimental results by providing regret bounds on the convergence rate of our algorithm. Through an extensive set of experiments, we verify our theoretical work and demonstrate significant performance improvements of our algorithm with respect to LSTMs and other state-of-the-art learning models.
13. He X, Wen H, Huang T. A Fixed-Time Projection Neural Network for Solving L₁-Minimization Problem. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7818-7828. PMID: 34166204. DOI: 10.1109/tnnls.2021.3088535.
Abstract
In this article, a new projection neural network (PNN) for solving the L1-minimization problem is proposed, based on the classic PNN and the sliding-mode control technique. The proposed network can be applied to sparse signal reconstruction and image reconstruction. First, a sign function is introduced into the PNN model to design a fixed-time PNN (FPNN). Then, under the condition that the projection matrix satisfies the restricted isometry property (RIP), the stability and fixed-time convergence of the proposed FPNN are proved by the Lyapunov method. Finally, experimental results on signal simulation and image reconstruction show the effectiveness and superiority of the proposed FPNN compared with existing PNNs.
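The FPNN itself is not reproduced here; as a simple baseline for the same L1-minimization task, below is a minimal iterative soft-thresholding (ISTA) sketch on a toy sparse-recovery instance. The problem sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: entrywise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 200)) / np.sqrt(50)   # sensing matrix
x_true = np.zeros(200)
x_true[[3, 70, 150]] = [1.5, -2.0, 1.0]            # 3-sparse signal
y = A @ x_true

# ISTA for  min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz const. of grad
x = np.zeros(200)
for _ in range(3000):
    x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)

print(np.round(x[[3, 70, 150]], 2))  # approximately [1.5, -2.0, 1.0]
```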
14. Sun M, Zhang Y, Wu Y, He X. On a Finitely Activated Terminal RNN Approach to Time-Variant Problem Solving. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7289-7302. PMID: 34106866. DOI: 10.1109/tnnls.2021.3084740.
Abstract
This article concerns terminal recurrent neural network (RNN) models for time-variant computing, featuring finite-valued activation functions (AFs) and finite-time convergence of the error variables. Terminal RNNs are specific models that admit terminal attractors, and the dynamics of each neuron retains finite-time convergence. By theoretically examining asymptotically convergent RNNs, a possible imperfection in solving time-variant problems is pointed out, for which finite-time-convergent models are most desirable. The existing AFs are summarized, and it is found that AFs taking only finite values are lacking. A finitely valued terminal RNN, among others, is considered, which involves only basic algebraic operations and taking roots. The proposed terminal RNN model is used to solve the time-variant problems undertaken, including time-variant quadratic programming and motion planning of redundant manipulators. Numerical results demonstrate the effectiveness of the proposed neural network, whose convergence rate is comparable with that of the existing power-rate RNN.
15. Chen G, Liu ZP. Inferring causal gene regulatory network via GreyNet: From dynamic grey association to causation. Front Bioeng Biotechnol 2022; 10:954610. PMID: 36237217. PMCID: PMC9551017. DOI: 10.3389/fbioe.2022.954610.
Abstract
Gene regulatory networks (GRNs) provide abundant information on gene interactions, which contributes to demonstrating pathology, predicting clinical outcomes, and identifying drug targets. Existing high-throughput experiments provide rich time-series gene expression data for reconstructing GRNs to further gain insights into the mechanisms by which organisms respond to external stimuli. Numerous machine-learning methods have been proposed to infer gene regulatory networks. Nevertheless, machine learning, especially deep learning, is generally a “black box” that lacks interpretability. Causality has not been well recognized in GRN inference procedures. In this article, we introduce grey theory integrated with an adaptive sliding-window technique to flexibly capture instant gene–gene interactions in the uncertain regulatory system. Then, we incorporate generalized multivariate Granger causality regression methods to transform the dynamic grey association into causation and generate directional regulatory links. We evaluate our model on the DREAM4 in silico benchmark dataset and on real-world hepatocellular carcinoma (HCC) time-series data. We achieved competitive results on DREAM4 compared with other state-of-the-art algorithms and obtained a meaningful GRN structure on the HCC data.
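The causality step rests on Granger-style lag regression: a source gene is a candidate regulator of a target if its past values improve the prediction of the target beyond the target's own past. A minimal bivariate sketch of that score follows; the grey-association and multivariate parts of GreyNet are not reproduced here.

```python
import numpy as np

def granger_improvement(target, source, p=2):
    # Relative drop in residual sum of squares when p lags of `source`
    # are added to an autoregression of `target` on its own p lags.
    T = len(target)
    own = np.column_stack([target[p - k - 1:T - k - 1] for k in range(p)])
    oth = np.column_stack([source[p - k - 1:T - k - 1] for k in range(p)])
    y = target[p:]
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    return (rss(own) - rss(np.hstack([own, oth]))) / rss(own)

rng = np.random.default_rng(3)
x = rng.standard_normal(500)                 # driver series
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_improvement(y, x))  # large: past of x predicts y
print(granger_improvement(x, y))  # near zero: past of y does not predict x
```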
Affiliations
- Guangyi Chen: Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Zhi-Ping Liu (corresponding author): Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China; Center for Intelligent Medicine, Shandong University, Jinan, Shandong, China
16. Wang ZQ, Li LJ, Chao F, Lin CM, Yang L, Zhou C, Chang X, Shang C, Shen Q. A Type 2 wavelet brain emotional learning network with double recurrent loops based controller for nonlinear systems. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109274.
17. Sparse signal reconstruction via recurrent neural networks with hyperbolic tangent function. Neural Netw 2022; 153:1-12. DOI: 10.1016/j.neunet.2022.05.022.
18. Meng Y, Chen J, Li Z, Zhang Y, Liang L, Zhu J. Soft sensor with deep feature extraction for a sugarcane milling system. J Food Process Eng 2022. DOI: 10.1111/jfpe.14066.
Affiliations
- Yanmei Meng: College of Mechanical Engineering, Guangxi University, Nanning, China
- Jie Chen: College of Mechanical Engineering, Guangxi University, Nanning, China
- Zhengyuan Li: College of Mechanical Engineering, Guangxi University, Nanning, China
- Yue Zhang: College of Mechanical Engineering, Guangxi University, Nanning, China
- Jihong Zhu: College of Mechanical Engineering, Guangxi University, Nanning, China; Department of Precision Instrument, Tsinghua University, Beijing, China
19. Hu D, He X, Ju X. A modified projection neural network with fixed-time convergence. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.023.
20. Gan Y, Hu X, Zou G, Yan C, Xu G. Inferring Gene Regulatory Networks From Single-Cell Transcriptomic Data Using Bidirectional RNN. Front Oncol 2022; 12:899825. PMID: 35692809. PMCID: PMC9178250. DOI: 10.3389/fonc.2022.899825.
Abstract
Accurate inference of gene regulatory rules is critical to understanding cellular processes. Existing computational methods usually decompose the inference of gene regulatory networks (GRNs) into multiple subproblems, rather than detecting potential causal relationships simultaneously, which limits their application to data with a small number of genes. Here, we propose BiRGRN, a novel computational algorithm for inferring GRNs from time-series single-cell RNA-seq (scRNA-seq) data. BiRGRN utilizes a bidirectional recurrent neural network to infer GRNs. The recurrent neural network is a complex deep neural network that can capture complex, non-linear, and dynamic relationships among variables. It maps neurons to genes, and maps the connections between neural network layers to the regulatory relationships between genes, providing an intuitive solution to model GRNs with biological closeness and mathematical flexibility. Based on the deep network, we transform the inference of GRNs into a regression problem, using the gene expression data at previous time points to predict the gene expression data at the later time point. Furthermore, we adopt two strategies to improve the accuracy and stability of the algorithm. Specifically, we utilize a bidirectional structure to integrate the forward and reverse inference results, and we exploit an incomplete set of prior knowledge to filter out candidate inferences of low confidence. BiRGRN is applied to four simulated datasets and three real scRNA-seq datasets to verify the proposed method. We perform comprehensive comparisons of our proposed method with other state-of-the-art techniques. The experimental results indicate that BiRGRN is capable of inferring GRNs simultaneously from time-series scRNA-seq data. Our method BiRGRN is implemented in Python using the TensorFlow machine-learning library, and it is freely available at https://gitee.com/DHUDBLab/bi-rgrn.
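The regression framing described above — predict expression at a later time point from earlier ones with a bidirectional recurrent layer — can be sketched in a few lines of Keras on synthetic data. All shapes and hyperparameters below are illustrative, and the sketch omits BiRGRN's mapping from network weights to regulatory edges and its prior-knowledge filtering.

```python
import numpy as np
import tensorflow as tf

n_genes, window = 20, 5
rng = np.random.default_rng(4)
expr = rng.random((100, n_genes)).astype("float32")  # time x genes matrix

# Supervised pairs: the previous `window` time points -> the next one.
X = np.stack([expr[t - window:t] for t in range(window, len(expr))])
Y = expr[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_genes)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),  # fwd + rev pass
    tf.keras.layers.Dense(n_genes),                          # next expression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=5, verbose=0)
```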
Affiliations
- Yanglan Gan: School of Computer Science and Technology, Donghua University, Shanghai, China
- Xin Hu: School of Computer Science and Technology, Donghua University, Shanghai, China
- Guobing Zou: School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Cairong Yan: School of Computer Science and Technology, Donghua University, Shanghai, China
- Guangwei Xu: School of Computer Science and Technology, Donghua University, Shanghai, China
21. Qiu B, Guo J, Li X, Zhang Z, Zhang Y. Discrete-Time Advanced Zeroing Neurodynamic Algorithm Applied to Future Equality-Constrained Nonlinear Optimization With Various Noises. IEEE Transactions on Cybernetics 2022; 52:3539-3552. PMID: 32759087. DOI: 10.1109/tcyb.2020.3009110.
Abstract
This research first proposes the general expression of Zhang et al. discretization (ZeaD) formulas, providing an effective general framework for finding various ZeaD formulas based on the idea of simultaneous elimination of high-order derivatives. Then, to solve the problem of future equality-constrained nonlinear optimization (ECNO) with various noises, a specific ZeaD formula originating from the general ZeaD formula is further studied for the discretization of a noise-perturbed continuous-time advanced zeroing neurodynamic model. Subsequently, the resulting noise-perturbed discrete-time advanced zeroing neurodynamic (NP-DTAZN) algorithm is proposed for the real-time solution of the future ECNO problem, with various noises suppressed simultaneously. Moreover, theoretical and numerical results are presented to show the convergence and precision of the proposed NP-DTAZN algorithm under the perturbation of various noises. Finally, comparative numerical and physical experiments based on a Kinova JACO2 robot manipulator are conducted to further substantiate the efficacy, superiority, and practicability of the proposed NP-DTAZN algorithm for solving the future ECNO problem with various noises.
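For context, the zeroing-neurodynamic design underlying such algorithms has two ingredients: an error function e(t) driven to zero exponentially, and a multi-step (ZeaD-type) finite difference replacing the time derivative in the discretization. A generic sketch of both follows; the paper's specific coefficients and noise-suppression terms are deliberately omitted.

```latex
\dot{e}(t) = -\lambda\, e(t), \quad \lambda > 0,
\qquad
\dot{x}_k \;\approx\; \frac{1}{\tau}\sum_{i=0}^{m} a_i\, x_{k+1-i},
```

where τ is the sampling gap and the weights a_i are chosen to cancel low-order terms so that the truncation error is O(τ^p) for the desired order p.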
22. Liu N, Wang J, Qin S. A one-layer recurrent neural network for nonsmooth pseudoconvex optimization with quasiconvex inequality and affine equality constraints. Neural Netw 2021; 147:1-9. PMID: 34953297. DOI: 10.1016/j.neunet.2021.12.001.
Abstract
As two important types of generalized convex functions, pseudoconvex and quasiconvex functions appear in many practical optimization problems. The lack of convexity poses some difficulties in solving pseudoconvex optimization with quasiconvex constraint functions. In this paper, we propose a one-layer recurrent neural network for solving such problems. We prove that the state of the proposed neural network is convergent from the feasible region to an optimal solution of the given optimization problem. We show that the proposed neural network has several advantages over the existing neural networks for pseudoconvex optimization. Specifically, the proposed neural network is applicable to optimization problems with quasiconvex inequality constraints as well as affine equality constraints. In addition, parameter matrix inversion is avoided and some assumptions on the objective function and inequality constraints in existing results are relaxed. We demonstrate the superior performance and characteristics of the proposed neural network with simulation results in three numerical examples.
Affiliations
- Na Liu: Department of Automation, Tsinghua University, Beijing, 100084, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Hong Kong.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, 264209, China.
23. Liu N, Zhao S, Qin S. A power reformulation continuous-time algorithm for nonconvex distributed constrained optimization over multi-agent systems. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.082.
24. Wen X, Luan L, Qin S. A continuous-time neurodynamic approach and its discretization for distributed convex optimization over multi-agent systems. Neural Netw 2021; 143:52-65. PMID: 34087529. DOI: 10.1016/j.neunet.2021.05.020.
Abstract
The distributed optimization problem (DOP) over multi-agent systems, which can be described as minimizing the sum of the agents' local objective functions, has recently attracted widespread attention owing to its applications in diverse domains. In this paper, inspired by the penalty method and the subgradient descent method, a continuous-time neurodynamic approach is proposed for solving a DOP with inequality and set constraints. The state of the continuous-time neurodynamic approach exists globally and converges to an optimal solution of the considered DOP. Comparisons reveal that the proposed neurodynamic approach can not only solve more general convex DOPs, but also has a lower-dimensional solution space. Additionally, a discretization of the neurodynamic approach is introduced for convenience of implementation in practice. The iteration sequence of the discrete-time method also converges to an optimal solution of the DOP from any initial point. The effectiveness of the neurodynamic approach is verified by simulation examples and an application to an L1-norm minimization problem.
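A discretized distributed scheme of this flavor is easy to state: each agent mixes its state with its neighbors' states and then steps along a local subgradient with a diminishing step size. The sketch below (ring graph, scalar states, f_i(x) = |x - c_i|; all illustrative assumptions, and without the paper's penalty terms for constraints) converges to the median of the c_i, the minimizer of the sum.

```python
import numpy as np

# Ring network of 5 agents with doubly stochastic mixing weights.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

c = np.array([1.0, 2.0, 3.0, 4.0, 10.0])  # local data: f_i(x) = |x - c_i|
x = np.zeros(n)                           # one scalar state per agent

# Mix with neighbors, then take a local subgradient step (diminishing size).
for k in range(1, 20001):
    x = W @ x - (1.0 / k ** 0.75) * np.sign(x - c)

print(x)  # all agents approach the median of c (here 3.0)
```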
Affiliations
- Xingnan Wen: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Linhua Luan: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
25. Two Matrix-Type Projection Neural Networks for Matrix-Valued Optimization with Application to Image Restoration. Neural Process Lett 2021. DOI: 10.1007/s11063-019-10086-w.
26. Joshi A, Rienks M, Theofilatos K, Mayr M. Systems biology in cardiovascular disease: a multiomics approach. Nat Rev Cardiol 2021; 18:313-330. PMID: 33340009. DOI: 10.1038/s41569-020-00477-1.
Abstract
Omics techniques generate large, multidimensional data that are amenable to analysis by new informatics approaches alongside conventional statistical methods. Systems theories, including network analysis and machine learning, are well placed for analysing these data but must be applied with an understanding of the relevant biological and computational theories. Through applying these techniques to omics data, systems biology addresses the problems posed by the complex organization of biological processes. In this Review, we describe the techniques and sources of omics data, outline network theory, and highlight exemplars of novel approaches that combine gene regulatory and co-expression networks, proteomics, metabolomics, lipidomics and phenomics with informatics techniques to provide new insights into cardiovascular disease. The use of systems approaches will become necessary to integrate data from more than one omic technique. Although understanding the interactions between different omics data requires increasingly complex concepts and methods, we argue that hypothesis-driven investigations and independent validation must still accompany these novel systems biology approaches to realize their full potential.
Affiliations
- Abhishek Joshi: King's British Heart Foundation Centre, King's College London, London, UK; Bart's Heart Centre, St. Bartholomew's Hospital, London, UK
- Marieke Rienks: King's British Heart Foundation Centre, King's College London, London, UK
- Manuel Mayr: King's British Heart Foundation Centre, King's College London, London, UK
27. Wen X, Wang Y, Qin S. A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06026-2.
28. Tian F, Yu W, Fu J, Gu W, Gu J. Distributed Optimization of Multiagent Systems Subject to Inequality Constraints. IEEE Transactions on Cybernetics 2021; 51:2232-2241. PMID: 31329156. DOI: 10.1109/tcyb.2019.2927725.
Abstract
In this paper, we study a distributed convex optimization problem with inequality constraints. Each agent is associated with its own cost function and can only exchange information with its neighbors. It is assumed that each cost function is convex and that the optimization variable is subject to an inequality constraint. The objective is to make all agents reach consensus and meanwhile converge to the minimum point of the sum of the local cost functions. A distributed protocol is proposed to guarantee that all agents reach consensus in finite time and converge to the optimal point within the inequality constraints. Based on the idea of parameter projection, the protocol includes two descent directions: one makes the cost function decrease, and the other moves the agents toward the constraint set. It is shown that the proposed protocol solves the problem under connected undirected graphs without using a Lagrange multiplier technique. In particular, all agents reach the constraint sets in finite time and stay there afterward. The method can also be used for centralized optimization problems.
29. Zhu M, Yang Q, Dong J, Zhang G, Gou X, Rong H, Paul P, Neri F. An Adaptive Optimization Spiking Neural P System for Binary Problems. Int J Neural Syst 2020; 31:2050054. PMID: 32938261. DOI: 10.1142/s0129065720500549.
Abstract
The Optimization Spiking Neural P System (OSNPS) is the first membrane computing model to directly derive an approximate solution of combinatorial problems, with specific reference to the 0/1 knapsack problem. OSNPS is composed of a family of parallel Spiking Neural P Systems (SNPS) that generate candidate solutions of the binary combinatorial problem and a Guider algorithm that adjusts the spiking probabilities of the neurons of the P systems. Although OSNPS is a pioneering structure in membrane computing optimization, its performance is competitive with that of modern and sophisticated metaheuristics for the knapsack problem only in low-dimensional cases. To overcome the limitations of OSNPS, this paper proposes a novel Dynamic Guider algorithm that employs adaptive learning and diversity-based adaptation to control its moving operators. The resulting novel membrane computing model for optimization is named the Adaptive Optimization Spiking Neural P System (AOSNPS). Numerical results show that the proposed approach is effective for solving 0/1 knapsack problems and outperforms various algorithms proposed in the literature for the same class of problems, even for a large number of items (high dimensionality). Furthermore, case studies show that AOSNPS is effective in fault-section estimation of power systems in different types of fault cases, including a single fault, multiple faults, and multiple faults with incomplete and uncertain information, in the IEEE 39-bus and IEEE 118-bus systems.
Affiliations
- Ming Zhu: School of Control Engineering, Chengdu University of Information Technology, Chengdu 610225, P. R. China
- Qiang Yang: School of Control Engineering, Chengdu University of Information Technology, Chengdu 610225, P. R. China
- Jianping Dong: College of Information Science and Technology, Chengdu University of Technology, Chengdu 610059, P. R. China
- Gexiang Zhang: College of Information Science and Technology, Chengdu University of Technology, Chengdu 610059, P. R. China
- Xiantai Gou: School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, P. R. China
- Haina Rong: School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, P. R. China
- Prithwineel Paul: School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, P. R. China
- Ferrante Neri: COL Laboratory, School of Computer Science, University of Nottingham, Nottingham, UK
30. Ju X, Li C, He X, Feng G. An inertial projection neural network for solving inverse variational inequalities. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.04.023.
31. Li W, Bian W, Xue X. Projected Neural Network for a Class of Non-Lipschitz Optimization Problems With Linear Constraints. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:3361-3373. PMID: 31689212. DOI: 10.1109/tnnls.2019.2944388.
Abstract
In this article, we consider a class of nonsmooth, nonconvex, and non-Lipschitz optimization problems, which have wide applications in sparse optimization. We generalize the Clarke stationary point and define a kind of generalized stationary point of the problems with stronger optimality properties. Based on the smoothing method, we propose a projected neural network for solving this kind of optimization problem. Under the condition that the level set of the objective function in the feasible region is bounded, we prove that the solution of the proposed neural network exists globally and is bounded. The uniqueness of the solution of the proposed network is also analyzed. When the feasible region is bounded, any accumulation point of the proposed neural network is a generalized stationary point of the optimization model. Under some suitable conditions, any solution of the proposed neural network is asymptotically convergent to one stationary point. In particular, we give a deeper analysis of the proposed network for solving a special class of non-Lipschitz optimization problems, which indicates a lower-bound property and a unified identification of the nonzero elements of all accumulation points. Finally, some numerical results are presented to show the efficiency of the proposed neural network in solving some kinds of sparse optimization models.
32. Mohammadi M. A Projection Neural Network for the Generalized Lasso. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2217-2221. PMID: 31398133. DOI: 10.1109/tnnls.2019.2927282.
Abstract
The generalized lasso (GLasso) is an extension of lasso regression in which there is an l1 penalty (or regularization) term on a linearly transformed coefficient vector. Finding the optimal solution of GLasso is not straightforward since the penalty term is not differentiable. This brief presents a novel one-layer neural network to solve the generalized lasso for a wide range of penalty transformation matrices. The proposed neural network is proven to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the GLasso. It is also shown that the proposed neural solution can solve many optimization problems, including sparse and weighted sparse representations, (weighted) total-variation denoising, the fused lasso signal approximator, and trend filtering. Disparate experiments on the above problems illustrate and confirm the excellent performance of the proposed neural network in comparison to other competing techniques.
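For reference, the GLasso problem has the standard form below, with D the penalty transformation matrix (this is background, not text from the brief):

```latex
\min_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\,\lVert y - A x \rVert_{2}^{2}
  \;+\; \lambda\, \lVert D x \rVert_{1}.
```

Choosing D = I recovers the lasso (sparse representation); first- or higher-order difference matrices give total-variation denoising and trend filtering; and stacking the identity with a difference matrix yields the fused lasso signal approximator.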
33. Xia Y, Wang J, Guo W. Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2020-2029. PMID: 31425123. DOI: 10.1109/tnnls.2019.2927639.
Abstract
Recent reports show that projection neural networks with a low-dimensional state space can noticeably enhance computation speed. This paper proposes two projection neural networks with reduced model dimension and complexity (RDPNNs) for solving nonlinear programming (NP) problems. Compared with existing projection neural networks for solving NP, the two proposed RDPNNs have a lower-dimensional state space and lower model complexity. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semi-definite and positive definite at each Karush-Kuhn-Tucker point, the two proposed RDPNNs are proven to be globally stable in the sense of Lyapunov and to converge globally to a point satisfying the reduced optimality condition of the NP problem. Therefore, the two proposed RDPNNs are theoretically guaranteed to solve convex NP problems and a class of nonconvex NP problems. Computed results show that the two proposed RDPNNs have a faster computation speed than existing projection neural networks for solving NP problems.
34. Yu X, Wu L, Xu C, Hu Y, Ma C. A Novel Neural Network for Solving Nonsmooth Nonconvex Optimization Problems. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:1475-1488. PMID: 31265412. DOI: 10.1109/tnnls.2019.2920408.
Abstract
In this paper, a novel recurrent neural network (RNN) is presented to deal with a kind of nonsmooth nonconvex optimization problem in which the objective function may be nonsmooth and nonconvex, and the constraints include linear equations and convex inequalities. Under certain suitable assumptions, from an arbitrary initial state, each solution to the proposed RNN exists globally and is bounded, and it enters the feasible region within a finite time. Moreover, the solution to the RNN with an arbitrary initial state converges to the critical point set of the optimization problem. In particular, the RNN does not need: 1) a bounded feasible region; 2) the computation of an exact penalty parameter; or 3) an initial state chosen from a given bounded set. Numerical experiments are provided to show the effectiveness and advantages of the RNN.
35. Xu C, Chai Y, Qin S, Wang Z, Feng J. A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem. Neural Netw 2020; 124:180-192. DOI: 10.1016/j.neunet.2019.12.015.
36. Zhu Y, Yu W, Wen G, Chen G. Projected Primal-Dual Dynamics for Distributed Constrained Nonsmooth Convex Optimization. IEEE Transactions on Cybernetics 2020; 50:1776-1782. PMID: 30530351. DOI: 10.1109/tcyb.2018.2883095.
Abstract
A distributed nonsmooth convex optimization problem subject to a general type of constraint, including equality, inequality, and bound constraints, is studied in this paper for a multiagent network with a fixed and connected communication topology. To collectively solve such a complex optimization problem, primal-dual dynamics with a projection operation are investigated under the optimality conditions. For the nonsmooth convex optimization problem, a framework based on LaSalle's invariance principle from nonsmooth analysis is established, where the asymptotic stability of the primal-dual dynamics at an optimal solution is guaranteed. For the case where inequality and bound constraints are not involved and the objective function is twice differentiable and strongly convex, the global exponential convergence of the primal-dual dynamics is established. Finally, two simulations are provided to verify and visualize the theoretical results.
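For min f(x) subject to g(x) ≤ 0, Ax = b, x ∈ Ω, one common form of projected primal-dual dynamics illustrates the structure studied here. This generic form is background rather than the paper's exact system, which is distributed across agents and replaces gradients of nonsmooth terms by subgradients:

```latex
\dot{x} = P_{\Omega}\big(x - \nabla f(x) - \nabla g(x)\,\lambda - A^{\top}\nu\big) - x,
\qquad
\dot{\lambda} = \big[\lambda + g(x)\big]_{+} - \lambda,
\qquad
\dot{\nu} = A x - b,
```

where P_Ω projects onto the set constraint, [·]_+ keeps the inequality multipliers λ nonnegative, and ν is the free multiplier for the equality constraints.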
37. Liu Y, Zheng Y, Lu J, Cao J, Rutkowski L. Constrained Quaternion-Variable Convex Optimization: A Quaternion-Valued Recurrent Neural Network Approach. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:1022-1035. PMID: 31247564. DOI: 10.1109/tnnls.2019.2916597.
Abstract
This paper proposes a quaternion-valued one-layer recurrent neural network approach for solving constrained convex optimization problems with quaternion variables. Leveraging the novel generalized Hamilton-real (GHR) calculus, quaternion gradient-based optimization techniques are proposed to derive the optimization algorithms directly in the quaternion field, rather than decomposing the optimization problems into the complex domain or the real domain. Via chain rules and the Lyapunov theorem, a rigorous analysis shows that the deliberately designed quaternion-valued one-layer recurrent neural network stabilizes the system dynamics, while the states reach the feasible region in finite time and finally converge to the optimal solution of the considered constrained convex optimization problems. Numerical simulations verify the theoretical results.
38. Zhao Y, He X, Huang T, Huang J, Li P. A smoothing neural network for minimization l1-lp in sparse signal reconstruction with measurement noises. Neural Netw 2020; 122:40-53. DOI: 10.1016/j.neunet.2019.10.006.
39. Jiang X, Qin S, Xue X. A penalty-like neurodynamic approach to constrained nonsmooth distributed convex optimization. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.10.050.
40. Evolutionary Algorithms Enhanced with Quadratic Coding and Sensing Search for Global Optimization. Mathematical and Computational Applications 2020. DOI: 10.3390/mca25010007.
Abstract
Enhancing Evolutionary Algorithms (EAs) with mathematical elements contributes significantly to their development and controls the randomness they experience. Moreover, automating the primary process steps of EAs is still one of the hardest problems; specifically, EAs still have no robust automatic termination criteria. In addition, the highly random behavior of some evolutionary operations should be controlled, and the methods should invoke advanced learning processes and elements. Accordingly, this research focuses on the problem of automating and controlling the search process of EAs by using sensing and mathematical mechanisms. These mechanisms can provide the search process with the memories and conditions needed to adapt to diversification and intensification opportunities. Moreover, a new quadratic coding and a quadratic search operator are invoked to increase the possibilities for local search improvement. The suggested quadratic search operator uses both regression and Radial Basis Function (RBF) neural network models. Two evolutionary-based methods, built on genetic algorithms and evolution strategies, are proposed to evaluate the performance of the suggested enhancing elements. Results show that both the regression- and RBF-based quadratic techniques could help approximate high-dimensional functions with a few adjustable parameters for each type of function. Moreover, the automatic termination criteria could allow the search process to stop appropriately.
41. Jia W, Qin S, Xue X. A generalized neural network for distributed nonsmooth optimization with inequality constraint. Neural Netw 2019; 119:46-56. PMID: 31376637. DOI: 10.1016/j.neunet.2019.07.019.
Abstract
In this paper, a generalized neural network with a novel auxiliary function is proposed to solve a distributed non-differentiable optimization problem over a multi-agent network. The constructed auxiliary function ensures that the state solution of the proposed neural network is bounded and enters the inequality constraint set in finite time. Furthermore, the proposed neural network is demonstrated to reach consensus and ultimately converge to the optimal solution under several mild assumptions. Compared with existing methods, the neural network proposed in this paper has a simple structure with a small number of state variables and does not depend on the projection operator method for constrained distributed optimization. Finally, two numerical simulations and an application in a power system illustrate the characteristics and practicability of the presented neural network.
Collapse
Affiliation(s)
- Wenwen Jia
- Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
| | - Sitian Qin
- Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
| | - Xiaoping Xue
- Department of Mathematics, Harbin Institute of Technology, Harbin, PR China.
| |
Collapse
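The penalty-like dynamic described above can be illustrated with a small Euler simulation. The sketch below runs a generic consensus-plus-penalty subgradient flow over an undirected weighted graph; the specific auxiliary function, gains, and convergence conditions of the paper are not reproduced, and the max(0, g)·∇g penalty form is an illustrative choice.
```python
import numpy as np

def distributed_penalty_flow(subgrads, g, g_grad, A, X0, alpha=2.0,
                             sigma=5.0, h=1e-3, steps=20000):
    """Euler simulation of a generic distributed penalty-like flow:
    dx_i/dt = alpha * sum_j A[i,j] (x_j - x_i)       (consensus)
              - subgrad f_i(x_i)                     (local nonsmooth objective)
              - sigma * max(0, g(x_i)) * grad g(x_i) (inequality penalty).
    Illustrative only; not the paper's exact auxiliary-function dynamic."""
    X = np.array(X0, dtype=float)                    # X[i] = state of agent i
    n = X.shape[0]
    for _ in range(steps):
        cons = A @ X - A.sum(axis=1, keepdims=True) * X
        grad = np.array([subgrads[i](X[i]) for i in range(n)])
        pen = np.array([max(0.0, g(x)) * g_grad(x) for x in X])
        X = X + h * (alpha * cons - grad - sigma * pen)
    return X

# Example: three agents minimize sum_i |x - c_i| subject to ||x||_2 <= 1.
c = [np.array([0.5]), np.array([2.0]), np.array([-0.3])]
subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
g = lambda x: float(x @ x - 1.0)
g_grad = lambda x: 2.0 * x
A = np.ones((3, 3)) - np.eye(3)                      # complete communication graph
X = distributed_penalty_flow(subgrads, g, g_grad, A, np.zeros((3, 1)))
```
In this toy example the agents reach agreement near x = 0.5, the median of the c_i, which minimizes the sum of absolute deviations inside the unit ball.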
|
42
|
Meng Y, Yu S, Wang H, Qin J, Xie Y. Data-driven modeling based on kernel extreme learning machine for sugarcane juice clarification. Food Sci Nutr 2019; 7:1606-1614. [PMID: 31139373 PMCID: PMC6526666 DOI: 10.1002/fsn3.985] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 02/05/2019] [Accepted: 02/07/2019] [Indexed: 01/09/2023] Open
Abstract
Clarification of sugarcane juice is an important operation in the sugar production process. The gravity purity and the color value of the juice are the two most important evaluation indexes in cane sugar production using the sulphitation clarification method. In practice, however, these two indexes are usually measured by offline experimental titration, which makes timely adjustment of the system indicators impossible. A data-driven model based on a kernel extreme learning machine is therefore proposed to predict the gravity purity of the juice and the color value of the clear juice, with its parameters optimized by particle swarm optimization. Experiments are conducted to verify the effectiveness and superiority of the modeling method: compared with a BP neural network, a radial basis function neural network, and a support vector machine, the proposed model performs well, demonstrating its reliability.
Collapse
Affiliation(s)
- Yanmei Meng
- College of Mechanical Engineering, Guangxi University, Nanning, China.
| | - Shuangshuang Yu
- College of Mechanical Engineering, Guangxi University, Nanning, China.
| | - Hui Wang
- College of Mechanical Engineering, Guangxi University, Nanning, China.
| | - Johnny Qin
- Energy, Commonwealth Scientific and Industrial Research Organisation, Pullenvale, Queensland, Australia.
| | - Yanpeng Xie
- College of Mechanical Engineering, Guangxi University, Nanning, China.
| |
Collapse
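For readers unfamiliar with the model class, the following is a minimal sketch of a kernel extreme learning machine (KELM) regressor in its standard closed form; the regularization constant C and the kernel width gamma are the kind of hyperparameters the paper tunes with particle swarm optimization (the PSO loop itself is omitted, and the parameter values below are placeholders).
```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine regressor (standard closed form):
    output weights beta = (I/C + K)^{-1} y, prediction k(x, X) @ beta."""
    def __init__(self, C=100.0, gamma=0.5):  # hyperparameters to tune, e.g. by PSO
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_train = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X_train, self.gamma) @ self.beta
```
A PSO wrapper would then search over (C, gamma) to minimize cross-validated prediction error on the clarification data.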
|
44
|
Fang X, He X, Huang J. A strategy to optimize the multi-energy system in microgrid based on neurodynamic algorithm. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2018.06.053] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
45
|
Xia Y, Wang J. Robust Regression Estimation Based on Low-Dimensional Recurrent Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:5935-5946. [PMID: 29993932 DOI: 10.1109/tnnls.2018.2814824] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Huber's robust M-estimator is widely used in signal and image processing, classification, and regression. From an optimization point of view, Huber's M-estimation problem is often formulated as a large-size quadratic programming (QP) problem because of its nonsmooth cost function. This paper presents a generalized regression estimator that is obtained by minimizing a reduced-size QP problem. The generalized regression estimator may be viewed as a significant generalization of several robust regression estimators, including Huber's M-estimator. The performance of the generalized regression estimator is analyzed in terms of robustness and approximation accuracy. Furthermore, two low-dimensional recurrent neural networks (RNNs) are introduced for robust estimation. The two RNNs have low model complexity and enhanced computational efficiency. Finally, the experimental results of two examples and an application to image restoration are presented to substantiate the superior performance of the proposed method over conventional algorithms for robust regression estimation in terms of approximation accuracy and convergence rate.
Collapse
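As background for the estimation problem above, the sketch below computes the classical Huber M-estimate by iteratively reweighted least squares (IRLS), a standard baseline; the paper's contribution, a reduced-size QP solved by low-dimensional RNNs, is not reproduced here.
```python
import numpy as np

def huber_irls(A, b, delta=1.345, iters=100, tol=1e-10):
    """Huber M-estimation of x in b ~ A x via iteratively reweighted
    least squares. Residuals beyond delta are down-weighted by delta/|r|,
    which reproduces the Huber cost at convergence."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)        # ordinary LS warm start
    for _ in range(iters):
        r = b - A @ x
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        Aw = A * w[:, None]                          # rows scaled by weights
        x_new = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted normal equations
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```
The down-weighting of large residuals is what gives the estimator its robustness to outliers relative to ordinary least squares.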
|
46
|
Zhu L, Wang J, He X, Zhao Y. An inertial projection neural network for sparse signal reconstruction via ℓ1−2 minimization. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.050] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
47
|
Le X, Chen S, Yan Z, Xi J. A Neurodynamic Approach to Distributed Optimization With Globally Coupled Constraints. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:3149-3158. [PMID: 29053459 DOI: 10.1109/tcyb.2017.2760908] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, a distributed neurodynamic approach is proposed for constrained convex optimization. The objective function is a sum of local convex objective functions, while the constraints of the corresponding subproblems are coupled. Each local objective function is minimized individually with the proposed neurodynamic optimization approach. Through information exchange between connected neighbors only, all nodes reach consensus on the Lagrange multipliers of all global equality and inequality constraints, and the decision variables converge to the global optimum in a distributed manner. Simulation results on two power system cases are discussed to substantiate the effectiveness and characteristics of the proposed approach.
Collapse
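The multiplier dynamic at the heart of such approaches can be sketched in a single-node (centralized) form as Euler-discretized primal-dual saddle-point dynamics; the distributed version in the paper additionally runs consensus on the multipliers of the globally coupled constraints, which this illustrative sketch omits.
```python
import numpy as np

def primal_dual_flow(grad_f, g, g_jac, x0, m, h=1e-3, steps=50000):
    """Euler-discretized saddle-point dynamics for
    min f(x) s.t. g(x) <= 0 (m inequality constraints):
    primal gradient descent plus projected dual ascent keeping lambda >= 0.
    Centralized illustration only; the paper's consensus step is omitted."""
    x = np.array(x0, dtype=float)
    lam = np.zeros(m)                                # Lagrange multipliers
    for _ in range(steps):
        x = x - h * (grad_f(x) + g_jac(x).T @ lam)   # primal descent
        lam = np.maximum(0.0, lam + h * g(x))        # projected dual ascent
    return x, lam

# Example: min x1^2 + x2^2 s.t. 1 - x1 - x2 <= 0 (optimum at x = (0.5, 0.5)).
x, lam = primal_dual_flow(
    grad_f=lambda x: 2.0 * x,
    g=lambda x: np.array([1.0 - x[0] - x[1]]),
    g_jac=lambda x: np.array([[-1.0, -1.0]]),
    x0=np.zeros(2), m=1)
```
At convergence x ≈ (0.5, 0.5) with multiplier λ ≈ 1, matching the KKT conditions of this toy problem.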
|
48
|
Qiu B, Zhang Y, Yang Z. New Discrete-Time ZNN Models for Least-Squares Solution of Dynamic Linear Equation System With Time-Varying Rank-Deficient Coefficient. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:5767-5776. [PMID: 29993872 DOI: 10.1109/tnnls.2018.2805810] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
In this brief, a new one-step-ahead numerical differentiation rule called the six-instant g-cube finite difference (6IgCFD) formula is proposed for first-order derivative approximation with higher precision than existing finite difference formulas (i.e., Euler and Taylor types). Subsequently, by exploiting the proposed 6IgCFD formula to discretize the continuous-time Zhang neural network model, two new-type discrete-time ZNN (DTZNN) models, namely the new-type DTZNNK and DTZNNU models, are designed and generalized to compute, in real time, the least-squares solution of a dynamic linear equation system with a time-varying rank-deficient coefficient, which is quite different from existing ZNN-related studies on solving continuous-time and discrete-time (dynamic or static) linear equation systems with full-rank coefficients. Specifically, the corresponding dynamic normal equation system, whose solution exactly corresponds to the least-squares solution of the dynamic linear equation system, is introduced to solve such a rank-deficient least-squares problem efficiently and accurately. Theoretical analyses show that the maximal steady-state residual errors of the two new-type DTZNN models have an O(g^4) pattern, where g denotes the sampling gap. Comparative numerical experimental results further substantiate the superior computational performance of the new-type DTZNN models in solving the rank-deficient least-squares problem of dynamic linear equation systems.
Collapse
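As a point of reference for the discretization being improved, below is a sketch of the plain Euler-type discrete ZNN for tracking the least-squares solution of A(t)x = b(t) through its normal equations; the paper's 6IgCFD formula replaces the O(g)-accurate finite differences used here with a higher-precision six-instant rule, whose coefficients are not reproduced in this sketch.
```python
import numpy as np

def euler_znn_ls(A_of, b_of, x0, g=1e-4, lam=100.0, T=1.0):
    """Euler-type ZNN for time-varying least squares.
    Design: e(t) = M x - p with M = A^T A, p = A^T b; imposing
    de/dt = -lam * e gives dx/dt = pinv(M) (-lam*e - dM/dt x + dp/dt).
    pinv tolerates a rank-deficient coefficient; the time derivatives are
    approximated with O(g) finite differences, the part the paper's
    six-instant formula improves."""
    x = np.array(x0, dtype=float)
    for k in range(int(T / g) - 1):
        t = k * g
        A, b = A_of(t), b_of(t)
        M, p = A.T @ A, A.T @ b
        A2, b2 = A_of(t + g), b_of(t + g)
        Md = (A2.T @ A2 - M) / g                     # finite-difference dM/dt
        pd = (A2.T @ b2 - p) / g                     # finite-difference dp/dt
        e = M @ x - p
        x = x + g * (np.linalg.pinv(M) @ (-lam * e - Md @ x + pd))
    return x
```
Raising the precision of these derivative approximations is exactly what lowers the steady-state residual from the Euler-type order to the higher order reported in the paper.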
|
49
|
Biswas S, Acharyya S. A Bi-Objective RNN Model to Reconstruct Gene Regulatory Network: A Modified Multi-Objective Simulated Annealing Approach. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2018; 15:2053-2059. [PMID: 29990170 DOI: 10.1109/tcbb.2017.2771360] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
A Gene Regulatory Network (GRN) is a virtual network in the cellular context of an organism, comprising a set of genes and the internal relationships through which they regulate each other's protein production rates (gene expression levels) via the proteins they encode. Computational reconstruction of GRNs from gene expression data is a widely studied research area, and the Recurrent Neural Network (RNN) is a useful modeling scheme for it. In this research, the single-objective RNN formulation of GRN reconstruction is modified to incorporate a new, second objective function. An existing multi-objective meta-heuristic algorithm, Archived Multi-Objective Simulated Annealing (AMOSA), is modified and applied to this bi-objective RNN formulation. Executing the resulting algorithm (called AMOSA-GRN) on a gene expression dataset yields a collection (termed the archive) of non-dominated GRNs. Ensemble averaging is then applied to the archives obtained through a sequence of executions of AMOSA-GRN. The accuracy of the GRNs in the averaged archive, with respect to the gold-standard GRN, varies in the range 0.875-1.0 (87.5-100 percent).
Collapse
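For context, the sketch below gives the standard RNN model of gene regulation that such formulations build on, together with the usual data-fit objective; the paper's second objective and the modified AMOSA move set are not reproduced, so treat the function and parameter names as illustrative.
```python
import numpy as np

def simulate_rnn_grn(W, beta, tau, e0, dt, steps):
    """Standard RNN model of gene regulation:
    e_i(t+dt) = (dt/tau_i) * sigmoid((W e)_i + beta_i) + (1 - dt/tau_i) * e_i(t),
    where W[i, j] is the regulatory influence of gene j on gene i."""
    e = np.array(e0, dtype=float)
    traj = [e.copy()]
    for _ in range(steps):
        act = 1.0 / (1.0 + np.exp(-(W @ e + beta)))
        e = (dt / tau) * act + (1.0 - dt / tau) * e
        traj.append(e.copy())
    return np.array(traj)

def data_fit_objective(W, beta, tau, data, dt):
    """Usual first (data-fit) objective: mean squared error between the
    simulated trajectory and the observed expression time series (rows = time)."""
    sim = simulate_rnn_grn(W, beta, tau, data[0], dt, len(data) - 1)
    return float(np.mean((sim - data) ** 2))
```
A multi-objective annealer such as AMOSA then searches over (W, beta, tau), maintaining an archive of networks that are non-dominated across the objectives.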
|