1. Gao X, Liao LZ. Novel Continuous- and Discrete-Time Neural Networks for Solving Quadratic Minimax Problems With Linear Equality Constraints. IEEE Trans Neural Netw Learn Syst 2024; 35:9814-9828. [PMID: 37022226] [DOI: 10.1109/tnnls.2023.3236695]
Abstract
This article presents two novel continuous- and discrete-time neural networks (NNs) for solving quadratic minimax problems with linear equality constraints. The two NNs are established based on the saddle-point conditions of the underlying function. For both NNs, a suitable Lyapunov function is constructed, showing that they are stable in the sense of Lyapunov and converge to a saddle point from any starting point under mild conditions. Compared with existing NNs for solving quadratic minimax problems, the proposed NNs require weaker stability conditions. The validity and transient behavior of the proposed models are illustrated by simulation results.
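The saddle-point conditions behind such models can be illustrated with a minimal forward-Euler sketch of a primal-dual flow for an unconstrained quadratic minimax problem (hypothetical matrices Q, S, R; the paper's networks additionally handle the linear equality constraints through multiplier variables):

```python
import numpy as np

# min_x max_y  f(x, y) = 0.5 x'Qx + x'Sy - 0.5 y'Ry
# Primal-dual (saddle-point) flow:  x' = -grad_x f,  y' = +grad_y f,
# discretized by forward Euler.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite -> strictly convex in x
R = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive definite -> strictly concave in y
S = np.array([[1.0, 0.5], [0.0, 1.0]])

x, y = np.ones(2), np.ones(2)
h = 0.05                                  # Euler step size
for _ in range(20000):
    x = x - h * (Q @ x + S @ y)           # descend in x
    y = y + h * (S.T @ x - R @ y)         # ascend in y

# For this strictly convex-concave f the unique saddle point is the origin.
print(x, y)
```

With Q and R positive definite the flow is globally asymptotically stable, which mirrors the Lyapunov-based convergence argument described in the abstract.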
2. Xia Y, Wang J, Lu Z, Huang L. Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization. IEEE Trans Neural Netw Learn Syst 2023; 34:6173-6185. [PMID: 34986103] [DOI: 10.1109/tnnls.2021.3133836]
Abstract
Because of the robustness and sparsity properties of least absolute deviation (LAD, or l₁) optimization, developing effective solution methods is an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l₁-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l₁-norm optimization problems with linear equality and inequality constraints. The RNNs are theoretically proven to be globally convergent to optimal solutions without any additional condition. With reduced model complexity, the two RNNs can significantly expedite constrained l₁-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l₁-norm optimization.
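For reference, the same class of problems can be recast as a linear program; the sketch below (synthetic data, solved with SciPy's linprog rather than the paper's RNN dynamics) shows the standard reformulation with slack variables bounding each residual:

```python
import numpy as np
from scipy.optimize import linprog

# Linearly constrained least-absolute-deviation (l1) problem:
#   min ||A x - b||_1   subject to  C x = d,
# recast as an LP with slacks t_i >= |(A x - b)_i|.
rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = np.ones((1, n))                      # illustrative constraint: sum(x) = 1
d = np.array([1.0])

# Decision vector z = [x; t], objective 0'x + 1't.
c = np.concatenate([np.zeros(n), np.ones(m)])
# Encode  A x - b <= t  and  -(A x - b) <= t.
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
A_eq = np.hstack([C, np.zeros((1, m))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_opt = res.x[:n]
print(res.status, x_opt, np.sum(np.abs(A @ x_opt - b)))
```

At the optimum each slack equals the corresponding absolute residual, so res.fun is the constrained l₁ objective value.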
3. Chen HC, Yang HC, Chen CC, Harrevelt S, Chao YC, Lin JM, Yu WH, Chang HC, Chang CK, Hwang FN. Improved Image Quality for Static BLADE Magnetic Resonance Imaging Using the Total-Variation Regularized Least Absolute Deviation Solver. Tomography 2021; 7:555-572. [PMID: 34698286] [PMCID: PMC8544655] [DOI: 10.3390/tomography7040048]
Abstract
In order to improve the image quality of BLADE magnetic resonance imaging (MRI) using the index tensor solvers and to evaluate MRI image quality in a clinical setting, we implemented BLADE MRI reconstructions using two tensor solvers (the least-squares solver and the L1 total-variation regularized least absolute deviation (L1TV-LAD) solver) on a graphics processing unit (GPU). The BLADE raw data were prospectively acquired and presented in random order before being assessed by two independent radiologists. Evaluation scores were examined for consistency and then by repeated-measures analysis of variance (ANOVA) to identify the superior algorithm. In simulation, the structural similarity index (SSIM) of the various tensor solvers ranged between 0.995 and 0.999. Inter-reader reliability was high (intraclass correlation coefficient (ICC) = 0.845, 95% confidence interval: 0.817, 0.87). The image score of L1TV-LAD was significantly higher than those of the vendor-provided image and the least-squares method, while the image score of the least-squares method was significantly lower than that of the vendor-provided image. No significant difference was identified among L1TV-LAD reconstructions with regularization strengths of λ = 0.4–1.0. L1TV-LAD with a regularization strength of λ = 0.4–0.7 was consistently better than the least-squares and vendor-provided reconstructions in BLADE MRI with a SENSitivity Encoding (SENSE) factor of 2. This warrants further development of the integrated computing system with the scanner.
Affiliation(s)
- Hsin-Chia Chen
- Department of Diagnostic Medical Imaging, Madou Sin-Lau Hospital, Tainan 721, Taiwan
- Haw-Chiao Yang
- Department of Diagnostic Medical Imaging, Madou Sin-Lau Hospital, Tainan 721, Taiwan
- Chih-Ching Chen
- Department of Finance, Chung Yuan Christian University, Chung Li 320, Taiwan
- Seb Harrevelt
- Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Yu-Chieh Chao
- Department of Diagnostic Medical Imaging, Madou Sin-Lau Hospital, Tainan 721, Taiwan
- Jyh-Miin Lin (corresponding author)
- Development and Alumni Relations, University of Cambridge, Cambridge CB5 8AB, UK
- Wei-Hsuan Yu
- Department of Mathematics, National Central University, Taoyuan City 320, Taiwan
- Hing-Chiu Chang
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong
- Chin-Kuo Chang
- Global Health Program, College of Public Health, National Taiwan University, Taipei City 100, Taiwan
- Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei City 100, Taiwan
- Institute of Psychiatry, Psychology, and Neuroscience, King’s College London, London SE5 8AF, UK
- Feng-Nan Hwang
- Department of Mathematics, National Central University, Taoyuan City 320, Taiwan
4. Li Y, Gao X. Alternative continuous- and discrete-time neural networks for image restoration. Network (Bristol, England) 2019; 30:107-124. [PMID: 31662021] [DOI: 10.1080/0954898x.2019.1677955]
Abstract
This paper presents alternative continuous- and discrete-time neural networks for real-time image restoration, obtained by introducing new vectors and transforming the optimization conditions into a system of double projection equations. The proposed neural networks are shown to be stable in the sense of Lyapunov and convergent for any starting point. Compared with existing neural networks for image restoration, the proposed models have the fewest neurons, a one-layer structure, and faster convergence, and are suitable for parallel implementation. The validity and transient behaviour of the proposed neural networks are demonstrated by numerical examples.
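The projection-equation idea such models build on can be sketched with the classical projection dynamics for a box-constrained quadratic program (hypothetical Q, c, and bounds; not the paper's image-restoration formulation):

```python
import numpy as np

# Box-constrained quadratic program, the prototype behind
# projection-equation neural-network models:
#   min 0.5 x'Qx + c'x   s.t.  lo <= x <= hi,
# with flow  x' = P_box(x - alpha*(Qx + c)) - x, discretized by forward Euler.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
c = np.array([-8.0, -6.0])
lo, hi = np.zeros(2), np.array([1.2, 2.0])

def proj(z):
    return np.clip(z, lo, hi)            # projection onto the box

x = np.zeros(2)
alpha, h = 0.2, 0.1
for _ in range(500):
    x = x + h * (proj(x - alpha * (Q @ x + c)) - x)

# At equilibrium x = P_box(x - alpha*(Qx + c)), a KKT point of the QP.
print(x)                                  # close to [1.2, 1.6]
```

Here the first coordinate hits its upper bound while the second settles in the interior, so the equilibrium is exactly the fixed point of the projection equation.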
Affiliation(s)
- Yawei Li
- School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi, P. R. China
- Xingbao Gao
- School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi, P. R. China
5. Li C, Gao X. One-layer neural network for solving least absolute deviation problem with box and equality constraints. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.11.037]
6. Simplified neural network for generalized least absolute deviation. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3060-2]
7. Gao X, Li C. A new neural network for convex quadratic minimax problems with box and equality constraints. Comput Chem Eng 2017. [DOI: 10.1016/j.compchemeng.2017.03.022]
8. Di Marco M, Forti M, Nistri P, Pancioni L. Discontinuous Neural Networks for Finite-Time Solution of Time-Dependent Linear Equations. IEEE Trans Cybern 2016; 46:2509-2520. [PMID: 26441464] [DOI: 10.1109/tcyb.2015.2479118]
Abstract
This paper considers a class of nonsmooth neural networks with discontinuous hard-limiter (signum) neuron activations for solving time-dependent (TD) systems of algebraic linear equations (ALEs). The networks are defined by the subdifferential with respect to the state variables of an energy function given by the L₁ norm of the error between the state and the TD-ALE solution. It is shown that when the penalty parameter exceeds a quantitatively estimated threshold the networks are able to reach in finite time, and exactly track thereafter, the target solution of the TD-ALE. Furthermore, this paper discusses the tightness of the estimated threshold and also points out key differences in the role played by this threshold with respect to networks for solving time-invariant ALEs. It is also shown that these convergence results are robust with respect to small perturbations of the neuron interconnection matrices. The dynamics of the proposed networks are rigorously studied by using tools from nonsmooth analysis, the concept of subdifferential of convex functions, and that of solutions in the sense of Filippov of dynamical systems with discontinuous nonlinearities.
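A minimal discretized sketch of such a signum-activation flow (hypothetical constant A, sinusoidal b(t), and gain gamma; the paper treats general time-dependent coefficients and gives a quantitative threshold estimate):

```python
import numpy as np

# Track the time-dependent linear system A x = b(t) with the
# discontinuous flow  x' = -gamma * A^T sign(A x - b(t)),
# i.e. a subgradient flow of the L1 error ||A x - b(t)||_1.
A = np.array([[2.0, 1.0], [0.0, 1.5]])

def b(t):
    return np.array([np.sin(t), np.cos(t)])

gamma, h = 5.0, 1e-4                     # gain chosen above the reaching threshold
x = np.zeros(2)
t = 0.0
for _ in range(200000):                  # forward Euler up to t = 20
    x = x - h * gamma * (A.T @ np.sign(A @ x - b(t)))
    t += h

# After a finite reaching time the error slides near zero, up to
# chattering of order h*gamma introduced by the discretization.
print(np.abs(A @ x - b(t)))
```

With gamma large enough that gamma times the smallest eigenvalue of AAᵀ exceeds the bound on the time derivative of b, the l₁ error decreases at a uniform rate, which is the mechanism behind the finite-time tracking result described above.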
10. Vazler I, Sabo K, Scitovski R. Weighted Median of the Data in Solving Least Absolute Deviations Problems. Commun Stat Theory Methods 2012. [DOI: 10.1080/03610926.2010.539750]
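The title refers to the classical fact that the one-dimensional weighted LAD problem min_x Σᵢ wᵢ|x − aᵢ| is solved by a weighted median of the data. A minimal sketch (returning the lower endpoint when the minimizer is an interval):

```python
import numpy as np

def weighted_median(a, w):
    """Minimizer of sum_i w_i * |x - a_i|: sort the data and return the
    first data point at which the cumulative weight reaches half the total."""
    a, w = np.asarray(a, float), np.asarray(w, float)
    order = np.argsort(a)
    a, w = a[order], w[order]
    cum = np.cumsum(w)
    return a[np.searchsorted(cum, 0.5 * cum[-1])]

# Cumulative weights 1, 2, 6, 7; half the total is 3.5, first reached at a = 5.
print(weighted_median([1.0, 3.0, 5.0, 9.0], [1.0, 1.0, 4.0, 1.0]))   # 5.0
```

Optimality follows from the subgradient of the piecewise-linear objective: at the weighted median the total weight on either side is at most half, so zero lies in the subdifferential.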
11. Hametner C, Jakubek S. Nonlinear identification with local model networks using GTLS techniques and equality constraints. IEEE Trans Neural Netw 2011; 22:1406-18. [PMID: 21788188] [DOI: 10.1109/tnn.2011.2159309]
Abstract
Local model networks approximate a nonlinear system through multiple local models fitted within a partition space. The main advantage of this approach is that the identification of complex nonlinear processes is alleviated by the integration of structured knowledge about the process. This paper extends these concepts by the integration of quantitative process knowledge into the identification procedure. Quantitative knowledge describes explicit dependences between inputs and outputs and is integrated in the parameter estimation process by means of equality constraints. For this purpose, a constrained generalized total least squares algorithm for local parameter estimation is presented. Furthermore, the problem of proper integration of constraints in the partitioning process is treated where an expectation-maximization procedure is combined with constrained parameter estimation. The benefits and the applicability of the proposed concepts are demonstrated by means of two illustrative examples and a practical application using real measurement data.
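As a simplified illustration of how hard equality constraints enter parameter estimation, the sketch below solves an equality-constrained ordinary least-squares problem through its KKT system (synthetic data; the paper's algorithm is a constrained generalized total least-squares variant, which this sketch does not reproduce):

```python
import numpy as np

# Equality-constrained parameter estimation in its simplest form:
#   min ||A x - b||_2^2   subject to  C x = d,
# solved via the KKT system  [2A'A  C'; C  0] [x; lam] = [2A'b; d].
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
C = np.array([[1.0, 1.0, 1.0]])          # known input-output relation: sum(x) = -0.5
d = np.array([-0.5])

p = A.shape[1]
k = C.shape[0]
KKT = np.block([[2 * A.T @ A, C.T], [C, np.zeros((k, k))]])
rhs = np.concatenate([2 * A.T @ b, d])
x_hat = np.linalg.solve(KKT, rhs)[:p]

print(x_hat, C @ x_hat)                  # the constraint holds exactly
```

Unlike a penalty formulation, the multiplier block enforces C x = d exactly, which is the sense in which quantitative process knowledge is "integrated" rather than merely encouraged.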
Affiliation(s)
- Christoph Hametner
- Institute of Mechanics and Mechatronics, Division of Control and Process Automation, Vienna University of Technology, Vienna, Austria.
12. Finite-Time Convergent Recurrent Neural Network With a Hard-Limiting Activation Function for Constrained Optimization With Piecewise-Linear Objective Functions. IEEE Trans Neural Netw 2011; 22:601-13. [DOI: 10.1109/tnn.2011.2104979]
13. Hu X, Sun C, Zhang B. Design of Recurrent Neural Networks for Solving Constrained Least Absolute Deviation Problems. IEEE Trans Neural Netw 2010; 21:1073-86. [DOI: 10.1109/tnn.2010.2048123]
Affiliation(s)
- Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, TNList, and the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.