1. Sun M, Li X, Zhong G. Semi-global fixed/predefined-time RNN models with comprehensive comparisons for time-variant neural computing. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07820-2
2. Jeswal SK, Chakraverty S. ANN Based Solution of Uncertain Linear Systems of Equations. Neural Process Lett 2020. DOI: 10.1007/s11063-019-10183-w
3. Xiao X, Zhou Y, Wang H, Yang X. A Novel CNN-Based Poisson Solver for Fluid Simulation. IEEE Trans Vis Comput Graph 2020; 26:1454-1465. PMID: 30281463. DOI: 10.1109/tvcg.2018.2873375
Abstract
Solving a large-scale Poisson system is computationally expensive for most of the Eulerian fluid simulation applications. We propose a novel machine learning-based approach to accelerate this process. At the heart of our approach is a deep convolutional neural network (CNN), with the capability of predicting the solution (pressure) of a Poisson system given the discretization structure and the intermediate velocities as input. Our system consists of four main components, namely, a deep neural network to solve the large linear equations, a geometric structure to describe the spatial hierarchies of the input vector, a Principal Component Analysis (PCA) process to reduce the dimension of input in training, and a novel loss function to control the incompressibility constraint. We have demonstrated the efficacy of our approach by simulating a variety of high-resolution smoke and liquid phenomena. In particular, we have shown that our approach accelerates the projection step in a conventional Eulerian fluid simulator by two orders of magnitude. In addition, we have also demonstrated the generality of our approach by producing a diversity of animations deviating from the original datasets.
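For context on the baseline being accelerated: the projection step in a conventional Eulerian solver is an iterative linear solve of a Poisson system. A minimal sketch of such a solver (plain Jacobi relaxation on a 2D Poisson problem with unit grid spacing and zero Dirichlet boundaries; illustrative only, not the paper's CNN or its production baseline):

```python
import numpy as np

def jacobi_poisson(rhs, iters=2000):
    """Solve the discrete Poisson equation lap(p) = rhs on a grid with
    p = 0 on the boundary, by Jacobi relaxation (grid spacing h = 1)."""
    p = np.zeros_like(rhs)
    for _ in range(iters):
        # Each interior cell becomes the average of its four neighbors
        # minus the source term; numpy evaluates the RHS before assigning,
        # so this is a true Jacobi (not Gauss-Seidel) sweep.
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                + p[1:-1, :-2] + p[1:-1, 2:]
                                - rhs[1:-1, 1:-1])
    return p

rhs = np.zeros((32, 32))
rhs[16, 16] = 1.0  # single point source
p = jacobi_poisson(rhs)
# Discrete residual lap(p) - rhs on the interior; should be near zero.
residual = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1] - rhs[1:-1, 1:-1])
```

Jacobi is the simplest such solver; production fluid codes typically use preconditioned conjugate gradient or multigrid, which is the kind of cost the paper's learned solver is measured against.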
4. Hu C, Zhang Y, Kang X. General and Improved Five-Step Discrete-Time Zeroing Neural Dynamics Solving Linear Time-Varying Matrix Equation with Unknown Transpose. Neural Process Lett 2020. DOI: 10.1007/s11063-019-10181-y
5. Lewandowski M, Płaczek B. An Event-Aware Cluster-Head Rotation Algorithm for Extending Lifetime of Wireless Sensor Network with Smart Nodes. Sensors 2019; 19:4060. PMID: 31547047. PMCID: PMC6806111. DOI: 10.3390/s19194060
Abstract
Smart sensor nodes can process data collected from sensors, make decisions, and recognize relevant events based on the sensed information before sharing it with other nodes. In wireless sensor networks, the smart sensor nodes are usually grouped in clusters for effective cooperation. One sensor node in each cluster must act as a cluster head. The cluster head depletes its energy resources faster than the other nodes. Thus, the cluster-head role must be periodically reassigned (rotated) to different sensor nodes to achieve a long lifetime of wireless sensor network. This paper introduces a method for extending the lifetime of the wireless sensor networks with smart nodes. The proposed method combines a new algorithm for rotating the cluster-head role among sensor nodes with suppression of unnecessary data transmissions. It enables effective control of the cluster-head rotation based on expected energy consumption of sensor nodes. The energy consumption is estimated using a lightweight model, which takes into account transmission probabilities. This method was implemented in a prototype of wireless sensor network. During experimental evaluation of the new method, detailed measurements of lifetime and energy consumption were conducted for a real wireless sensor network. Results of these realistic experiments have revealed that the lifetime of the sensor network is extended when using the proposed method in comparison with state-of-the-art cluster-head rotation algorithms.
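The rotation criterion can be illustrated as follows (the field names and the per-round energy model below are hypothetical, not the authors' lightweight model): each round, elect as cluster head the node whose residual energy best absorbs the expected cost of serving as head, with member costs weighted by each node's transmission probability.

```python
def expected_consumption(node, is_head):
    """Illustrative per-round energy model: a head pays a fixed aggregation
    and relay cost; a member pays a transmit cost scaled by how often it
    actually has an event to report (its transmission probability)."""
    if is_head:
        return node["e_head"]
    return node["p_tx"] * node["e_tx"]

def pick_cluster_head(nodes):
    """Elect the node with the most residual energy left after a round as head."""
    return max(nodes, key=lambda n: n["energy"] - expected_consumption(n, is_head=True))

nodes = [
    {"id": 0, "energy": 0.8, "e_head": 0.10, "p_tx": 0.3, "e_tx": 0.02},
    {"id": 1, "energy": 0.5, "e_head": 0.10, "p_tx": 0.9, "e_tx": 0.02},
    {"id": 2, "energy": 0.9, "e_head": 0.12, "p_tx": 0.1, "e_tx": 0.02},
]
head = pick_cluster_head(nodes)
# Drain one round of energy: the head pays e_head, members pay p_tx * e_tx.
for n in nodes:
    n["energy"] -= expected_consumption(n, is_head=(n is head))
```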
Affiliations: Marcin Lewandowski and Bartłomiej Płaczek, Institute of Computer Science, University of Silesia, 41-200 Sosnowiec, Poland.
6. Sun Z, Pedretti G, Ambrosi E, Bricalli A, Wang W, Ielmini D. Solving matrix equations in one step with cross-point resistive arrays. Proc Natl Acad Sci U S A 2019; 116:4123-4128. PMID: 30782810. PMCID: PMC6410822. DOI: 10.1073/pnas.1815682116
Abstract
Conventional digital computers can execute advanced operations by a sequence of elementary Boolean functions of 2 or more bits. As a result, complicated tasks such as solving a linear system or solving a differential equation require a large number of computing steps and an extensive use of memory units to store individual bits. To accelerate the execution of such advanced tasks, in-memory computing with resistive memories provides a promising avenue, thanks to analog data storage and physical computation in the memory. Here, we show that a cross-point array of resistive memory devices can directly solve a system of linear equations, or find the matrix eigenvectors. These operations are completed in just one single step, thanks to the physical computing with Ohm's and Kirchhoff's laws, and thanks to the negative feedback connection in the cross-point circuit. Algebraic problems are demonstrated in hardware and applied to classical computing tasks, such as ranking webpages and solving the Schrödinger equation in one step.
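Numerically, the circuit's negative feedback can be caricatured as a continuous-time flow that drives the residual Ax − b to zero; below is a forward-Euler sketch of dx/dt = −k·Aᵀ(Ax − b) (an idealization for intuition only — the point of the paper is that the physical array settles to this equilibrium in a single step, with no such iteration):

```python
import numpy as np

def solve_by_feedback(A, b, gain=1.0, dt=0.01, steps=20000):
    """Integrate dx/dt = -gain * A.T @ (A @ x - b) by forward Euler.
    Negative feedback on the residual A x - b drives it to zero, so the
    state converges to the solution of A x = b (A full rank assumed)."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x -= dt * gain * (A.T @ (A @ x - b))
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_by_feedback(A, b)  # converges to the solution of A x = b
```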
Affiliations: Zhong Sun, Giacomo Pedretti, Elia Ambrosi, Alessandro Bricalli, Wei Wang, and Daniele Ielmini, Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy.
7. Liu T, Huang J. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems. IEEE Trans Neural Netw Learn Syst 2018; 29:2271-2277. PMID: 28436900. DOI: 10.1109/tnnls.2017.2690663
Abstract
This paper presents a discrete-time recurrent neural network approach to solving systems of linear equations with two features. First, the system of linear equations may not have a unique solution. Second, the system matrix is not known precisely, but a sequence of matrices that converges to the unknown system matrix exponentially is known. The problem is motivated from solving the output regulation problem for linear systems. Thus, an application of our main result leads to an online solution to the output regulation problem for linear systems.
8. Approximate Solutions of Initial Value Problems for Ordinary Differential Equations Using Radial Basis Function Networks. Neural Process Lett 2017. DOI: 10.1007/s11063-017-9761-9
9. Clady X, Maro JM, Barré S, Benosman RB. A Motion-Based Feature for Event-Based Pattern Recognition. Front Neurosci 2017; 10:594. PMID: 28101001. PMCID: PMC5209354. DOI: 10.3389/fnins.2016.00594
Abstract
This paper introduces an event-based luminance-free feature from the output of asynchronous event-based neuromorphic retinas. The feature consists in mapping the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in pixels' illumination at high temporal resolutions. The optical flow is computed at each event, and is integrated locally or globally in a speed and direction coordinate frame based grid, using speed-tuned temporal kernels. The latter ensures that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition.
Affiliations: Xavier Clady, Jean-Matthieu Maro, Sébastien Barré, and Ryad B Benosman, Centre National de la Recherche Scientifique, Institut National de la Santé et de la Recherche Médicale, Institut de la Vision, Sorbonne Universités, UPMC University Paris 06, Paris, France.
10. Di Marco M, Forti M, Nistri P, Pancioni L. Discontinuous Neural Networks for Finite-Time Solution of Time-Dependent Linear Equations. IEEE Trans Cybern 2016; 46:2509-2520. PMID: 26441464. DOI: 10.1109/tcyb.2015.2479118
Abstract
This paper considers a class of nonsmooth neural networks with discontinuous hard-limiter (signum) neuron activations for solving time-dependent (TD) systems of algebraic linear equations (ALEs). The networks are defined by the subdifferential, with respect to the state variables, of an energy function given by the L1 norm of the error between the state and the TD-ALE solution. It is shown that when the penalty parameter exceeds a quantitatively estimated threshold, the networks are able to reach in finite time, and exactly track thereafter, the target solution of the TD-ALE. Furthermore, this paper discusses the tightness of the estimated threshold and also points out key differences in the role played by this threshold with respect to networks for solving time-invariant ALEs. It is also shown that these convergence results are robust with respect to small perturbations of the neuron interconnection matrices. The dynamics of the proposed networks are rigorously studied by using tools from nonsmooth analysis, the concept of subdifferential of convex functions, and that of solutions in the sense of Filippov of dynamical systems with discontinuous nonlinearities.
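The flavor of these dynamics can be sketched with a crude Euler discretization of the signum subgradient flow ẋ = −k·Aᵀ·sign(Ax − b), which descends the L1 energy ‖Ax − b‖1 (a caricature for intuition only; the paper's analysis is in the Filippov sense, for time-dependent systems, and its finite-time result applies to the continuous-time network):

```python
import numpy as np

def l1_flow(A, b, k=5.0, dt=1e-4, steps=40000):
    """Forward-Euler discretization of x' = -k * A.T @ sign(A x - b),
    a subgradient descent on ||A x - b||_1 with signum activations.
    Each component moves at constant speed until its residual crosses
    zero, then chatters within O(k*dt) of the solution."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x -= dt * k * (A.T @ np.sign(A @ x - b))
    return x

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = l1_flow(A, b)  # approaches the solution of A x = b, then chatters
```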
11. Neural network-based discrete-time Z-type model of high accuracy in noisy environments for solving dynamic system of linear equations. Neural Comput Appl 2016. DOI: 10.1007/s00521-016-2640-x
12. Stanimirović PS, Zivković IS, Wei Y. Recurrent Neural Network for Computing the Drazin Inverse. IEEE Trans Neural Netw Learn Syst 2015; 26:2830-2843. PMID: 25706892. DOI: 10.1109/tnnls.2015.2397551
Abstract
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.
13. Xia Y, Chen T, Shan J. A Novel Iterative Method for Computing Generalized Inverse. Neural Comput 2014; 26:449-465. PMID: 24206382. DOI: 10.1162/neco_a_00549
Abstract
In this letter, we propose a novel iterative method for computing generalized inverse, based on a novel KKT formulation. The proposed iterative algorithm requires making four matrix and vector multiplications at each iteration and thus has low computational complexity. The proposed method is proved to be globally convergent without any condition. Furthermore, for fast computing generalized inverse, we present an acceleration scheme based on the proposed iterative method. The global convergence of the proposed acceleration algorithm is also proved. Finally, the effectiveness of the proposed iterative algorithm is evaluated numerically.
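For a sense of what "a few matrix multiplications per iteration" buys, a classical point of comparison with similar per-step structure is the Newton-Schulz iteration for the Moore-Penrose inverse (this is not the authors' KKT-based method, only a well-known reference scheme):

```python
import numpy as np

def newton_schulz_pinv(A, iters=50):
    """Newton-Schulz iteration X <- X (2I - A X), which converges
    quadratically to the Moore-Penrose inverse of A from the safe
    starting point X0 = A.T / (||A||_1 * ||A||_inf)."""
    # ||A||_1 * ||A||_inf bounds the largest singular value squared,
    # which guarantees convergence of the iteration from this X0.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # tall, full column rank
X = newton_schulz_pinv(A)   # 2x3 pseudoinverse; X @ A is the 2x2 identity
```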
Affiliations: Youshen Xia, College of Mathematics and Computer Science, Fuzhou University, Fuzhou, Fujian 350002, China; Tianping Chen, Department of Mathematics, Fudan University, Shanghai 200433, China; Jinjun Shan, Department of Earth and Space Science and Engineering, York University, Toronto, Ontario M3J 1P3, Canada.
14. Performance analysis of gradient neural network exploited for online time-varying quadratic minimization and equality-constrained quadratic programming. Neurocomputing 2011. DOI: 10.1016/j.neucom.2011.02.007
15. da Fonseca Neto J, Abreu I, da Silva F. Neural-Genetic Synthesis for State-Space Controllers Based on Linear Quadratic Regulator Design for Eigenstructure Assignment. IEEE Trans Syst Man Cybern B Cybern 2010; 40:266-285. DOI: 10.1109/tsmcb.2009.2013722
16. Fourkal E, Fan J, Veltchev I. Absolute dose reconstruction in proton therapy using PET imaging modality: feasibility study. Phys Med Biol 2009; 54:N217-N228. PMID: 19436106. DOI: 10.1088/0031-9155/54/11/n02
17.
Abstract
It is well known that the least absolute deviation (LAD) criterion, or L1-norm, used for estimation of parameters is characterized by robustness, i.e., the estimated parameters are totally resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we have proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. The theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov-stable and that it can globally converge to the exact solution of a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD application problem may contain some linear constraints, such as a set of equalities and/or inequalities, which is called the constrained LAD problem, whereas the unconstrained LAD can be considered a special form of the constrained LAD. In this paper, we present a new neural network called the constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution of a given constrained LAD problem, independent of initial values. The numerical simulations also illustrate that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing.
Affiliations: Z Wang, Division of Child Psychiatry, Columbia College of Physicians and Surgeons, New York, NY 10032, USA.
18. He B, Yang H. A neural network model for monotone linear asymmetric variational inequalities. IEEE Trans Neural Netw 2000; 11:3-16. PMID: 18249734. DOI: 10.1109/72.822505
Abstract
Linear variational inequality is a uniform approach for some important problems in optimization and equilibrium problems. In this paper, we give a neural-network model for solving asymmetric linear variational inequalities. The model is based on a simple projection and contraction method. Computer simulation is performed for linear programming (LP) and linear complementarity problems (LCP). The test results for LP problem demonstrate that our model converges significantly faster than the three existing neural-network models examined in a recent comparative study paper.
Affiliations: B He, Department of Mathematics, Nanjing University, Nanjing 210093, China.
19. Chuah TC, Sharif BS, Hinton OR. Robust CDMA multiuser detection using a neural-network approach. IEEE Trans Neural Netw 2002; 13:1532-1539. PMID: 18244548. DOI: 10.1109/tnn.2002.804310
Abstract
Recently, a robust version of the linear decorrelating detector (LDD) based on Huber's M-estimation technique has been proposed. In this paper, we first demonstrate the use of a three-layer recurrent neural network (RNN) to implement the LDD without requiring matrix inversion. The key idea is based on minimizing an appropriate computational energy function iteratively. Second, it is shown that the M-decorrelating detector (MDD) can be implemented by simply incorporating sigmoidal neurons in the first layer of the RNN. A proof of the redundancy of the matrix-inversion process is provided and the computational saving in a realistic network is highlighted. Third, we illustrate how further performance gain can be achieved for the subspace-based blind MDD by using robust estimates of the signal-subspace components in the initial stage. The impulsive noise is modeled using non-Gaussian alpha-stable distributions, which do not include a Gaussian component but facilitate the use of the recently proposed geometric signal-to-noise ratio (G-SNR). The characteristics and performance of the proposed neural-network detectors are investigated by computer simulation.
Affiliations: Teong Chee Chuah, Department of Electrical and Electronic Engineering, University of Newcastle upon Tyne, UK.
20. Zhang Y. A set of nonlinear equations and inequalities arising in robotics and its online solution via a primal neural network. Neurocomputing 2006. DOI: 10.1016/j.neucom.2005.11.006
21. Zhang Y, Ge SS. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans Neural Netw 2005; 16:1477-1490. PMID: 16342489. DOI: 10.1109/tnn.2005.857946
Abstract
Following the idea of using first-order time derivatives, this paper presents a general recurrent neural network (RNN) model for online inversion of time-varying matrices. Different kinds of activation functions are investigated to guarantee the global exponential convergence of the neural model to the exact inverse of a given time-varying matrix. The robustness of the proposed neural model is also studied with respect to different activation functions and various implementation errors. Simulation results, including the application to kinematic control of redundant manipulators, substantiate the theoretical analysis and demonstrate the efficacy of the neural model on time-varying matrix inversion, especially when using a power-sigmoid activation function.
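A stripped-down relative of such models is the gradient neural network for a constant matrix: the flow dX/dt = −γAᵀ(AX − I), whose equilibrium is A⁻¹. A forward-Euler sketch follows (an illustrative special case only — the paper's model adds a time-derivative term and nonlinear activations, such as the power-sigmoid function, to track a time-varying A(t) with global exponential convergence):

```python
import numpy as np

def gnn_inverse(A, gamma=1.0, dt=0.01, steps=5000):
    """Forward-Euler simulation of dX/dt = -gamma * A.T @ (A @ X - I),
    gradient descent on ||A X - I||_F^2; for a constant invertible A the
    state matrix X converges to A^{-1}."""
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(steps):
        X -= dt * gamma * (A.T @ (A @ X - I))
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = gnn_inverse(A)  # converges to the inverse of A
```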
Affiliations: Yunong Zhang, Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576, Singapore.
22. Mestari M. An analog neural network implementation in fixed time of adjustable-order statistic filters and applications. IEEE Trans Neural Netw 2004; 15:766-785. PMID: 15384563. DOI: 10.1109/tnn.2003.820656
Abstract
In this paper, we show a neural network implementation in fixed time of adjustable-order statistic filters, including sorting, and adaptive-order statistic filters. All these networks accept an array of N numbers Xi = S(Xi)M(Xi)2^E(Xi) as input (where S(Xi) is the sign of Xi, M(Xi) is the mantissa normalized to m digits, and E(Xi) is the exponent) and employ two kinds of neurons, the linear and the threshold-logic neurons, with only integer weights (most of the weights being just +1 or -1) and integer thresholds. Therefore, this will greatly facilitate the actual hardware implementation of the proposed neural networks using currently available very large scale integration technology. An application of the minimum filter to implementing a special neural-network model, the neural network classifier (NNC), is given. For a classification problem with l classes C1, C2, ..., Cl, the NNC classifies an unknown vector to one class in fixed time using a minimum-distance classification technique.
23. Zhang Y, Jiang D, Wang J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans Neural Netw 2002; 13:1053-1063. DOI: 10.1109/tnn.2002.1031938
24. Zhang Y, Wang J. Global exponential stability of recurrent neural networks for synthesizing linear feedback control systems via pole assignment. IEEE Trans Neural Netw 2002; 13:633-644. DOI: 10.1109/tnn.2002.1000129
25. Neville RS, Eldridge S. Transformations of sigma-pi nets: obtaining reflected functions by reflecting weight matrices. Neural Netw 2002; 15:375-393. PMID: 12125892. DOI: 10.1016/s0893-6080(02)00023-0
Abstract
This paper presents a methodology that reflects functions by reflecting the weight matrices of an artificial neural network. One of the major problems with the connectionist approach is that trained neural networks can only associate fixed sets of input-output mappings. We provide a methodology which allows the post-trained net to associate different input-output mappings. The different mappings are reflections of the initial mapping in a horizontal axis, reflections in a vertical axis, and scalings of the initial mapping. The methodology does not train the net on the different mappings but transforms the weight matrix of the neural network. This paper describes a novel way of utilising sigma-pi neural networks. Our new methodology manipulates a sigma-pi unit's weight matrices, which transform the unit's output. The weights are cast in a matrix formulation, and then transformations can be performed on the weight matrix of the sigma-pi net. To test the new methodology, the following three steps were carried out on a neural network: (1) the network was trained to perform a mapping function, f; (2) the weights of the network were transformed; and (3) the network was tested to evaluate whether it performs the reflection in the vertical axis, f(ref-vert)(x) = a - f(x). This reflects the function in one dimension. A reflection transformation was used to manipulate the network's weight matrices to obtain a reflection in the vertical axis. Note that the network was not trained to perform the reflection in the vertical axis; the transformation of the weight matrix transformed the function the output performs. This article explains the theory which enables us to perform transformations of sigma-pi networks and obtain reflections of the output by reflecting the weight matrices. These transforms empower the network to perform related mapping tasks once one mapping task has been learnt. This article also explains how each transformation is performed and considers whether a set of 'standard' transformations can be derived.
Affiliations: R S Neville, Department of Computation, UMIST, Manchester, UK.
26. Wang J, Hu Q, Jiang D. A Lagrangian network for kinematic control of redundant robot manipulators. IEEE Trans Neural Netw 1999; 10:1123-1132. DOI: 10.1109/72.788651
27. Wang J, Wu G. A multilayer recurrent neural network for solving continuous-time algebraic Riccati equations. Neural Netw 1998; 11:939-950. PMID: 12662795. DOI: 10.1016/s0893-6080(98)00034-3
Abstract
A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent neural network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on stability of the recurrent neural network and solvability of algebraic Riccati equations by use of the recurrent neural network are discussed. The operating characteristics of the recurrent neural network are also demonstrated through three illustrative examples.
Affiliations: Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong.
28. Constantinides AG, Baykal B. A Neural Approach to the Underdetermined-Order Recursive Least-Squares Adaptive Filtering. Neural Netw 1997; 10:1523-1531. PMID: 12662491. DOI: 10.1016/s0893-6080(97)00045-2
Abstract
The incorporation of the neural architectures in adaptive filtering applications has been addressed in detail. In particular, the Underdetermined-Order Recursive Least-Squares (URLS) algorithm, which lies between the well-known Normalized Least Mean Square and Recursive Least Squares algorithms, is reformulated via a neural architecture. The response of the neural network is seen to be identical to that of the algorithmic approach. Together with the advantage of simple circuit realization, this neural network avoids the drawbacks of digital computation such as error propagation and matrix inversion, which is ill-conditioned in most cases. It is numerically attractive because the quadratic optimization problem performs an implicit matrix inversion. Also, the neural network offers the flexibility of easy alteration of the prediction order of the URLS algorithm which may be crucial in some applications. It is rather difficult to achieve in the digital implementation, as one would have to use Levinson recursions. The neural network can easily be integrated into a digital system through appropriate digital-to-analog and analog-to-digital converters.
29. Myung H, Kim JH. Time-varying two-phase optimization and its application to neural-network learning. IEEE Trans Neural Netw 1997; 8:1293-1300. PMID: 18255731. DOI: 10.1109/72.641452
Abstract
In this paper, a time-varying two-phase (TVTP) optimization neural network is proposed based on the two-phase neural network and the time-varying programming neural network. The proposed TVTP algorithm gives exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. It can be applied to system identification and control where it has some constraints on weights in the learning of the neural network. To demonstrate its effectiveness and applicability, the proposed algorithm is applied to the learning of a neo-fuzzy neuron model.
Affiliation(s)
- H Myung
- Dept. of Electr. Eng., Korea Adv. Inst. of Sci. and Technol., Seoul
|
30
|
Abstract
A hybrid of evolutionary programming (EP) and a deterministic optimization procedure is applied to a series of non-linear and quadratic optimization problems. The hybrid scheme is compared with existing schemes such as EP alone, two-phase (TP) optimization, and EP with a non-stationary penalty function (NS-EP). The results indicate that the hybrid method can outperform the other methods in computational efficiency and solution accuracy when addressing heavily constrained optimization problems.
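For readers unfamiliar with EP, a minimal mutation-plus-selection loop on a penalized constrained problem looks like the sketch below. This is a generic toy stand-in (my own problem, penalty weight, and parameters), not the paper's hybrid scheme.

```python
import numpy as np

# Minimal evolutionary-programming sketch: Gaussian mutation plus
# truncation selection on a quadratically penalized constrained problem.
rng = np.random.default_rng(1)

def fitness(x):
    # minimize (x1-2)^2 + x2^2 subject to x1 + x2 <= 1 (quadratic penalty)
    violation = max(0.0, x[0] + x[1] - 1.0)
    return (x[0] - 2.0) ** 2 + x[1] ** 2 + 100.0 * violation ** 2

pop = rng.standard_normal((30, 2))
for _ in range(300):
    children = pop + 0.1 * rng.standard_normal(pop.shape)   # Gaussian mutation
    merged = np.vstack([pop, children])
    scores = np.array([fitness(x) for x in merged])
    pop = merged[np.argsort(scores)[:30]]                   # keep the best 30

best = pop[0]
# The constrained optimum is near (1.5, -0.5) with objective value 0.5.
assert fitness(best) < 1.0
```

A deterministic local search applied to the best individuals, as in the hybrid scheme, would sharpen this crude population estimate into a high-accuracy solution.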
Affiliation(s)
- H Myung
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST) Kusung-dong, Taejon-shi, Republic of Korea
|
31
|
Cichocki A, Unbehauen R, Lendl M, Weinzierl K. Neural networks for linear inverse problems with incomplete data especially in applications to signal and image reconstruction. Neurocomputing 1995. [DOI: 10.1016/0925-2312(94)e0063-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
32
|
Zak S, Upatising V, Hui S. Solving linear programming problems with neural networks: a comparative study. ACTA ACUST UNITED AC 1995; 6:94-104. [DOI: 10.1109/72.363446] [Citation(s) in RCA: 88] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
33
|
Perfetti R. Optimization neural network for solving flow problems. IEEE TRANSACTIONS ON NEURAL NETWORKS 1995; 6:1287-1291. [PMID: 18263420 DOI: 10.1109/72.410376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This paper describes a neural network for solving flow problems, which are of interest in many areas of application such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph; the output units represent the branch variables. The network has linear order of complexity, is easily programmable, and is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem.
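The maximal-flow problem used in the paper's simulation example can be stated on a tiny instance as below. The sketch solves it with BFS augmenting paths (Edmonds-Karp), a standard digital algorithm; it illustrates the problem being solved, not the analog network itself, and the graph is invented for the example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow on a dense capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:      # no augmenting path left: done
            return total
        # Find the bottleneck capacity along the path, then augment
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# 4-node example: source 0, sink 3; max flow is 4 (cut {1->3, 2->3}).
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 4
```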
|
34
|
Cichocki A, Unbehauen R. Simplified neural networks for solving linear least squares and total least squares problems in real time. ACTA ACUST UNITED AC 1994; 5:910-23. [DOI: 10.1109/72.329687] [Citation(s) in RCA: 25] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
35
|
|
36
|
Wang J, Wu G. Recurrent neural networks for LU decomposition and Cholesky factorization. ACTA ACUST UNITED AC 1993. [DOI: 10.1016/0895-7177(93)90121-e] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
37
|
Shi P, Ward RK. OSNet: a neural network implementation of order statistic filters. IEEE TRANSACTIONS ON NEURAL NETWORKS 1993; 4:234-41. [PMID: 18267723 DOI: 10.1109/72.207611] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
A dedicated neural network model called OSNet, which finds the kth largest element in an array of real numbers, is proposed. Its overall processing time is constant irrespective of the number of elements in the array and equals four times the processing time of a single neuron. Networks of this kind may be used as building blocks for hardware implementation of order statistic filters. Examples of using OSNet to implement various order statistic filters and to sort are shown.
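The function OSNet computes in hardware is easy to state in software. The sketch below defines the kth-largest operation with standard library calls (the data are made up); OSNet's contribution is performing this in constant time regardless of array length, which the sequential version below does not.

```python
import heapq

def kth_largest(values, k):
    """Return the kth largest element (k = 1 gives the maximum)."""
    return heapq.nlargest(k, values)[-1]

data = [3.2, -1.0, 7.5, 0.4, 7.5, 2.1]
print(kth_largest(data, 1))  # → 7.5 (the maximum)
print(kth_largest(data, 3))  # → 3.2
```

A median filter, one of the order statistic filters mentioned, is the special case k = (n + 1) / 2 applied over a sliding window.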
Affiliation(s)
- P Shi
- Dept. of Electr. Eng., British Columbia Univ., Vancouver, BC
|
38
|
Neural networks for solving systems of linear equations. II. Minimax and least absolute value problems. ACTA ACUST UNITED AC 1992. [DOI: 10.1109/82.193316] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|