1. Verdonk C, Duffaud AM, Longin A, Bertrand M, Zagnoli F, Trousselard M, Canini F. Posture analysis in predicting fall-related injuries during French Navy Special Forces selection course using machine learning: a proof-of-concept study. BMJ Mil Health 2023:e002542. [PMID: 38124202] [DOI: 10.1136/military-2023-002542]
Abstract
INTRODUCTION Injuries caused by falls are the main cause of failure in the French Navy Special Forces selection course. In the present study, we hypothesized that assessing posture might help predict the risk of fall-related injury at the individual level. METHODS Before the start of the selection course, the postural signals of 99 male soldiers were recorded using static posturography while they were instructed to maintain balance with their eyes closed. The event to be predicted was a fall-related injury during the selection course that resulted in the definitive termination of participation. Following a machine learning methodology, we designed an artificial neural network model to predict the risk of fall-related injury from descriptors of the postural signal. RESULTS The neural network model predicted the occurrence of a fall-related injury event during the selection course from the selected posture descriptors with 69.9% accuracy (95% CI 69.3 to 70.5). The area under the curve was 0.731 (95% CI 0.725 to 0.738), the sensitivity was 56.8% (95% CI 55.2 to 58.4) and the specificity was 77.7% (95% CI 76.8 to 78.6). CONCLUSION If confirmed in a larger sample, these findings suggest that assessing posture using static posturography and machine learning-based analysis might help inform risk assessment of fall-related injury during military training, and could ultimately lead to novel programmes for personalised injury prevention in military populations.
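The reported accuracy, sensitivity and specificity are standard confusion-matrix quantities; as a quick illustration of how they relate (the labels below are synthetic, not the study's data):

```python
# Synthetic illustration of the reported metrics; y = 1 marks a
# fall-related injury, y = 0 no injury. Labels below are made up.

def confusion_counts(y_true, y_pred):
    """Return (tp, tn, fp, fn) for binary labels in {0, 1}."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    acc = (tp + tn) / len(y_true)   # fraction of all predictions correct
    sens = tp / (tp + fn)           # injured soldiers correctly flagged
    spec = tn / (tn + fp)           # uninjured soldiers correctly cleared
    return acc, sens, spec

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
acc, sens, spec = metrics(y_true, y_pred)
```

Computing the AUC would additionally require the network's continuous risk scores rather than thresholded predictions.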
Affiliation(s)
- Charles Verdonk
- French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- Laureate Institute for Brain Research, Tulsa, Oklahoma, USA
- VIFASOM, Université Paris Cité, Paris, France
- A M Duffaud
- French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- A Longin
- 125th Medical Unit of Lann Bihoué, Lorient, France
- M Bertrand
- 6th Special Medical Unit of Orléans-Bricy, Bricy, France
- F Zagnoli
- Department of Neurology, Clermont Tonnerre Military Hospital, Brest, France
- French Military Health Academy, Paris, France
- M Trousselard
- French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- French Military Health Academy, Paris, France
- F Canini
- French Armed Forces Biomedical Research Institute, Brétigny-sur-Orge, France
- French Military Health Academy, Paris, France
2. Nguyen S, Manry MT. Balanced Gradient Training of Feed Forward Networks. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10474-1]
3.
4. Zhu C, Wang Z, Gao D. New design goal of a classifier: Global and local structural risk minimization. Knowl Based Syst 2016. [DOI: 10.1016/j.knosys.2016.02.002]
5. Zhu C. Improved multi-kernel classification machine with Nyström approximation technique and Universum data. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.10.102]
6. Matrixized Learning Machine with Feature-Clustering Interpolation. Neural Process Lett 2015. [DOI: 10.1007/s11063-015-9458-x]
7.
8. Wang Z, Zhu C, Niu Z, Gao D, Feng X. Multi-kernel classification machine with reduced complexity. Knowl Based Syst 2014. [DOI: 10.1016/j.knosys.2014.04.012]
9.
Abstract
A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function. The min-max points are determined using the fuzzy min-max learning algorithm, an expansion-contraction process that can learn nonlinear class boundaries in a single pass through the data and provides the ability to incorporate new and refine existing classes without retraining. The use of a fuzzy set approach to pattern classification inherently provides a degree of membership information that is extremely useful in higher-level decision making. The relationship between fuzzy sets and pattern classification is described. The fuzzy min-max classifier neural network implementation is explained, the learning and recall algorithms are outlined, and several examples of operation demonstrate the strong qualities of this new neural network classifier.
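A minimal sketch of the hyperbox membership idea described above, assuming unit-interval features; the ramp shape and the sensitivity parameter gamma follow one common formulation of a fuzzy min-max membership function and are illustrative, not necessarily the paper's exact equation:

```python
# A fuzzy hyperbox is a min point V and a max point W: points inside the box
# get membership 1, and membership falls off linearly with the distance by
# which a point exceeds the box, at a rate set by gamma (made-up value here).

def hyperbox_membership(a, v, w, gamma=4.0):
    n = len(a)
    total = 0.0
    for ai, vi, wi in zip(a, v, w):
        # penalty for exceeding the max point, and for falling below the min point
        over = max(0.0, min(1.0, gamma * (ai - wi)))
        under = max(0.0, min(1.0, gamma * (vi - ai)))
        total += (1.0 - over) + (1.0 - under)
    return total / (2 * n)

inside = hyperbox_membership([0.5, 0.5], [0.2, 0.2], [0.8, 0.8])
outside = hyperbox_membership([0.9, 0.5], [0.2, 0.2], [0.8, 0.8])
```

A class is then the fuzzy union of its hyperboxes: the maximum membership over the boxes labelled with that class.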
10. Brouwer RK. A Discrete Fully Recurrent Network of Sigmoid Units for Associative Memory and Pattern Classification. Int J Pattern Recogn 2011. [DOI: 10.1142/s0218001402001836]
Abstract
Fully recurrent networks have proven themselves to be very useful as associative memories and as classifiers. However, they are generally based on units that have binary states. The effect of this is that data to be processed consisting of vectors in ℝ^n have to be converted to vectors in {0, 1}^m with m much larger than n, since binary encoding based on positional notation is not feasible. This implies a large increase in the number of components. This effect can be lessened by allowing more states for each unit in the network. This paper describes two effective learning algorithms for a network whose units take the dot product of the input with a weight vector, followed by a tanh transformation and a discretization transformation in the form of rounding or truncation. The units have states in {0, 0.1, 0.2, …, 0.9, 1} rather than in {0, 1} or {-1, 1}. The result is a much larger state space for a given number of units and size of connection matrix. Two convergent learning algorithms for training such a network to store fixed points or attractors are proposed. The network exhibits the properties desirable in an associative memory, such as limit cycles of length 1, attraction to the closest attractor and few transitions required to reach attractors. Since stored memories can represent prototypes of patterns, the network is useful for pattern classification: a pattern to be classified is entered, and its class is the class of the prototype to which it is attracted.
Affiliation(s)
- Roelof K. Brouwer
- Department of Computing Science, University College of the Cariboo (UCC), Kamloops, BC Canada V2C 5N3, Canada
11. Brouwer RK. A Fuzzy Recurrent Artificial Neural Network (FRANN) for Pattern Classification. Int J Uncertain Fuzz 2011. [DOI: 10.1142/s021848850000037x]
Abstract
This paper proposes a recurrent neural network of fuzzy units, which may be used for approximating a hetero-associative mapping and also for pattern classification. Since classification is concerned with set membership, and objects generally belong to sets to various degrees, a fuzzy network seems natural for classification. In the network proposed here, each fuzzy unit defines a fuzzy set. The fuzzy unit determines the degree to which its input vector lies in that fuzzy set. The fuzzy unit may be compared to a perceptron, in which the input vector is compared to the weight vector associated with the unit by taking the dot product. The resulting membership value of the fuzzy unit is compared to a threshold. Training of a fuzzy unit is based on an algorithm for solving linear inequalities similar to the method used for Ho-Kashyap recording. Training of the whole network is done by training each unit separately. The training algorithm is tested on representations of letters of the alphabet and their noisy versions. The results obtained by the simulation are very promising.
Affiliation(s)
- Roelof K. Brouwer
- Department of Computing Science, University College of the Cariboo, Kamloops, British Columbia, Canada
12.
13. Diene O, Bhaya A. Perceptron training algorithms designed using discrete-time control Liapunov functions. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2009.03.007]
14. Diamantaras KI, Strintzis MG. Neural classifiers using one-time updating. IEEE Trans Neural Netw 2008;9:436-47. [PMID: 18252467] [DOI: 10.1109/72.668885]
Abstract
The linear threshold element (LTE), or perceptron, is a linear classifier with limited capabilities due to the problems arising when the input pattern set is linearly nonseparable. Assuming that the patterns are presented in a sequential fashion, we derive a theory for the detection of linear nonseparability as soon as it appears in the pattern set. This theory is based on the precise determination of the solution region in the weight space with the help of a special set of vectors. For this region, called the solution cone, we present a recursive computation procedure which allows immediate detection of nonseparability. The separability-violating patterns may be skipped so that, at the end, we derive a totally separable subset of the original pattern set along with its solution cone. The intriguing aspect of this algorithm is that it can be directly cast into a simple neural-network implementation. In this model the synaptic weights are committed (they are updated only once, and the only change that may happen after that is their destruction). This bears resemblance to the behavior of biological neural networks, and it is a feature unlike those of most other artificial neural techniques. Finally, by combining many such neural models we develop a learning procedure capable of separating convex classes.
Affiliation(s)
- K I Diamantaras
- Department of Applied Informatics, University of Macedonia, 540 06 Thessaloniki, Greece
15. Lee DL. Pattern sequence recognition using a time-varying Hopfield network. IEEE Trans Neural Netw 2008;13:330-42. [PMID: 18244435] [DOI: 10.1109/72.991419]
Abstract
This paper presents a novel continuous-time Hopfield-type network which is effective for temporal sequence recognition. The fundamental problem of recalling pattern sequences by neural networks is first reviewed. Since it is difficult to implement a desired flow vector field distribution by using conventional matrix encoding scheme, a time-varying Hopfield model (TVHM) is proposed. The weight matrix of the TVHM is constructed in such a way that its auto-correlation and cross-correlation parts are encoded from two different sets of patterns. With this mechanism, flow vectors between any two adjacent stored patterns are of the same directions. Moreover, the flow vector field distribution around a stored pattern can be modulated by the time variable. Then, theoretical results regarding the radii of attraction and the recalling dynamics of the TVHM are presented. The proposed approach is different from the existing methods because neither synchronous dynamics nor interpolated training patterns are required. A way of increasing the storage capacity of the TVHM is proposed. Finally, experimental results are presented to illustrate the validity, capacity, recall capability, and the applications of the proposed model.
Affiliation(s)
- Donq-Liang Lee
- Dept. of Electron. Eng., Ta-Hwa Inst. of Technol., Hsin-Chu
16. Horcholle-Bossavit G, Quenet B, Foucart O. Oscillation and coding in a formal neural network considered as a guide for plausible simulations of the insect olfactory system. Biosystems 2007;89:244-56. [PMID: 17316971] [DOI: 10.1016/j.biosystems.2006.04.022]
Abstract
For the analysis of coding mechanisms in the insect olfactory system, a fully connected network of synchronously updated McCulloch and Pitts neurons (MC-P type) was developed [Quenet, B., Horn, D., 2003. The dynamic neural filter: a binary model of spatio-temporal coding. Neural Comput. 15 (2), 309-329]. Considering the update time as an intrinsic clock, this "Dynamic Neural Filter" (DNF), which maps regions of input space into spatio-temporal sequences of neuronal activity, is able to produce exact binary codes extracted from the synchronized activities recorded at the level of projection neurons (PN) in the locust antennal lobe (AL) in response to different odors [Wehr, M., Laurent, G., 1996. Odor encoding by temporal sequences of firing in oscillating neural assemblies. Nature 384, 162-166]. Here, in a first step, we separate the populations of PN and local inhibitory neurons (LN) and use the DNF as a guide for simulations based on biologically plausible neurons (Hodgkin-Huxley: H-H type). We show that a parsimonious network of 10 H-H neurons generates action potentials whose timing represents the required codes. In a second step, we construct a new type of DNF in order to study the population dynamics when different delays are taken into account. We find synaptic matrices which lead to the emergence of both robust oscillations and spatio-temporal patterns, using a formal criterion based on a Normalized Euclidian Distance (NED) to measure the use of the temporal dimension as a coding dimension by the DNF. Like biological PN, the activity of excitatory neurons in the model can be phase-locked to different cycles of oscillations reminiscent of the local field potential (LFP), and nevertheless exhibit dynamic behavior complex enough to be the basis of spatio-temporal codes.
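The synchronously updated McCulloch-Pitts dynamics underlying the DNF can be sketched in a few lines; the connection matrix W and external input R below are illustrative, not taken from the paper:

```python
# Synchronous binary dynamics x(t+1) = H(W x(t) + R), where H is the
# Heaviside step: the state sequence itself is the spatio-temporal code.

import numpy as np

def step(x):
    return (x > 0).astype(int)

def run_dnf(W, R, x0, n_steps):
    """Iterate the synchronous update and return the full state sequence."""
    states = [np.asarray(x0, dtype=int)]
    for _ in range(n_steps):
        states.append(step(W @ states[-1] + R))
    return states

W = np.array([[0.0, -1.0], [1.0, 0.0]])   # illustrative connection matrix
R = np.array([0.5, -0.5])                 # constant external input (the "odor")
seq = run_dnf(W, R, x0=[0, 0], n_steps=3)
```

Different inputs R select different trajectories through the binary state space, which is how the filter maps input regions to temporal codes.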
Affiliation(s)
- Ginette Horcholle-Bossavit
- UMR CNRS 7084, Ecole Supérieure de Physique et de Chimie Industrielle de la Ville de Paris, 10 rue Vauquelin, 75005 Paris, France.
17. Classification and feature selection by a self-organizing neural network. 2006. [DOI: 10.1007/bfb0098223]
18.
19.
20. Discrimination. Neural Netw 2005. [DOI: 10.1007/3-540-28847-3_6]
21. Lee DL, Chuang TC. Designing Asymmetric Hopfield-Type Associative Memory With Higher Order Hamming Stability. IEEE Trans Neural Netw 2005;16:1464-76. [PMID: 16342488] [DOI: 10.1109/tnn.2005.852863]
Abstract
The problem of optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms is considered. It is found that most existing methods treated the design problem as either 1) finding optimal hyperplanes according to the normal distance from the prototype vectors to the hyperplane surface or 2) obtaining the weight matrix W = [w_ij] by solving a constrained optimization problem. In this paper, we show that since the state space of the HAM consists only of bipolar patterns, i.e., V = (v_1, v_2, ..., v_N)^T ∈ {-1, +1}^N, the basins of attraction around each prototype (training) vector should be expanded using the Hamming distance measure. For this reason, the design problem is considered from a different point of view. Our idea is to systematically increase the size of the training set according to the desired basin of attraction around each prototype vector. We name this concept higher order Hamming stability and show that the conventional minimum-overlap algorithm can be modified to incorporate it. Experimental results show that both the recall capability and the number of spurious memories are improved by the proposed method. Moreover, it is well known that setting all self-connections w_ii to zero has the effect of reducing the number of spurious memories in state space. From the experimental results, we find that the basin width around each prototype vector can be enlarged by allowing nonzero diagonal elements when learning the weight matrix W. If the magnitude of w_ii is small for all i, then the condition w_ii = 0 for all i can be relaxed without seriously affecting the number of spurious memories in the state space. Therefore, the method proposed in this paper can be used to increase the basin width around each prototype vector at the cost of slightly increasing the number of spurious memories in the state space.
Affiliation(s)
- Donq-Liang Lee
- Department of Computer Science and Information Engineering, Vanung University, Chung-Li 32056, Taiwan, ROC.
22. Yen CW, Young CN, Nagurka ML. A false acceptance error controlling method for hyperspherical classifiers. Neurocomputing 2004. [DOI: 10.1016/j.neucom.2003.10.008]
23. Leski JM. An ε-margin nonlinear classifier based on fuzzy if-then rules. IEEE Trans Syst Man Cybern B 2004;34:68-76. [PMID: 15369052] [DOI: 10.1109/tsmcb.2002.805811]
Abstract
This paper introduces new classifier design methods based on a modification of the classical Ho-Kashyap procedure. First, it proposes a method to design a linear classifier using the absolute loss rather than the squared loss, which results in a better approximation of the misclassification error and in robustness to outliers. Additionally, easy control of the generalization ability is obtained by minimization of the Vapnik-Chervonenkis dimension. Next, an extension to a nonlinear classifier by an ensemble averaging technique is presented. Each classifier is represented by a fuzzy if-then rule in the Takagi-Sugeno-Kang form. Two approaches to parameter estimation are used: local, where the parameters of each if-then rule are determined independently, and global, where all rules are obtained simultaneously. Finally, examples are given to demonstrate the validity of the introduced methods.
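For context, the classical (squared-loss) Ho-Kashyap procedure that the paper modifies can be sketched as follows; the learning rate and iteration budget are illustrative choices:

```python
# Classical Ho-Kashyap: find weights w and a positive margin vector b with
# Y w ≈ b, where each row of Y is an augmented pattern (bias component
# prepended, class-2 rows negated). b is only ever increased, and w is the
# least-squares solution for the current b.

import numpy as np

def ho_kashyap(Y, rho=0.5, n_iter=200):
    Y = np.asarray(Y, float)
    Y_pinv = np.linalg.pinv(Y)
    b = np.ones(Y.shape[0])
    w = Y_pinv @ b
    for _ in range(n_iter):
        e = Y @ w - b
        b = b + rho * (e + np.abs(e))   # increase b only where the error is positive
        w = Y_pinv @ b
        if np.all(Y @ w > 0):           # all margins positive: classes separated
            break
    return w, b

# two linearly separable 1-D classes, augmented with a bias and class-2 negated
Y = np.array([[1.0, 2.0], [1.0, 3.0], [-1.0, 1.0], [-1.0, -0.5]])
w, b = ho_kashyap(Y)
```

The squared loss enters through the pseudoinverse step; the paper's modification replaces it with the absolute loss for robustness.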
Affiliation(s)
- Jacek M Leski
- Institute of Electronics, Silesian University of Technology, 44-101 Gliwice, Poland.
24.
25. Quenet B, Dubois R, Sirapian S, Dreyfus G, Horn D. Modelling spatiotemporal olfactory data in two steps: from binary to Hodgkin-Huxley neurones. Biosystems 2002;67:203-11. [PMID: 12459300] [DOI: 10.1016/s0303-2647(02)00078-3]
Abstract
Network models of synchronously updated McCulloch-Pitts neurones exhibit complex spatiotemporal patterns that are similar to activities of biological neurones in phase with a periodic local field potential, such as those observed experimentally by Wehr and Laurent (1996, Nature 384, 162-166) in the locust olfactory pathway. Modelling biological neural nets with networks of simple formal units makes the dynamics of the model analytically tractable. It is thus possible to determine the constraints that must be satisfied by its connection matrix in order to make its neurones exhibit a given sequence of activity (see, for instance, Quenet et al., 2001, Neurocomputing 38-40, 831-836). In the present paper, we address the following question: how can one construct a formal network of Hodgkin-Huxley (HH) type neurones that reproduces experimentally observed neuronal codes? A two-step strategy is suggested in the present paper: first, a simple network of binary units is designed, whose activity reproduces the binary experimental codes; second, this model is used as a guide to design a network of more realistic formal HH neurones. We show that such a strategy is indeed fruitful: it allowed us to design a model that reproduces the Wehr-Laurent olfactory codes, and to investigate the robustness of these codes to synaptic noise.
Affiliation(s)
- Brigitte Quenet
- Laboratoire d'Electronique, Ecole Supérieure de Physique et de Chimie Industrielles de la Ville de Paris, 10 rue Vauquelin, 75005 Paris, France.
26. Brouwer RK. Growing of a Fuzzy Recurrent Artificial Neural Network (FRANN) for pattern classification. Int J Neural Syst 1999;9:335-50. [PMID: 10586991] [DOI: 10.1142/s0129065799000320]
Abstract
This paper describes a method for growing a recurrent neural network of fuzzy threshold units for the classification of feature vectors. Fuzzy networks seem natural for performing classification, since classification is concerned with set membership and objects generally belong to sets to various degrees. A fuzzy unit in the architecture proposed here determines the degree to which the input vector lies in the fuzzy set associated with that unit. This is in contrast to perceptrons, which determine the correlation between the input vector and a weight vector. The resulting membership value, in the case of the fuzzy unit, is compared with a threshold, which is interpreted as a membership value. Training of a fuzzy unit is based on an algorithm for linear inequalities similar to Ho-Kashyap recording. These fuzzy threshold units are fully connected in a recurrent network, which grows as it is trained. The advantages of the network and its training method are: (1) the network grows to the required size, which is generally much smaller than the size that would be obtained otherwise, implying better generalization, smaller storage requirements and fewer calculations during classification; (2) the training time is extremely short; (3) recurrent networks such as this one are generally readily implemented in hardware; (4) classification accuracy obtained on several standard data sets is better than that obtained by the majority of other standard methods; and (5) the use of fuzzy logic is very intuitive, since class membership is generally fuzzy.
Affiliation(s)
- R K Brouwer
- Department of Computing Science, University College of the Cariboo (UCC), Kamloops BC Canada.
27.
28.
29. Muselli M. On convergence properties of pocket algorithm. IEEE Trans Neural Netw 1997;8:623-9. [PMID: 18255665] [DOI: 10.1109/72.572101]
Abstract
The problem of finding optimal weights for a single threshold neuron starting from a general training set is considered. Among the variety of possible learning techniques, the pocket algorithm has a proper convergence theorem which asserts its optimality. However, the original proof ensures the asymptotic achievement of an optimal weight vector only if the inputs in the training set are integer or rational. This limitation is overcome in this paper by introducing a different approach that leads to the general result. Furthermore, a modified version of the learning method considered, called pocket algorithm with ratchet, is shown to obtain an optimal configuration within a finite number of iterations independently of the given training set.
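The pocket strategy, including the ratchet check, can be sketched as follows; the data, update schedule and iteration budget are illustrative:

```python
# Pocket algorithm with ratchet: run ordinary perceptron updates on randomly
# drawn samples, but keep "in the pocket" the weight vector that classifies
# the most training samples correctly so far. The ratchet only swaps the
# pocket after explicitly re-counting correct classifications.

import random

def n_correct(w, data):
    return sum(1 for x, y in data
               if (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y > 0))

def pocket(data, n_iter=500, seed=0):
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    pocket_w, pocket_score = list(w), n_correct(w, data)
    for _ in range(n_iter):
        x, y = rng.choice(data)
        if (sum(wi * xi for wi, xi in zip(w, x)) > 0) != (y > 0):
            w = [wi + y * xi for wi, xi in zip(w, x)]   # perceptron update
            score = n_correct(w, data)                  # ratchet check
            if score > pocket_score:
                pocket_w, pocket_score = list(w), score
    return pocket_w, pocket_score

# XOR-like data (bias input first): not linearly separable, so the pocket
# holds the best achievable linear rule rather than a perfect one.
data = [((1, 0, 0), 1), ((1, 1, 1), 1), ((1, 0, 1), -1), ((1, 1, 0), -1)]
w, score = pocket(data)
```

On nonseparable data like this, plain perceptron updates cycle forever; the pocket vector is what makes the procedure yield a usable answer.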
Affiliation(s)
- M Muselli
- Istituto per i Circuiti Elettronici, CNR, Genova
30. Liu D, Lu Z. A new synthesis approach for feedback neural networks based on the perceptron training algorithm. IEEE Trans Neural Netw 1997;8:1468-82. [PMID: 18255748] [DOI: 10.1109/72.641469]
Abstract
In this paper, a new synthesis approach is developed for associative memories based on the perceptron training algorithm. The design (synthesis) problem of feedback neural networks for associative memories is formulated as a set of linear inequalities such that the use of perceptron training is evident. The perceptron training in the synthesis algorithms is guaranteed to converge for the design of neural networks without any constraints on the connection matrix. For neural networks with constraints on the diagonal elements of the connection matrix, results concerning the properties of such networks and concerning the existence of such a network design are established. For neural networks with sparsity and/or symmetry constraints on the connection matrix, design algorithms are presented. Applications of the present synthesis approach to the design of associative memories realized by means of other feedback neural network models are studied. To demonstrate the applicability of the present results and to compare the present synthesis approach with existing design methods, specific examples are considered.
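The reduction of the design problem to linear inequalities solvable by perceptron training can be sketched as follows; this is a minimal version without the paper's diagonal, sparsity or symmetry constraints, and the margin and iteration budget are made-up choices:

```python
# Making stored bipolar patterns fixed points of x(t+1) = sign(W x) reduces,
# row by row, to the linear inequalities v_i * (W_i · v) > 0 for every stored
# pattern v. Each row of W can therefore be trained with plain perceptron
# updates on its own set of inequalities.

import numpy as np

def train_row(patterns, i, margin=1.0, n_epochs=100, lr=1.0):
    dim = patterns.shape[1]
    w = np.zeros(dim)
    for _ in range(n_epochs):
        done = True
        for v in patterns:                 # inequality: v[i] * (w @ v) >= margin
            if v[i] * (w @ v) < margin:
                w = w + lr * v[i] * v      # perceptron update on the violated one
                done = False
        if done:
            break
    return w

def synthesize(patterns):
    patterns = np.asarray(patterns, float)
    return np.stack([train_row(patterns, i) for i in range(patterns.shape[1])])

patterns = [[1, -1, 1, -1], [1, 1, -1, -1]]
W = synthesize(patterns)

# stored patterns are fixed points of one synchronous update
for v in np.asarray(patterns, float):
    assert np.array_equal(np.sign(W @ v), v)
```

The margin plays the role of a stability requirement: larger margins push the patterns deeper into their basins of attraction.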
Affiliation(s)
- D Liu
- Dept. of Electr. Eng. and Comput. Sci., Stevens Inst. of Technol., Hoboken, NJ
31.
Abstract
In hybrid learning schemes a layer of unsupervised learning is followed by supervised learning. In this situation a connection between two unsupervised learning algorithms, principal component analysis and decorrelation, and a supervised learning algorithm, associative memory, is shown. When associative memory is preceded by principal component analysis or decorrelation it is possible to take advantage of the lack of correlation among inputs to associative memory to show that correlation matrix memory is a least squares solution to the supervised learning problem.
Affiliation(s)
- Ronald Michaels
- Department of Engineering Science and Mechanics, The University of Tennessee, Knoxville, TN 37995-2030 USA
32. Chang P, Juang B. Discriminative training of dynamic programming based speech recognizers. IEEE Trans Speech Audio Process 1993. [DOI: 10.1109/89.222873]
33.
34.
35. Hassoun MH, Song J. Adaptive Ho-Kashyap rules for perceptron training. IEEE Trans Neural Netw 1992;3:51-61. [PMID: 18276405] [DOI: 10.1109/72.105417]
Abstract
Three adaptive versions of the Ho-Kashyap perceptron training algorithm are derived based on gradient descent strategies. These adaptive Ho-Kashyap (AHK) training rules are comparable in their complexity to the LMS and perceptron training rules and are capable of adaptively forming linear discriminant surfaces that guarantee linear separability and of positioning such surfaces for maximal classification robustness. In particular, a derived version called AHK II is capable of adaptively identifying critical input vectors lying close to class boundaries in linearly separable problems. The authors extend this algorithm as AHK III, which adds the capability of fast convergence to linear discriminant surfaces which are good approximations for nonlinearly separable problems. This is achieved by a simple built-in unsupervised strategy which allows for the adaptive grading and discarding of input vectors causing nonseparability. Performance comparisons with LMS and perceptron training are presented.
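The exact AHK I-III updates are derived in the paper; as a rough sketch of the underlying idea only (sample-by-sample gradient descent on ||Zw - b||² while keeping the per-sample margins b_k positive), one might write the following, where the learning rates and margin floor are invented here:

```python
# Incremental Ho-Kashyap-style training: each row of Z is an augmented
# pattern with class-2 rows negated. Both the weights w and the per-sample
# margin targets b_k are adapted online; flooring b_k at b_min keeps every
# target margin strictly positive. This is an illustration of the gradient
# descent idea, not the paper's exact AHK I/II/III rules.

import numpy as np

def adaptive_hk(Z, rho_w=0.1, rho_b=0.1, b_min=0.1, n_epochs=100):
    Z = np.asarray(Z, float)
    n, d = Z.shape
    w = np.zeros(d)
    b = np.ones(n)
    for _ in range(n_epochs):
        for k in range(n):
            e = Z[k] @ w - b[k]
            b[k] = max(b_min, b[k] + rho_b * e)        # margin tracks Zw, stays positive
            w = w + rho_w * (b[k] - Z[k] @ w) * Z[k]   # LMS step toward the margin
    return w, b

# separable toy set: augmented patterns with class-2 rows negated
Z = [[1.0, 2.0], [1.0, 3.0], [-1.0, 1.0], [-1.0, -0.5]]
w, b = adaptive_hk(Z)
```

In the paper's AHK II, margins that would need to shrink below the floor identify the critical, near-boundary input vectors; AHK III additionally discards the vectors responsible for nonseparability.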
Affiliation(s)
- M H Hassoun
- Dept. of Electr. and Comput. Eng., Wayne State Univ., Detroit, MI
36.
Abstract
We present a local perceptron-learning rule that either converges to a solution, or establishes linear nonseparability. We prove that when no solution exists, the algorithm detects this in a finite time (number of learning steps). This time is polynomial in typical cases and exponential in the worst case, when the set of patterns is nonstrictly linearly separable. The algorithm is local and has no arbitrary parameters.
Affiliation(s)
- Dan Nabutovsky
- Department of Electronics, Weizmann Institute of Science, Rehovot 76100, Israel
- Eytan Domany
- Department of Theoretical Physics, Oxford University, Oxford, OX1 3NP, United Kingdom
37. A generalized discriminant hyperplane pairs approach to pattern classification. Pattern Recognit Lett 1991. [DOI: 10.1016/0167-8655(91)90058-t]
38. Fujita O. A method for designing the internal representation of neural networks and its application to network synthesis. Neural Netw 1991. [DOI: 10.1016/0893-6080(91)90061-9]
39. Dynamic Associative Memories. 1991. [DOI: 10.1016/b978-0-444-88740-5.50016-5]
40. Xu Y. Combined associative memories. Opt Lett 1990;15:1091-1093. [PMID: 19771007] [DOI: 10.1364/ol.15.001091]
Abstract
One of the major problems with conventional associative memory models is that there exists a set of stable false and oscillatory states in addition to the true memories in the models. To solve this problem, a combined associative memory model is proposed. Digital simulation results show that the model is capable of picking out the false memories generated by conventional models and is more reliable in comparison with the conventional models.
41. Telfer B, Casasent DP. Ho-Kashyap optical associative processors. Appl Opt 1990;29:1191-1202. [PMID: 20562978] [DOI: 10.1364/ao.29.001191]
Abstract
A Ho-Kashyap (H-K) associative processor (AP) is shown to have a larger storage capacity than the pseudoinverse and correlation APs and to accurately store linearly dependent key vectors. Prior APs have not demonstrated good performance on linearly dependent key vectors. The AP is attractive for optical implementation. A new robust H-K AP is proposed to improve noise performance. These results are demonstrated both theoretically and by Monte Carlo simulation. The H-K AP is also shown to outperform the pseudoinverse AP in an aircraft recognition case study. A technique is developed to indicate the least reliable output vector elements and a new AP error correcting synthesis technique is advanced.
42. Castagliola P, Dubuisson B. Two classes linear discrimination: a min-max approach. Pattern Recognit Lett 1989. [DOI: 10.1016/0167-8655(89)90030-5]
43.
44. Eichmann G, Caulfield HJ. Optical learning (inference) machines. Appl Opt 1985;24:2051. [PMID: 18223836] [DOI: 10.1364/ao.24.002051]
45.
46.
47. Herberts P, Almström C, Kadefors R, Lawrence PD. Hand prosthesis control via myoelectric patterns. Acta Orthop Scand 1973;44:389-409. [PMID: 4771275] [DOI: 10.3109/17453677308989075]
48. Vagnucci AH, Li CC. Pattern classification of renal electrolyte cycles in primary aldosteronism. Comput Biomed Res 1971;4:291-304. [PMID: 4935056] [DOI: 10.1016/0010-4809(71)90033-4]
49. Calvert TW, Young TY. Randomly Generated Nonlinear Transformations for Pattern Recognition. IEEE Trans Syst Sci Cybern 1969. [DOI: 10.1109/tssc.1969.300218]
50. Ho YC, Kashyap RL. A Class of Iterative Procedures for Linear Inequalities. SIAM J Control 1966. [DOI: 10.1137/0304010]