1201
van Ooyen A, Nienhuis B. Pattern recognition in the neocognitron is improved by neuronal adaptation. Biological Cybernetics 1993; 70:47-53. [PMID: 8312398] [DOI: 10.1007/bf00202565] [Citations in RCA: 3]
Abstract
We demonstrate that equipping the neurons of Fukushima's neocognitron with adaptation, the phenomenon whereby a neuron decreases its activity when repeatedly stimulated, markedly improves the pattern-discriminatory power of the network. By means of adaptation, circuits for extracting discriminating features develop preferentially. In the original neocognitron, by contrast, features shared by different patterns are preferentially learned, because the connections required for extracting them are reinforced more frequently.
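The adaptation mechanism summarized above, a neuron reducing its response when repeatedly stimulated, can be sketched as a simple activity trace that divisively lowers a unit's gain (an illustrative sketch only; the decay constant and gain formula are assumptions, not the authors' equations):

```python
def respond(stimulus, trace, decay=0.9, strength=0.5):
    """Return the adapted response to a stimulus plus the updated
    adaptation trace. Repeated stimulation raises the trace, which
    divisively suppresses the response (gain control)."""
    gain = 1.0 / (1.0 + strength * trace)   # higher trace -> lower gain
    response = gain * stimulus
    trace = decay * trace + stimulus        # accumulate recent activity
    return response, trace

trace = 0.0
responses = []
for _ in range(10):                          # the same stimulus, 10 times
    r, trace = respond(1.0, trace)
    responses.append(r)

# the response to a repeated stimulus declines monotonically
assert all(a >= b for a, b in zip(responses, responses[1:]))
```

Under this kind of rule, frequently shared features habituate, leaving the rarer discriminating features relatively more effective, which is the qualitative effect the abstract reports.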
Affiliation(s)
- A van Ooyen, Netherlands Institute for Brain Research, Amsterdam

1203
Chang HJ, Ghosh J. Pattern association and retrieval in a continuous neural system. Biological Cybernetics 1993; 69:77-86. [PMID: 8334192] [DOI: 10.1007/bf00201410] [Citations in RCA: 0]
Abstract
This paper studies the behavior of a large body of neurons in the continuum limit. A mathematical characterization of such systems is obtained by approximating the inverse input-output nonlinearity of a cell (or an assembly of cells) by three adjustable linearized sections. The associative spatio-temporal patterns for storage in the neural system are obtained by using approaches analogous to solving space-time field equations in physics. A noise-reducing equation is also derived from this neural model. In addition, conditions that make a noisy pattern retrievable are identified. Based on these analyses, a visual cortex model is proposed and an exact characterization of the patterns that are storable in this cortex is obtained. Furthermore, we show that this model achieves pattern association that is invariant to scaling, translation, rotation and mirror-reflection.
Affiliation(s)
- H J Chang, Department of Electrical and Computer Engineering, University of Texas, Austin 78712

1204
Fukushima K, Imagawa T. Recognition and segmentation of connected characters with selective attention. Neural Netw 1993. [DOI: 10.1016/s0893-6080(05)80071-1] [Citations in RCA: 12]

1207
Mumford M, Andes D, Kern L. The Mod 2 Neurocomputer system design. IEEE Trans Neural Netw 1992; 3:423-33. [DOI: 10.1109/72.129415] [Citations in RCA: 5]

1208
Casasent D, Botha E. Optical correlator production system neural net. Applied Optics 1992; 31:1030-1040. [PMID: 20720718] [DOI: 10.1364/ao.31.001030] [Citations in RCA: 4]
Abstract
A new neural net is described that can easily and cost-effectively accommodate multiple objects in the field of view in parallel. The use of a correlator achieves shift invariance and accommodates multiple objects in parallel. Distortion-invariant filters provide aspect invariance. Symbolic encoding, the use of generic object parts, and a production system neural net allow large-class problems to be addressed. Optical laboratory data on the production system inputs are provided and emphasized. Test data assume binary inputs, although analog (probability) input neurons are possible.

1209
Otto I, Grandguillaume P, Boutkhil L, Burnod Y, Guigon E. Direct and Indirect Cooperation between Temporal and Parietal Networks for Invariant Visual Recognition. J Cogn Neurosci 1992; 4:35-57. [DOI: 10.1162/jocn.1992.4.1.35] [Citations in RCA: 17]
Abstract
A new type of biologically inspired multilayered network is proposed to model the properties of the primate visual system with respect to invariant visual recognition (IVR). This model is based on 10 major neurobiological and psychological constraints. The first five constraints shape the architecture and properties of the network.
1. The network model has a Y-like double-branched multilayered architecture, with one input (the retina) and two parallel outputs, the “What” and the “Where,” which model, respectively, the temporal pathway, specialized for “object” identification, and the parietal pathway specialized for “spatial” localization.
2. Four processing layers are sufficient to model the main functional steps of the primate visual system that transform the retinal information into prototypes (object-centered reference frame) in the “What” branch and into an oculomotor command in the “Where” branch.
3. The distribution of receptive field sizes within and between the two functional pathways provides an appropriate tradeoff between discrimination and invariant recognition capabilities.
4. The two outputs are represented by a population coding: the ocular command is computed as a population vector in the “Where” branch and the prototypes are coded in a “semidistributed” way in the “What” branch. In the intermediate associative steps, processing units learn to associate prototypes (through feedback connections) to component features (through feedforward ones).
5. The basic processing units of the network do not model single cells but model the local neuronal circuits that combine different information flows organized in separate cortical layers.
Such a biologically constrained model shows shift-invariant and size-invariant capabilities that resemble those of humans (psychological constraints):
6. During the Learning session, a set of patterns (26 capital letters and 2 geometric figures) is presented to the network: a single presentation of each pattern in one position (at the center) and with one size is sufficient to learn the corresponding prototypes (internal representations).
These patterns are then presented in widely varying new sizes and positions during the Recognition session:
7. The “What” branch of the network succeeds in immediate recognition for patterns presented in the central zone of the retina with the learned size.
8. The recognition by the “What” branch is resistant to changes in size within a limited range of variation related to the distribution of receptive field (RF) sizes in the successive processing steps of this pathway.
9. Even when ocular movements are not allowed, the recognition capabilities of the “What” branch are unaffected by changing positions around the learned one. This significant shift-invariance of the “What” branch is also related to the distribution of RF sizes.
10. When varying both sizes and locations, the “What” and the “Where” branches cooperate for recognition: the location coding in the “Where” branch can command, under the control of the “What” branch, an ocular movement efficient to reset peripheral patterns toward the central zone of the retina until successful recognition.
This model results in predictions about anatomical connections and physiological interactions between temporal and parietal cortices.

1210
White B, Elmasry M. The digi-neocognitron: a digital neocognitron neural network model for VLSI. IEEE Trans Neural Netw 1992; 3:73-85. [DOI: 10.1109/72.105419] [Citations in RCA: 34]

1211
Földiák P. Learning invariance from transformation sequences. Neural Comput 1991; 3:194-200.
Abstract
The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.
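The local learning rule summarized above can be illustrated with a trace-based Hebbian sketch: the postsynaptic term is a temporal average, so successive frames of a transforming stimulus all strengthen connections onto the same unit (a hypothetical illustration; the constants and the assumption that one unit wins for the whole sequence are mine, not the paper's):

```python
import numpy as np

def trace_learn(sequence, lr=0.2, decay=0.8):
    """Hebbian learning with a postsynaptic TRACE: because the trace
    persists across successive frames of a transforming stimulus,
    every frame of the sequence is wired to the same output unit."""
    n = sequence.shape[1]
    w = np.zeros(n)
    trace = 0.0
    for frame in sequence:
        post = 1.0                        # assume this unit wins for the sequence
        trace = decay * trace + (1 - decay) * post
        w += lr * trace * frame           # presynaptic input * trace of post
    return w

# a bar shifting across a 5-pixel retina, one position per frame
frames = np.eye(5)
w = trace_learn(frames)
# after training, the unit responds at every position of the bar
```

The unit thereby generalizes across the transformation it was exposed to, here a shift in retinal position, which is the invariance property the abstract describes.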
Affiliation(s)
- Peter Földiák, Physiological Laboratory, University of Cambridge, Downing Street, Cambridge CB2 3EG, U.K.

1212
Traven H. A neural network approach to statistical pattern classification by 'semiparametric' estimation of probability density functions. IEEE Trans Neural Netw 1991; 2:366-77. [DOI: 10.1109/72.97913] [Citations in RCA: 138]

1213
Fukushima K, Wake N. Handwritten alphanumeric character recognition by the neocognitron. IEEE Trans Neural Netw 1991; 2:355-65. [DOI: 10.1109/72.97912] [Citations in RCA: 162]

1214
López-Aligué FJ, Acevedo-Sotoca I, Jaramillo-Morán MA. The fuzziness of fuzzy partitions. Pattern Recognit Lett 1991. [DOI: 10.1016/0167-8655(91)90409-f] [Citations in RCA: 2]

1216
Alexandre F, Guyot F, Haton JP, Burnod Y. The cortical column: A new processing unit for multilayered networks. Neural Netw 1991. [DOI: 10.1016/0893-6080(91)90027-3] [Citations in RCA: 9]

1217
Affiliation(s)
- S Hampson, Department of Information and Computer Science, University of California, Irvine 92717

1218
Lee S, Kil RM. A Gaussian potential function network with hierarchically self-organizing learning. Neural Netw 1991. [DOI: 10.1016/0893-6080(91)90005-p] [Citations in RCA: 79]

1219
Abstract
Selective visual attention serializes the processing of stimulus data to make efficient use of limited processing resources in the human visual system. This paper describes a connectionist network that exhibits a variety of attentional phenomena reported by Treisman, Wolford, Duncan, and others. As demonstrated in several simulations, a hierarchical, multiscale network that uses feature arrays with strong lateral inhibitory connections provides responses in agreement with a number of prominent behaviors associated with visual attention. The overall network design is consistent with a range of data reported in the psychological literature, and with neurophysiological characteristics of primate vision.
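The serializing effect of strong lateral inhibition in a feature array can be illustrated with a small winner-take-all sketch (hypothetical parameters and update rule; not the paper's network):

```python
import numpy as np

def lateral_inhibition(activity, inhibition=0.5, steps=20):
    """Iteratively let every unit suppress its competitors.
    With strong inhibition, only the most active unit survives,
    focusing processing on one location at a time."""
    a = np.asarray(activity, dtype=float)
    for _ in range(steps):
        total = a.sum()
        # each unit receives inhibition from the summed activity of the others
        a = np.maximum(0.0, a - inhibition * (total - a))
    return a

a = lateral_inhibition([0.9, 1.0, 0.8])
# only the strongest input remains active after the competition
```

Attention can then shift by suppressing the winner and rerunning the competition, which is one standard way such models serialize stimulus processing.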

1220
Wang CH, Jenkins BK. Subtracting incoherent optical neuron model: analysis, experiment, and applications. Applied Optics 1990; 29:2171-2186. [PMID: 20563146] [DOI: 10.1364/ao.29.002171] [Citations in RCA: 4]
Abstract
To fully use the advantages of optics in optical neural networks, an incoherent optical neuron (ION) model is proposed. The main purpose of this model is to provide for the requisite subtraction of signals without the phase sensitivity of a fully coherent system and without the cumbrance of photon-electron conversion and electronic subtraction. The ION model can subtract inhibitory from excitatory neuron inputs by using two device responses. Functionally it accommodates positive and negative weights, excitatory and inhibitory inputs, non-negative neuron outputs, and can be used in a variety of neural network models. This technique can implement conventional inner-product neuron units and Grossberg's mass action law neuron units. Some implementation considerations, such as the effect of nonlinearities on device response, noise, and fan-in/fan-out capability, are discussed and simulated by computer. An experimental demonstration of optical excitation and inhibition on a 2-D array of neuron units using a single Hughes liquid crystal light valve is also reported.
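Functionally, the subtraction the ION model performs can be sketched as a rectified difference between separately summed excitatory and inhibitory intensities (an illustrative sketch of the functional behavior, not the optical device equations):

```python
def ion_output(excitatory, inhibitory):
    """Incoherent-optical-style neuron unit: non-negative excitatory
    and inhibitory intensities are summed on separate channels, then
    subtracted and rectified, so the output intensity stays non-negative."""
    e = sum(excitatory)
    i = sum(inhibitory)
    return max(0.0, e - i)

# net excitation passes through; net inhibition clips the output to zero
out_a = ion_output([0.5, 0.5], [0.25])
out_b = ion_output([0.2], [0.5, 0.4])
```

Because light intensities cannot be negative, the rectification reflects the physical constraint the abstract emphasizes: negative weights are handled by routing signals to the inhibitory channel rather than by negative intensities.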

1221
Ranai K, Tan CL, Chan SC. Artificial Intelligence research at the National University of Singapore. Artif Intell Rev 1990. [DOI: 10.1007/bf02221494] [Citations in RCA: 0]

1224
Hirai Y, Tsukui Y. Position independent pattern matching by neural network. IEEE Trans Syst Man Cybern 1990. [DOI: 10.1109/21.105081] [Citations in RCA: 4]

1225
LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput 1989. [DOI: 10.1162/neco.1989.1.4.541] [Citations in RCA: 5492]
Abstract
The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
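The central architectural constraint in this line of work is weight sharing: units at different positions apply the same local kernel. A minimal 1-D sketch (a generic convolution illustration with a hypothetical kernel, not the paper's actual network):

```python
import numpy as np

def shared_weight_layer(image_row, kernel):
    """One constrained layer: every unit applies the SAME small kernel
    to its local receptive field (weight sharing), so a feature
    detector learned at one position applies at all positions."""
    k = len(kernel)
    return np.array([np.dot(image_row[i:i + k], kernel)
                     for i in range(len(image_row) - k + 1)])

row = np.array([0., 0., 1., 1., 0., 0.])
edge = np.array([-1., 1.])            # hypothetical edge-detecting kernel
out = shared_weight_layer(row, edge)
# the detector fires at the rising edge wherever it occurs
```

Sharing weights both encodes the task-domain constraint of translation equivariance and drastically reduces the number of free parameters the network must learn.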
Affiliation(s)
- Y. LeCun, AT&T Bell Laboratories, Holmdel, NJ 07733 USA
- B. Boser, AT&T Bell Laboratories, Holmdel, NJ 07733 USA
- W. Hubbard, AT&T Bell Laboratories, Holmdel, NJ 07733 USA

1227
Barlow HB. Unsupervised Learning. Neural Comput 1989; 1:295-311.
Abstract
What use can the brain make of the massive flow of sensory information that occurs without any associated rewards or punishments? This question is reviewed in the light of connectionist models of unsupervised learning and some older ideas, namely the cognitive maps and working models of Tolman and Craik, and the idea that redundancy is important for understanding perception (Attneave 1954), the physiology of sensory pathways (Barlow 1959), and pattern recognition (Watanabe 1960). It is argued that (1) The redundancy of sensory messages provides the knowledge incorporated in the maps or models. (2) Some of this knowledge can be obtained by observations of mean, variance, and covariance of sensory messages, and perhaps also by a method called “minimum entropy coding.” (3) Such knowledge may be incorporated in a model of “what usually happens” with which incoming messages are automatically compared, enabling unexpected discrepancies to be immediately identified. (4) Knowledge of the sort incorporated into such a filter is a necessary prerequisite of ordinary learning, and a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.
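Point (2) above, extracting knowledge from the mean, variance, and covariance of sensory messages, can be illustrated with a generic decorrelating rotation (a standard PCA-style sketch, not Barlow's minimum entropy coding algorithm; the toy data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# redundant two-channel "sensory messages": channel 2 largely echoes channel 1
x1 = rng.normal(size=1000)
x = np.stack([x1, 0.9 * x1 + 0.1 * rng.normal(size=1000)])

c = np.cov(x)                      # large off-diagonal term = redundancy
evals, evecs = np.linalg.eigh(c)   # directions of independent variation
y = evecs.T @ x                    # rotate messages onto those directions
c2 = np.cov(y)                     # covariance is now (near) diagonal
```

After the rotation the representation's elements are uncorrelated, which is the precondition point (4) names for forming associations with logical functions of the elements rather than with the raw, redundant inputs.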
Affiliation(s)
- H.B. Barlow, Kenneth Craik Laboratory, Physiological Laboratory, Downing Street, Cambridge, CB2 3EG, England

1235
Fukushima K. Neural network model for selective attention in visual pattern recognition and associative recall. Applied Optics 1987; 26:4985-4992. [PMID: 20523477] [DOI: 10.1364/ao.26.004985] [Citations in RCA: 39]
Abstract
A neural network model of selective attention is discussed. When two patterns or more are presented simultaneously, the model successively pays selective attention to each one, segmenting it from the rest and recognizing it separately. In the presence of noise or defects, the model can recall the complete pattern in which the noise has been eliminated and the defects corrected. These operations can be successfully performed regardless of deformation of the input patterns. This is an improved version of the earlier model proposed by the author: the ability of segmentation is improved by lateral inhibition.

1236
The Adaptive Self-Organization of Serial Order in Behavior: Speech, Language, and Motor Control. In: The Adaptive Brain II: Vision, Speech, Language, and Motor Control. 1987. [DOI: 10.1016/s0166-4115(08)61766-5] [Citations in RCA: 7]

1237
Carpenter GA, Grossberg S. A massively parallel architecture for a self-organizing neural pattern recognition machine. Comput Vis Graph Image Process 1987; 37:54-115. [DOI: 10.1016/s0734-189x(87)80014-2] [Citations in RCA: 1743]

1238
Fukushima K. Self-organizing neural network models for visual pattern recognition. Acta Neurochir Suppl 1987; 41:51-67. [PMID: 3481940] [DOI: 10.1007/978-3-7091-8945-0_8] [Citations in RCA: 3]
Abstract
Two neural network models for visual pattern recognition are discussed. The first model, called a "neocognitron", is a hierarchical multilayered network which has only afferent synaptic connections. It can acquire the ability to recognize patterns by "learning-without-a-teacher": the repeated presentation of a set of training patterns is sufficient, and no information about the categories of the patterns is necessary. The cells of the highest stage eventually become "gnostic cells", whose response shows the final result of the pattern-recognition of the network. Pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the stimulus pattern. The second model has not only afferent but also efferent synaptic connections, and is endowed with the function of selective attention. The afferent and the efferent signals interact with each other in the hierarchical network: the efferent signals, that is, the signals for selective attention, have a facilitating effect on the afferent signals, and at the same time, the afferent signals gate efferent signal flow. When a complex figure, consisting of two patterns or more, is presented to the model, it is segmented into individual patterns, and each pattern is recognized separately. Even if one of the patterns to which the model is paying selective attention is affected by noise or defects, the model can "recall" the complete pattern from which the noise has been eliminated and the defects corrected.
Affiliation(s)
- K Fukushima, NHK Science and Technical Research Laboratories, Tokyo, Japan

1239
Hopfield JJ, Tank DW. Computing with neural circuits: a model. Science 1986; 233:625-633.
Abstract
A new conceptual framework and a minimization principle together provide an understanding of computation in model neural circuits. The circuits consist of nonlinear graded-response model neurons organized into networks with effectively symmetric synaptic connections. The neurons represent an approximation to biological neurons in which a simplified set of important computational properties is retained. Complex circuits solving problems similar to those essential in biology can be analyzed and understood without the need to follow the circuit dynamics in detail. Implementation of the model with electronic devices will provide a class of electronic circuits of novel form and function.

1240
Three frames suffice. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x0002077x] [Citations in RCA: 0]

1241
Reliable computation in parallel networks. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x0002080x] [Citations in RCA: 0]

1242
Linking features in dimensions of mind and brain. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x00020744] [Citations in RCA: 0]

1243
The cognitive map overlaps the environmental frame, the situation, and the real-world formulary. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x00020793] [Citations in RCA: 0]

1244
Tunnel vision will not suffice. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x00020835] [Citations in RCA: 0]

1245
Head-centered coordinates and the stable feature frame. Behav Brain Sci 1985. [DOI: 10.1017/s0140525x00020719] [Citations in RCA: 0]

1247
Grossberg S. Some psychophysiological and pharmacological correlates of a developmental, cognitive and motivational theory. Ann N Y Acad Sci 1984; 425:58-151. [PMID: 6146280] [DOI: 10.1111/j.1749-6632.1984.tb23523.x] [Citations in RCA: 24]

1248
Tsutsumi K, Matsumoto H. A synaptic modification algorithm in consideration of the generation of rhythmic oscillation in a ring neural network. Biological Cybernetics 1984; 50:419-430. [PMID: 6487679] [DOI: 10.1007/bf00335199] [Citations in RCA: 3]
Abstract
To account for the generation of bursts of nerve impulses (that is, rhythmic oscillation in impulse density) in a ring neural network, a new synaptic modification algorithm is proposed. Rhythmic oscillation generally occurs in a regular ring network with feedback inhibition, and such signals can in fact be observed in the real nervous system. Since various additional connections can cause disturbances that easily extinguish the rhythmic oscillation, some synaptic mechanism for maintaining it should exist if such signals play an important part in the nervous system. A preliminary investigation of rhythmic oscillation in the regular ring network led to the selection of two parameters for the synaptic modification algorithm, the average membrane potential (AMP) and the average impulse density (AID), with the decrease of synaptic strength assumed to be essential. The algorithm using AMP and AID handles both the rhythmic oscillation and the nonoscillatory state without distinction. Simulation demonstrates cases in which the algorithm catches and holds the rhythmic oscillation in a disturbed ring network where it had previously been extinguished.
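That a regular ring network with feedback inhibition oscillates can be shown with a minimal rate-model sketch (generic parameters of my choosing, not the paper's model or its modification algorithm):

```python
import numpy as np

def simulate_ring(w=-4.0, n=3, tau=0.05, steps=2000):
    """Ring of n rate units, each inhibiting its successor (w < 0).
    An odd inhibitory ring has an unstable rest state, so the
    impulse-density analogue settles into rhythmic oscillation."""
    x = np.zeros(n)
    x[0] = 0.1                               # small perturbation off rest
    trace = []
    for _ in range(steps):
        drive = np.tanh(w * np.roll(x, 1))   # input from the predecessor unit
        x = x + tau * (-x + drive)           # leaky integration (Euler step)
        trace.append(x[0])
    return np.array(trace)

trace = simulate_ring()
late = trace[-500:]
# a sustained rhythm: the late signal keeps crossing zero
flips = np.sum(np.sign(late[:-1]) != np.sign(late[1:]))
```

Adding extra connections to such a ring perturbs this limit cycle, which is the disturbance the proposed synaptic modification algorithm is designed to counteract.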

1249
Fukushima K. A hierarchical neural network model for associative memory. Biological Cybernetics 1984; 50:105-113. [PMID: 6722206] [DOI: 10.1007/bf00337157] [Citations in RCA: 42]
Abstract
A hierarchical neural network model with feedback interconnections, which has the function of associative memory and the ability to recognize patterns, is proposed. The model consists of a hierarchical multi-layered network to which efferent connections are added, so as to make positive feedback loops in pairs with afferent connections. The cell-layer at the initial stage of the network is the input layer which receives the stimulus input and at the same time works as an output layer for associative recall. The deepest layer is the output layer for pattern-recognition. Pattern-recognition is performed hierarchically by integrating information by converging afferent paths in the network. For the purpose of associative recall, the integrated information is again distributed to lower-order cells by diverging efferent paths. These two operations progress simultaneously in the network. If a fragment of a training pattern is presented to the network which has completed its self-organization, the entire pattern will gradually be recalled in the initial layer. If a stimulus consisting of a number of training patterns superposed is presented, one pattern gradually becomes predominant in the recalled output after competition between the patterns, and the others disappear. At about the same time when the recalled pattern reaches a steady state in the initial layer, in the deepest layer of the network, a response is elicited from the cell corresponding to the category of the finally-recalled pattern. Once a steady state has been reached, the response of the network is automatically extinguished by inhibitory signals from a steadiness-detecting cell.(ABSTRACT TRUNCATED AT 250 WORDS)

1250
Nelson TJ. A neural network model for cognitive activity. Biological Cybernetics 1983; 49:79-88. [PMID: 6661446] [DOI: 10.1007/bf00320388] [Citations in RCA: 0]
Abstract
A consideration of the storage of information as an energized neuronal state leads to the development of a new type of neural network model which is capable of pattern recognition, concept formation and recognition of patterns of events in time. The network consists of several layers of cells, each cell representing by connections from the lower levels some combination of features or concepts. Information travels toward higher layers by such connections during an association phase, and then reverses during a recognition phase, where higher-order concepts can redirect the flow to more appropriate elements, revising the perception of the environment. This permits a more efficient method of distinguishing closely-related patterns and also permits the formation of negative associations, which is a likely requirement for formation of "abstract" concepts.