1. Fan X, Li C, Yuan X, Dong X, Liang J. An interactive visual analytics approach for network anomaly detection through smart labeling. J Vis (Tokyo) 2019. DOI: 10.1007/s12650-019-00580-7
2. Learning sampling distribution for motion planning with local reconstruction-based self-organizing incremental neural network. Neural Comput Appl 2019. DOI: 10.1007/s00521-019-04370-y
3. Gene selection and disease prediction from gene expression data using a two-stage hetero-associative memory. Prog Artif Intell 2018. DOI: 10.1007/s13748-018-0148-6
5. Nakamura Y, Hasegawa O. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data. IEEE Trans Neural Netw Learn Syst 2017; 28:8-17. PMID: 26812736. DOI: 10.1109/tnnls.2015.2489225
Abstract
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict beforehand, and the data contain substantial noise; both factors make probability densities hard to estimate. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and is highly robust. Our approach extends both kernel density estimation (KDE) and the self-organizing incremental neural network (SOINN); we therefore call it KDESOINN. A SOINN is a clustering method that learns the given data as a network of prototypes; more specifically, a SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. Our experiments show that KDESOINN outperforms, or performs comparably to, current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
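For context, the kernel density estimation that KDESOINN extends can be sketched in a few lines (a plain fixed-bandwidth Gaussian KDE in Python; the paper's KDESOINN additionally learns a SOINN prototype network so that cost does not grow with the length of the data stream):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density function: an average of Gaussian kernels centered on the samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return density

# The estimate is highest near the bulk of the observed samples.
p = gaussian_kde([0.0, 0.1, -0.1, 0.05], bandwidth=0.5)
```

Because the kernel sum ranges over every stored sample, evaluation cost grows linearly with the data; replacing the raw samples with learned prototypes is the kind of compression the entry above targets.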
6. A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State. Comput Intell Neurosci 2016; 2016:7158507. PMID: 27891146. PMCID: PMC5112352. DOI: 10.1155/2016/7158507
Abstract
Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden-state transition model, and we propose a heuristic algorithm for constructing such a model. A new spatiotemporal associative memory network (STAMN) is proposed to realize this minimal, looping hidden-state transition model. STAMN uses neuroactivity decay to realize short-term memory, connection weights between nodes to represent long-term memory, and presynaptic potentials together with a synchronized activation mechanism to perform identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare its performance with that of other methods.
7. Najjar T, Hasegawa O. Hebbian Network of Self-Organizing Receptive Field Neurons as Associative Incremental Learner. Int J Comput Intell Appl 2015. DOI: 10.1142/s1469026815500236
Abstract
Associative learning plays a major role in forming the internal dynamic engine of an adaptive system or a cognitive robot. Interaction with the environment provides a sparse, discrete set of sample correlations between input and output incidences, and these associative data points offer useful hints for capturing the underlying mechanisms that govern the system's behavioral dynamics. Many approaches to learning a system's input-output relation require a previously prepared set of data points to be presented to the learning mechanism as training data before useful estimates can be obtained; moreover, data coding is usually based on symbolic or non-implicit representation schemes. In this paper, we propose an incremental learning mechanism that can bootstrap from a state of complete ignorance of any representative sample associations. The proposed system also provides a novel mechanism for representing data nonlinearly through the fusion of self-organizing maps and Gaussian receptive fields. Our architecture is based solely on cortically inspired techniques of coding and learning: Hebbian plasticity and adaptive populations of neural circuitry for stimulus representation. We define a neural network that captures the components of the problem's data space using an emergent arrangement of receptive field neurons that self-organize incrementally in response to sparse experiences of system-environment interactions. These learned components are correlated through a process of Hebbian plasticity that relates the major components of the input space to those of the output space. The viability of the proposed mechanism is demonstrated through multiple experimental setups drawn from real-world regression and robotic-arm sensorimotor learning problems.
Affiliation(s)
- Tarek Najjar: Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503, Japan
- Osamu Hasegawa: Imaging Science and Engineering Laboratory, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503, Japan
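The entry above describes regression through self-organizing Gaussian receptive fields correlated by Hebbian plasticity. A minimal sketch of that idea follows (illustrative Python; the class name, the novelty-threshold allocation rule, and every parameter value are assumptions for exposition, not the authors' architecture):

```python
import math

class ReceptiveFieldLearner:
    """Incremental input-output association via Gaussian receptive field units.

    A new unit is allocated when no existing unit responds strongly to the
    input (novelty threshold); otherwise the best-matching unit's output
    weight is nudged toward the observed target (a Hebbian-flavored running
    average). No training set needs to be prepared in advance.
    """
    def __init__(self, width=0.3, novelty=0.3, lr=0.3):
        self.centers, self.weights = [], []
        self.width, self.novelty, self.lr = width, novelty, lr

    def _activation(self, c, x):
        # Gaussian receptive field centered at c.
        return math.exp(-((x - c) ** 2) / (2 * self.width ** 2))

    def observe(self, x, y):
        acts = [self._activation(c, x) for c in self.centers]
        if not acts or max(acts) < self.novelty:
            self.centers.append(x)   # allocate a unit at the novel input
            self.weights.append(y)
        else:
            i = max(range(len(acts)), key=acts.__getitem__)
            self.weights[i] += self.lr * (y - self.weights[i])

    def predict(self, x):
        acts = [self._activation(c, x) for c in self.centers]
        total = sum(acts)
        if total == 0:
            return 0.0
        # Activation-weighted blend of the associated outputs.
        return sum(a * w for a, w in zip(acts, self.weights)) / total
```

Feeding it samples of y = 2x one at a time grows a small population of units whose blended outputs approximate the function between the observed points.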
8. Bavafaye Haghighi E, Palm G, Rahmati M, Yazdanpanah M. A new class of multi-stable neural networks: Stability analysis and learning process. Neural Netw 2015; 65:53-64. DOI: 10.1016/j.neunet.2015.01.010
9. Towards Autonomous Robots Via an Incremental Clustering and Associative Learning Architecture. Cognit Comput 2014. DOI: 10.1007/s12559-014-9311-y
10. Tokunaga K. Growing topology representing network. Appl Soft Comput 2014. DOI: 10.1016/j.asoc.2014.04.028
11. A 0.7V low-power fully programmable Gaussian function generator for brain-inspired Gaussian correlation associative memory. Neurocomputing 2014. DOI: 10.1016/j.neucom.2013.02.060
12. Wu H, Liao X, Feng W, Guo S. Mean square stability of uncertain stochastic BAM neural networks with interval time-varying delays. Cogn Neurodyn 2013; 6:443-58. PMID: 24082964. DOI: 10.1007/s11571-012-9200-6
Abstract
The robust asymptotic stability of uncertain BAM neural networks with both interval time-varying delays and stochastic disturbances is considered. Using a stochastic analysis approach, free-weighting matrices, and an appropriate Lyapunov functional that takes the delay ranges into account, new stability criteria are established that guarantee the delayed BAM neural networks are robustly asymptotically stable in the mean square. Unlike most existing mean-square stability conditions for BAM neural networks, the supplementary requirement that the time derivatives of the time-varying delays be smaller than 1 is relaxed, and the lower bounds of the time-varying delays are not restricted to 0. Furthermore, the proposed stability conditions are delay-range-dependent and rate-dependent/independent, so the new criteria apply to both fast and slow time-varying delays. Three numerical examples illustrate the effectiveness of the proposed criteria.
Affiliation(s)
- Haixia Wu: College of Computer Science, Chongqing University, Chongqing 400030, People's Republic of China; Department of Computer Science, Chongqing Education College, Chongqing 400067, People's Republic of China
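The BAM networks whose stability this entry analyzes descend from Kosko's bidirectional associative memory, whose basic recall loop is easy to sketch (illustrative Python; this is the classic discrete, delay-free BAM, not the uncertain stochastic delayed system the paper treats):

```python
import numpy as np

def train_bam(pairs):
    """Kosko BAM storage: sum of outer products x y^T over bipolar pattern pairs."""
    W = np.zeros((len(pairs[0][0]), len(pairs[0][1])))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, steps=5):
    """Bidirectional recall: bounce x -> y -> x until (ideally) a fixed point."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(W.T @ x)  # forward pass recalls the paired pattern
        x = sign(W @ y)    # backward pass cleans up the cue
    return x, y
```

Presenting a corrupted version of a stored x pattern lets the network recover both the paired y pattern and the clean x, provided the stored pairs are sufficiently distinct.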
13. Shen F, Ouyang Q, Kasai W, Hasegawa O. A general associative memory based on self-organizing incremental neural network. Neurocomputing 2013. DOI: 10.1016/j.neucom.2012.10.003
14. Li J, Katori Y, Kohno T. An FPGA-Based Silicon Neuronal Network with Selectable Excitability Silicon Neurons. Front Neurosci 2012; 6:183. PMID: 23269911. PMCID: PMC3529302. DOI: 10.3389/fnins.2012.00183
Abstract
This paper presents a digital silicon neuronal network that simulates biological nervous systems and can execute intelligent tasks such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow us to tune the excitability of silicon neurons and are computationally efficient for hardware implementation. We adopt a mixed pipeline-and-parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. A network of 256 fully connected neurons is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block handle data communication between the network and the host PC. The paper also describes the mechanism of associative memory in the silicon neuronal network: the network can retrieve stored patterns when the inputs contain enough information about them, and the retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved.
Affiliation(s)
- Jing Li: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
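The retrieval behavior this entry reports, namely that stored patterns come back when the input carries enough of them, also appears in a classic Hopfield-style auto-associative memory, sketched below (illustrative Python; the FPGA network itself uses spiking DSSN neurons and silicon synapses, not this rate-free threshold model):

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage with zeroed self-connections."""
    W = np.zeros((len(patterns[0]), len(patterns[0])))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def retrieve(W, cue, steps=5):
    """Synchronous recall: repeatedly threshold the weighted input."""
    x = np.array(cue)
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x
```

A cue that shares most of its bits with a stored pattern is pulled onto that pattern within a step or two; as the cue diverges further, retrieval becomes unreliable, mirroring the similarity-dependent retrieval probability described above.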
15. Shimizu T, Saegusa R, Ikemoto S, Ishiguro H, Metta G. Self-protective whole body motion for humanoid robots based on synergy of global reaction and local reflex. Neural Netw 2012; 32:109-18. PMID: 22377658. DOI: 10.1016/j.neunet.2012.02.011
Abstract
This paper describes a self-protective whole-body motor controller that enables life-long learning in humanoid robots. To reduce the damage caused by physical interaction such as obstacle collisions, we introduce self-protective behaviors based on the adaptive coordination of full-body global reactions and local limb reflexes. Global reactions produce adaptive whole-body movements that prepare the robot for harmful situations, and the system incrementally learns more effective associations between states and global reactions. Local reflexes use force-torque sensing to reduce the impact load on the limbs independently of high-level motor intention. We examined the proposed method with a robot simulator under various conditions and then applied the system to a real humanoid robot.
Affiliation(s)
- Toshihiko Shimizu: Department of Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Via Morego 30, 16163 Genova, Italy
17. Tscherepanow M, Kortkamp M, Kammer M. A hierarchical ART network for the stable incremental learning of topological structures and associations from noisy data. Neural Netw 2011; 24:906-16. PMID: 21704496. DOI: 10.1016/j.neunet.2011.05.009
Abstract
In this article, a novel unsupervised neural network combining elements of Adaptive Resonance Theory and topology-learning neural networks is presented. It enables stable online clustering of stationary and non-stationary input data by learning their inherent topology. Two network components representing different levels of detail are trained simultaneously. Several filtering mechanisms diminish the sensitivity to noise, which makes the proposed network suitable for application to real-world problems. Furthermore, we demonstrate that this network is an excellent basis for learning and recalling associations between real-world associative keys. Its incremental nature ensures that the capacity of the corresponding associative memory matches the amount of knowledge to be learnt, and the formed clusters efficiently represent the relations between the keys even when noisy data are used for training. In addition, we present an iterative recall mechanism that retrieves stored information from one of the associative keys used for training; because different levels of detail are learnt, recall can be performed with different degrees of accuracy.
Affiliation(s)
- Marko Tscherepanow: Applied Informatics, Bielefeld University, Universitätsstraße 25, 33615 Bielefeld, Germany
18. From Bidirectional Associative Memory to a noise-tolerant, robust Protein Processor Associative Memory. Artif Intell 2011. DOI: 10.1016/j.artint.2010.10.008
19. Tan J, Quek C. A BCM Theory of Meta-Plasticity for Online Self-Reorganizing Fuzzy-Associative Learning. IEEE Trans Neural Netw 2010; 21:985-1003. DOI: 10.1109/tnn.2010.2046747