1. Böttcher L, Porter MA. Complex networks with complex weights. Phys Rev E 2024;109:024314. PMID: 38491610. DOI: 10.1103/physreve.109.024314.
Abstract
In many studies, it is common to use binary (i.e., unweighted) edges to examine networks of entities that are either adjacent or not adjacent. Researchers have generalized such binary networks to incorporate edge weights, which allow one to encode node-node interactions with heterogeneous intensities or frequencies (e.g., in transportation networks, supply chains, and social networks). Most such studies have considered real-valued weights, despite the fact that networks with complex weights arise in fields as diverse as quantum information, quantum chemistry, electrodynamics, rheology, and machine learning. Many of the standard network-science approaches in the study of classical systems rely on the real-valued nature of edge weights, so it is necessary to generalize them if one seeks to use them to analyze networks with complex edge weights. In this paper, we examine how standard network-analysis methods fail to capture structural features of networks with complex edge weights. We then generalize several network measures to the complex domain and show that random-walk centralities provide a useful approach to examine node importances in networks with complex weights.
Affiliation(s)
- Lucas Böttcher
- Department of Computational Science and Philosophy, Frankfurt School of Finance and Management, 60322 Frankfurt am Main, Germany
- Department of Medicine, University of Florida, Gainesville, Florida, 32610, USA
- Mason A Porter
- Department of Mathematics, University of California, Los Angeles, California 90095, USA
- Department of Sociology, University of California, Los Angeles, California 90095, USA
- Santa Fe Institute, Santa Fe, New Mexico 87501, USA
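As a rough illustration of the random-walk approach described in the abstract (a minimal sketch, not the authors' exact construction), one can form a transition matrix from the magnitudes of the complex edge weights and read node importances off the stationary distribution; the example network and all names below are hypothetical:

```python
import numpy as np

# Hypothetical 3-node network with complex edge weights.
A = np.array([[0, 1 + 1j, 0.5j],
              [1 - 1j, 0, 2],
              [0.5, 1j, 0]], dtype=complex)

# One plausible random-walk construction: transition probabilities
# proportional to edge-weight magnitudes (the paper develops more
# refined complex-walk measures than this).
M = np.abs(A)
P = M / M.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

# Stationary distribution = left eigenvector of P for eigenvalue 1;
# its entries serve as a random-walk occupation centrality.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print("random-walk centrality:", pi)
```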
2. Gao S, Zhou M, Wang Z, Sugiyama D, Cheng J, Wang J, Todo Y. Fully Complex-Valued Dendritic Neuron Model. IEEE Trans Neural Netw Learn Syst 2023;34:2105-2118. PMID: 34487498. DOI: 10.1109/tnnls.2021.3105901.
Abstract
A single dendritic neuron model (DNM) that captures the nonlinear information-processing ability of dendrites has been widely used for classification and prediction. Complex-valued neural networks consisting of multiple or deep layers of McCulloch-Pitts neurons have achieved great success since neural computing was first applied to signal processing, yet complex-valued representations have not appeared in single-neuron architectures. In this article, we extend the DNM from the real-valued domain to the complex-valued one. The performance of the complex-valued DNM (CDNM) is evaluated on a complex XOR problem, a non-minimum-phase equalization problem, and a real-world wind prediction task. We also compare a set of elementary transcendental functions as candidate activation functions and carry out preparatory experiments to determine hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, a complex-valued multilayer perceptron, and other complex-valued neuron models.
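For intuition, a fully complex-valued neuron keeps inputs, weights, bias, and activation all in the complex domain. The sketch below is a generic single complex neuron with an elementary transcendental activation (tanh); it assumes nothing about the CDNM's dendritic branch structure, and all values are illustrative:

```python
import numpy as np

def complex_neuron(z, w, b):
    """Fully complex-valued neuron: complex affine map followed by a
    complex elementary transcendental activation (here, tanh)."""
    return np.tanh(w @ z + b)  # np.tanh accepts complex arguments

# Illustrative complex inputs, weights, and bias.
z = np.array([1 + 1j, -0.5 + 0.2j])
w = np.array([0.3 - 0.7j, 0.1 + 0.4j])
b = 0.05 + 0.05j
print(complex_neuron(z, w, b))
```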
4. Kobayashi M. Two-Level Complex-Valued Hopfield Neural Networks. IEEE Trans Neural Netw Learn Syst 2021;32:2274-2278. PMID: 32479402. DOI: 10.1109/tnnls.2020.2995413.
Abstract
In multistate neural associative memories, some neurons carry small noise while others carry large noise. If we know which neurons have small noise, noise tolerance can be improved. In this brief, we provide a novel method to reinforce the neurons with small noise and apply it to images corrupted by Gaussian noise. A complex-valued multistate neuron is decomposed into two neurons, referred to as the high and low neurons. Under Gaussian noise, the high neurons are expected to have small noise, and noise tolerance is improved by reinforcing them. Computer simulations support the effectiveness of the reinforced neurons.
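For context, a complex-valued multistate neuron quantizes the phase of its input onto K points of the unit circle. The sketch below shows only this standard activation, not the paper's high/low decomposition or reinforcement scheme:

```python
import numpy as np

def csign(u, K):
    """Standard K-state activation of complex-valued multistate neurons:
    quantize the phase of u onto the nearest K-th root of unity."""
    k = np.floor(K * np.angle(u) / (2 * np.pi) + 0.5) % K
    return np.exp(2j * np.pi * k / K)

u = 0.8 * np.exp(1j * 0.9)   # noisy neuron input
print(csign(u, K=8))          # nearest of the 8 phase states
```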
6. Li L, Chen W, Wu X. Global Exponential Stability and Synchronization for Novel Complex-Valued Neural Networks With Proportional Delays and Inhibitory Factors. IEEE Trans Cybern 2021;51:2142-2152. PMID: 31647457. DOI: 10.1109/tcyb.2019.2946076.
Abstract
In this article, complex-valued neural networks (CVNNs) with proportional delays and inhibitory factors are proposed. First, the global exponential stability of the addressed model is investigated by employing the Halanay inequality technique and the matrix measure method. Several criteria are derived that guarantee the global exponential stability of CVNNs with proportional delays and inhibitory factors; they are applicable not only to systems with proportional delays but also to systems with arbitrary delays. Notably, no Lyapunov functions are constructed: compared with the Lyapunov method, the matrix measure method yields more concise criteria, and the Halanay inequality makes the analysis more compact. Furthermore, the global exponential synchronization of two such neural-network models is also studied: by designing a feedback controller and imposing some limiting conditions, the drive and response systems achieve global exponential synchronization. Finally, numerical simulation examples validate the effectiveness of the theoretical results.
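As a hedged illustration of the matrix measure method, the sketch below computes the logarithmic norm induced by the infinity norm, the kind of quantity whose negativity appears in exponential-stability criteria of this type; the example matrix is arbitrary, not one from the paper:

```python
import numpy as np

def matrix_measure_inf(A):
    """Matrix measure (logarithmic norm) induced by the infinity norm:
    mu_inf(A) = max_i ( Re(a_ii) + sum_{j != i} |a_ij| )."""
    A = np.asarray(A, dtype=complex)
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return np.max(np.real(np.diag(A)) + off)

# Illustrative complex system matrix; mu_inf(A) < 0 is the kind of
# condition that appears in exponential-stability criteria.
A = np.array([[-3 + 0j, 0.5 + 0.5j],
              [0.2 - 0.1j, -2 + 0j]])
print(matrix_measure_inf(A))  # negative for this example
```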
7. Uykan Z. Shadow-Cuts Minimization/Maximization and Complex Hopfield Neural Networks. IEEE Trans Neural Netw Learn Syst 2021;32:1096-1109. PMID: 32310787. DOI: 10.1109/tnnls.2020.2980237.
Abstract
In this article, we extend our recent work on Hopfield neural networks to the complex case. Inspired by results for the real Hopfield neural network (HNN), our investigation yields several novel results. First, extending the "biased pseudo-cut" concept to the complex HNN (CHNN), we introduce a "shadow-cut," defined as the sum of intercluster phased edges. Second, while the discrete-time real HNN strictly decreases the biased pseudo-cut with each neuron state change, the CHNN tends to minimize the shadow-cut as its energy function is minimized. Third, these definitions pose a novel L-phased graph clustering (partitioning) problem in which the sum of the shadow-cuts is minimized (or maximized) for Hermitian complex graphs and directed graphs whose edges are arbitrary (possibly positive or negative) complex numbers. Finally, combining the CHNN with the pioneering GADIA algorithm of Babadi and Tarokh and their modified versions, we propose simple indirect algorithms to solve the defined shadow-cut minimization/maximization problem; the proposed algorithms naturally include the CHNN and GADIA as special cases. Computer simulations confirm the findings.
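For orientation, the CHNN energy function whose minimization the article relates to shadow-cuts is the standard Hermitian quadratic form. A minimal sketch with an arbitrary example network (not a shadow-cut computation itself):

```python
import numpy as np

def chnn_energy(W, x):
    """Standard complex Hopfield energy E = -(1/2) Re(x^H W x), which is
    real for Hermitian W; asynchronous CHNN updates do not increase it."""
    return -0.5 * np.real(np.conj(x) @ W @ x)

# Hermitian complex weight matrix with zero diagonal (illustrative).
W = np.array([[0, 1 + 1j], [1 - 1j, 0]])
x = np.exp(2j * np.pi * np.array([0.0, 0.25]))  # unit-magnitude states
print(chnn_energy(W, x))
```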
8. Kobayashi M. Quaternion Projection Rule for Rotor Hopfield Neural Networks. IEEE Trans Neural Netw Learn Syst 2021;32:900-908. PMID: 32287014. DOI: 10.1109/tnnls.2020.2979920.
Abstract
A rotor Hopfield neural network (RHNN) is an extension of a complex-valued Hopfield neural network (CHNN) with excellent noise tolerance. The RHNN decomposition theorem states that an RHNN decomposes into a CHNN and a symmetric CHNN. For a large number of training patterns, the projection rule for RHNNs generates large self-feedbacks, which degrade noise tolerance. To remove the self-feedbacks, we propose a projection rule using quaternions, based on the decomposition theorem. Computer simulations show that the quaternion projection rule improves noise tolerance.
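As background, the projection (pseudoinverse) rule referenced above is conventionally W = X (X^H X)^{-1} X^H for stored patterns in the columns of X. The sketch below shows this standard complex rule and the removal of self-feedbacks by zeroing the diagonal, not the quaternion construction itself:

```python
import numpy as np

def projection_rule(X):
    """Standard projection (pseudoinverse) learning rule for Hopfield-type
    networks: W = X (X^H X)^{-1} X^H, patterns stored as columns of X."""
    return X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T

# Two illustrative unit-magnitude complex patterns over 4 neurons.
X = np.exp(2j * np.pi * np.array([[0.0, 0.25],
                                  [0.5, 0.75],
                                  [0.25, 0.0],
                                  [0.75, 0.5]]))
W = projection_rule(X)
np.fill_diagonal(W, 0)  # removing self-feedbacks, as the brief discusses
print(np.round(W, 3))
```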
9. Xiao J, Jia Y, Jiang X, Wang S. Circular Complex-Valued GMDH-Type Neural Network for Real-Valued Classification Problems. IEEE Trans Neural Netw Learn Syst 2020;31:5285-5299. PMID: 32078563. DOI: 10.1109/tnnls.2020.2966031.
Abstract
Recently, applications of complex-valued neural networks (CVNNs) to real-valued classification problems have attracted significant attention. However, most existing CVNNs are black-box models with poor explainability. This study extends the real-valued group method of data handling (RGMDH)-type neural network to the complex field and constructs a circular complex-valued group method of data handling (C-CGMDH)-type neural network, which is a white-box model. First, a complex least-squares method is proposed for parameter estimation. Second, a new complex-valued symmetric regularity criterion is constructed with a logarithmic function that explicitly represents the magnitudes and phases of the actual and predicted complex outputs, in order to evaluate and select the intermediate candidate models; this new complex-valued external criterion is proven to behave similarly to the real-valued external criterion. Before training, a circular transformation is used to map the real-valued input features into the complex field. Twenty-five real-valued classification data sets from the UCI Machine Learning Repository are used in the experiments. The results show that both the RGMDH and C-CGMDH models can select the most important features from the complete feature space through a self-organizing modeling process. Compared with RGMDH, the C-CGMDH model converges faster and selects fewer features, and its classification performance is statistically significantly better than that of the benchmark complex-valued and real-valued models. Regarding time complexity, the C-CGMDH model is comparable with the other models on data sets with few features. Finally, we demonstrate that the GMDH-type neural network is interpretable.
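One common form of the circular transformation maps each normalized real feature onto the unit circle; this scaling is an assumption for illustration, and the paper's exact variant may differ:

```python
import numpy as np

def circular_transform(x, x_min, x_max):
    """Map a real-valued feature onto the unit circle (one common form of
    the circular transformation used to feed real data into CVNNs)."""
    theta = 2 * np.pi * (x - x_min) / (x_max - x_min)
    # Scaling theta to [0, pi] instead avoids identifying x_min with x_max.
    return np.exp(1j * theta)

x = np.array([0.0, 2.5, 5.0])
print(circular_transform(x, x_min=0.0, x_max=5.0))
```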
10. Dong T, Huang T. Neural Cryptography Based on Complex-Valued Neural Network. IEEE Trans Neural Netw Learn Syst 2020;31:4999-5004. PMID: 31880562. DOI: 10.1109/tnnls.2019.2955165.
Abstract
Neural cryptography is a public key-exchange technique based on the principle of neural-network synchronization: using a neural-network learning algorithm, two networks update their weights by exchanging outputs with each other, and once synchronization is complete, the weights of the two networks are identical and can serve as the secret key. However, all existing works are based on real-valued neural network models; few studies have considered neural cryptography based on complex-valued models. In this technical note, a neural cryptography scheme based on the complex-valued tree parity machine (CVTPM) is proposed. The inputs, outputs, and weights of the CVTPM are complex-valued, so it can be regarded as an extension of the TPM. The CVTPM has two advantages: 1) its security is higher than that of a TPM with the same numbers of hidden units and input neurons and the same synaptic depth, and 2) the two parties can exchange two group keys in one neural synchronization process. A series of numerical simulation experiments verifies our results.
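For intuition, the sketch below shows one synchronization step of the classical real-valued tree parity machine with the Hebbian rule; the CVTPM replaces these real quantities with complex ones, which is not reproduced here:

```python
import numpy as np

def tpm_output(W, X):
    """Tree parity machine: K hidden units, output = product of their signs."""
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = 1
    return sigma, np.prod(sigma)

def hebbian_update(W, X, sigma, tau, L):
    """Hebbian rule: update only hidden units whose sign matches the common
    output, and clip weights to the synaptic depth [-L, L]."""
    for k in range(W.shape[0]):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)
    return W

K, N, L = 3, 4, 3                      # hidden units, inputs per unit, depth
rng = np.random.default_rng(0)
W_a = rng.integers(-L, L + 1, (K, N))  # party A's private weights
W_b = rng.integers(-L, L + 1, (K, N))  # party B's private weights
X = rng.choice([-1, 1], (K, N))        # public random input
sig_a, tau_a = tpm_output(W_a, X)
sig_b, tau_b = tpm_output(W_b, X)
if tau_a == tau_b:                     # parties update only on agreement
    W_a = hebbian_update(W_a, X, sig_a, tau_a, L)
    W_b = hebbian_update(W_b, X, sig_b, tau_b, L)
```

Repeating this step over fresh public inputs drives the two weight matrices toward synchronization, after which they serve as a shared key.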
11. Chen Y, Wang Z, Wang L, Sheng W. Mixed H2/H∞ State Estimation for Discrete-Time Switched Complex Networks With Random Coupling Strengths Through Redundant Channels. IEEE Trans Neural Netw Learn Syst 2020;31:4130-4142. PMID: 31831450. DOI: 10.1109/tnnls.2019.2952249.
Abstract
This article investigates the mixed H2/H∞ state estimation problem for a class of discrete-time switched complex networks with random coupling strengths through redundant communication channels. A sequence of random variables satisfying certain probability distributions is employed to describe the stochasticity of the coupling strengths. A redundant-channel-based data transmission mechanism is adopted to enhance the reliability of the transmission channel from the sensor to the estimator. The purpose of the addressed problem is to design a state estimator for each node, such that the error dynamics achieves both the stochastic stability (with probability 1) and the prespecified mixed H2/H∞ performance requirement. By using the switched system theory, an extensive stochastic analysis is carried out to derive the sufficient conditions ensuring the stochastic stability as well as the mixed H2/H∞ performance index. The desired state estimator is also parameterized by resorting to the solutions to certain convex optimization problems. A numerical example is provided to illustrate the validity of the proposed estimation scheme.
12. Wang H, Wei G, Wen S, Huang T. Generalized norm for existence, uniqueness and stability of Hopfield neural networks with discrete and distributed delays. Neural Netw 2020;128:288-293. PMID: 32454373. DOI: 10.1016/j.neunet.2020.05.014.
Abstract
In this paper, existence, uniqueness, and stability criteria for solutions of Hopfield neural networks with discrete and distributed delays (DDD HNNs) are investigated via three kinds of generalized norms (ξ-norms). A general DDD HNN model is first introduced, in which the discrete delays τpq(t) are asynchronous time-varying delays. The {ξ,1}-norm, {ξ,2}-norm, and {ξ,∞}-norm are then successively used to derive existence, uniqueness, and stability criteria for the DDD HNNs. In the proofs, special functions and assumptions are introduced to handle the discrete and distributed delays, and a corollary on existence and stability criteria is obtained. The methods given in this paper can also be used to study the synchronization and μ-stability of other DDD NNs. Finally, two numerical examples with simulation figures illustrate the effectiveness of the results.
Collapse
Affiliation(s)
- Huamin Wang
- Department of Mathematics, Luoyang Normal University, Luoyang, Henan 471934, China
- Guoliang Wei
- College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China.
- Shiping Wen
- Centre for Artificial Intelligence, Faculty of Engineering & Information Technology, University of Technology Sydney, Sydney, 2007, Australia
- Tingwen Huang
- Department of Science, Texas A&M University at Qatar, Doha 23874, Qatar
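Weighted ξ-norms of the kind named in the abstract are typically defined as follows; this is a sketch of standard definitions for weights ξᵢ > 0, and the paper's exact notation may differ:

```latex
% Weighted \xi-norms for x \in \mathbb{R}^n with weights \xi_i > 0:
\|x\|_{\{\xi,1\}} = \sum_{i=1}^{n} \xi_i |x_i|, \qquad
\|x\|_{\{\xi,2\}} = \Bigl(\sum_{i=1}^{n} \xi_i x_i^2\Bigr)^{1/2}, \qquad
\|x\|_{\{\xi,\infty\}} = \max_{1 \le i \le n} \xi_i^{-1} |x_i|.
```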
13. de Castro FZ, Valle ME. A broad class of discrete-time hypercomplex-valued Hopfield neural networks. Neural Netw 2020;122:54-67. DOI: 10.1016/j.neunet.2019.09.040.
14. Kobayashi M. O(2)-Valued Hopfield Neural Networks. IEEE Trans Neural Netw Learn Syst 2019;30:3833-3838. PMID: 30843853. DOI: 10.1109/tnnls.2019.2897994.
Abstract
In complex-valued Hopfield neural networks (CHNNs), the neuron states are complex numbers of unit amplitude; equivalently, they can be described by special orthogonal matrices of order 2. Here, we propose a new Hopfield model, the O(2)-valued Hopfield neural network (O(2)-HNN), whose neuron states are extended to orthogonal matrices of order 2. Its neuron states are embedded in 4-D space, while those of CHNNs are embedded in 2-D space. Computer simulations were conducted to compare the noise tolerance (NT) and storage capacity (SC) of CHNNs, O(2)-HNNs, and rotor Hopfield neural networks. In SC, the O(2)-HNNs outperformed the others, and in NT they outperformed the CHNNs.
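The correspondence between unit complex numbers and special orthogonal matrices of order 2, which motivates the O(2) extension, is easy to make concrete; a minimal sketch:

```python
import numpy as np

def so2(theta):
    """Represent the unit complex number e^{i*theta} as a 2x2 rotation
    matrix, i.e., an element of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# O(2) additionally contains reflections (determinant -1), which is the
# extension of the neuron state set described in the abstract.
reflection = so2(0.7) @ np.array([[1, 0], [0, -1]])
print(np.linalg.det(so2(0.7)))   # +1.0 (rotation)
print(np.linalg.det(reflection)) # -1.0 (reflection)
```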
15. Xiao Q, Huang T, Zeng Z. Global Exponential Stability and Synchronization for Discrete-Time Inertial Neural Networks With Time Delays: A Timescale Approach. IEEE Trans Neural Netw Learn Syst 2019;30:1854-1866. PMID: 30387750. DOI: 10.1109/tnnls.2018.2874982.
Abstract
This paper considers a generalized discrete-time inertial neural network (GDINN). Using timescale theory, the original network is rewritten as a timescale-type inertial NN, and two scenarios are considered. In the first, several criteria guaranteeing global exponential stability of the addressed GDINN are obtained based on the generalized matrix measure concept; in this case, no Lyapunov function or functional is needed. In the second, inequality-analysis and scaling techniques are used to establish global exponential stability. The obtained criteria are also applied to the global exponential synchronization of drive-response GDINNs. Several illustrative examples, including applications to pseudorandom number generation and encrypted image transmission, show the effectiveness of the theoretical results.
16. Wang Z, Cao J, Guo Z, Huang L. Generalized stability for discontinuous complex-valued Hopfield neural networks via differential inclusions. Proc Math Phys Eng Sci 2018;474:20180507. PMID: 30602935. PMCID: PMC6304032. DOI: 10.1098/rspa.2018.0507.
Abstract
Some dynamical behaviours of discontinuous complex-valued Hopfield neural networks are discussed in this paper. First, we introduce a method to construct complex-valued set-valued mappings and give some basic definitions for discontinuous complex-valued differential equations. The Leray-Schauder alternative theorem is then used to analyse the existence of equilibria. Lastly, we present the dynamical behaviours, including global stability and convergence in measure, for discontinuous complex-valued neural networks (CVNNs) via differential inclusions. The main contribution of this paper is to extend previous studies on continuous CVNNs to discontinuous ones. Several simulations are given to substantiate the correctness of the proposed results.
Collapse
Affiliation(s)
- Zengyun Wang
- The Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, Southeast University, Nanjing 210096, People's Republic of China
- School of Mathematics, Southeast University, Nanjing 210096, People's Republic of China
- Department of Mathematics, Hunan First Normal University, Changsha 410205, People's Republic of China
- Jinde Cao
- The Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, Southeast University, Nanjing 210096, People's Republic of China
- School of Mathematics, Southeast University, Nanjing 210096, People's Republic of China
- Zhenyuan Guo
- College of Mathematics and Econometrics, Hunan University, Changsha 410082, People's Republic of China
- Lihong Huang
- Changsha University of Science and Technology, Changsha 410114, People's Republic of China
18. Kobayashi M. Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function. IEEE Trans Neural Netw Learn Syst 2018;29:1900-1907. PMID: 28422693. DOI: 10.1109/tnnls.2017.2688322.
Abstract
There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) whose hidden neurons do not employ biases. The situation for CVNNs whose hidden neurons employ biases, however, is very complicated: exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity had not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.
19. Kobayashi M. Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers. IEEE Trans Neural Netw Learn Syst 2018;29:1366-1370. PMID: 28182561. DOI: 10.1109/tnnls.2017.2657781.
Abstract
A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network, but it has low noise tolerance. A symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN: it has twice the storage capacity of CHNNs and SCHNNs and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations among CHNNs, SCHNNs, and RHNNs: an RHNN is uniquely decomposed into a CHNN and an SCHNN, and the Hebbian learning rule for RHNNs decomposes into those for CHNNs and SCHNNs.
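As background, the Hebbian rule for CHNNs that the brief decomposes is conventionally an outer-product rule over stored unit-magnitude patterns; the sketch below shows it up to normalization, without the RHNN/SCHNN variants:

```python
import numpy as np

def hebbian_chnn(Z):
    """Standard Hebbian (outer-product) rule for a CHNN: for unit-magnitude
    training patterns in the columns of Z, W = Z Z^H with zero diagonal."""
    W = Z @ Z.conj().T
    np.fill_diagonal(W, 0)
    return W

# Two illustrative 16-state patterns over 3 neurons.
phases = np.array([[0, 4], [8, 12], [4, 0]]) / 16
Z = np.exp(2j * np.pi * phases)
print(np.round(hebbian_chnn(Z), 3))
```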
20. Kobayashi M. Stability of Rotor Hopfield Neural Networks With Synchronous Mode. IEEE Trans Neural Netw Learn Syst 2018;29:744-748. PMID: 28055918. DOI: 10.1109/tnnls.2016.2635140.
Abstract
A complex-valued Hopfield neural network (CHNN) is a model of a Hopfield neural network with multistate neurons, and its stability conditions have been widely studied; a CHNN in synchronous mode converges to a fixed point or a cycle of length 2. A rotor Hopfield neural network (RHNN) is another multistate Hopfield model, with much higher storage capacity and noise tolerance than CHNNs. We extend the stability theory of CHNNs to RHNNs and investigate the stability of RHNNs trained with the projection rule. Although a CHNN with the projection rule can be trapped in a cycle, an RHNN with the projection rule converges to a fixed point, which is one of the great advantages of RHNNs.
22. Fixed points of symmetric complex-valued Hopfield neural networks. Neurocomputing 2018. DOI: 10.1016/j.neucom.2017.05.006.