1. Yang L, Liu H, Sun D, McLoone S, Liu K, Philip Chen CL. Robust Temporal Link Prediction in Dynamic Complex Networks via Stable Gated Models With Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:6095-6106. [PMID: 38743535] [DOI: 10.1109/tnnls.2024.3398253]
Abstract
Temporal link prediction aims to predict time-varying links by capturing the dynamics within complex networks. However, it suffers from difficulties such as vulnerability to adversarial attacks and an inability to adapt to distinct evolutionary patterns. In this article, we propose a robust temporal link prediction architecture based on stable gated models with reinforcement learning (SAGE-RL), consisting of a state encoding network (SEN) and a self-adaptive policy network (SPN). The former captures network dynamics, while the latter helps the former adapt to the distinct evolutionary patterns of different time periods. Within the SEN, a novel stable gate is introduced to maintain multiple spatiotemporal dependency paths and defend against adversarial attacks. The SPN selects among SEN instances by approximating the optimal action function, thereby adapting to various evolutionary patterns and learning robust temporal and structural features from dynamic complex networks. It is proven that SAGE-RL with integral Lipschitz graph convolution is stable to relative perturbations in dynamic complex networks. Extensive experiments on five real-world graph benchmarks show that SAGE-RL substantially outperforms current state-of-the-art approaches in the precision and stability of temporal link prediction and in its ability to defend against various attacks. We also apply temporal link prediction to shipping transaction networks, where it effectively forecasts potential transaction risks.
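The abstract does not spell out the stable gate or the policy head, so the sketch below illustrates one plausible reading under assumed interfaces: a gate that convexly mixes structural and temporal node embeddings (keeping both spatiotemporal dependency paths open), plus a policy head that greedily selects one of several SEN instances from a graph-level state. All class names, arguments, and the greedy selection rule are hypothetical, not the published SAGE-RL design.

import torch
import torch.nn as nn

class StableGateCell(nn.Module):
    # Hypothetical gated fusion of structural and temporal node embeddings.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_struct, h_temp):
        # h_struct, h_temp: [num_nodes, dim]
        g = torch.sigmoid(self.gate(torch.cat([h_struct, h_temp], dim=-1)))
        # Convex combination preserves both dependency paths.
        return g * h_struct + (1.0 - g) * h_temp

class PolicyHead(nn.Module):
    # Hypothetical SPN head: approximates action values for K SEN instances.
    def __init__(self, state_dim, num_sen):
        super().__init__()
        self.q_head = nn.Linear(state_dim, num_sen)

    def forward(self, state):
        q_values = self.q_head(state)           # [batch, num_sen]
        return torch.argmax(q_values, dim=-1)   # greedy choice of SEN instance

For example, StableGateCell(64)(torch.randn(10, 64), torch.randn(10, 64)) returns fused embeddings of the same shape; in a full system the selected SEN instance would produce h_struct and h_temp for the next snapshot.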
2. Xu Y, Zhang W, Xu X, Li B, Zhang Y. Scalable and Effective Temporal Graph Representation Learning With Hyperbolic Geometry. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:6080-6094. [PMID: 38728127] [DOI: 10.1109/tnnls.2024.3394161]
Abstract
Real-life graphs often exhibit intricate dynamics that evolve continuously over time. To represent such continuous-time dynamic graphs (CTDGs), various temporal graph neural networks (TGNNs) have been developed to model their dynamics and topological structures in Euclidean space. Despite their notable achievements, the performance of Euclidean-based TGNNs is bounded by the representational capacity of Euclidean geometry, particularly for complex graphs with hierarchical and power-law structures: Euclidean space does not have enough room (its volume grows only polynomially with radius) to embed hierarchical structures that expand exponentially, which leads to high-distortion embeddings and suboptimal temporal graph representations. To break this limitation and enhance the representation capabilities of TGNNs, in this article we propose a scalable and effective TGNN with hyperbolic geometry for CTDG representation, called STGN^h. It captures evolving behaviors and stores hierarchical structures simultaneously by integrating a memory-based module and a structure-based module into a unified framework that scales to billion-scale graphs. Concretely, a simple hyperbolic update gate (HuG) is designed as the memory-based module to store temporal dynamics efficiently; for the structure-based module, we propose an effective hyperbolic temporal Transformer (HyT) to capture complex graph structures and generate up-to-date node embeddings. Extensive experimental results on medium-scale and billion-scale graphs demonstrate the superiority of STGN^h for CTDG representation, as it significantly outperforms baselines on various downstream tasks.
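As a reference for how hyperbolic space "makes room" for hierarchies, the snippet below shows the standard curvature -1 Poincare-ball exponential map at the origin and the Poincare distance; these are textbook formulas, not the HuG or HyT modules, and the function names are illustrative.

import torch

def exp_map_origin(v, eps=1e-6):
    # Map a Euclidean tangent vector at the origin into the Poincare ball:
    # exp_0(v) = tanh(||v||) * v / ||v||, so the output always lies inside the ball.
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm

def poincare_distance(x, y, eps=1e-6):
    # Curvature -1 Poincare distance for points strictly inside the unit ball.
    # Distances grow rapidly near the boundary, mirroring the exponential growth
    # of hyperbolic volume that suits hierarchical (tree-like) structures.
    sq = ((x - y) ** 2).sum(dim=-1)
    denom = (1 - (x ** 2).sum(dim=-1)).clamp_min(eps) * (1 - (y ** 2).sum(dim=-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / denom)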
3. Liu F, Tian J, Miranda-Moreno L, Sun L. Adversarial Danger Identification on Temporally Dynamic Graphs. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:4744-4755. [PMID: 37028290] [DOI: 10.1109/tnnls.2023.3252175]
Abstract
Multivariate time series forecasting plays an increasingly critical role in applications such as power management, smart cities, finance, and healthcare. Recent advances in temporal graph neural networks (GNNs) have shown promising results in multivariate time series forecasting owing to their ability to characterize high-dimensional nonlinear correlations and temporal patterns. However, the vulnerability of deep neural networks (DNNs) raises serious concerns about using these models for decision making in real-world applications, and how to defend multivariate forecasting models, especially temporal GNNs, remains largely overlooked. Existing adversarial defense studies focus mostly on static, single-instance classification settings and do not carry over to forecasting because of the generalization challenge and the contradiction issue. To bridge this gap, we propose an adversarial danger identification method for temporally dynamic graphs that effectively protects GNN-based forecasting models. Our method consists of three steps: 1) a hybrid GNN-based classifier to identify dangerous times; 2) approximate linear error propagation to identify the dangerous variates based on the high-dimensional linearity of DNNs; and 3) a scatter filter, controlled by the two identification processes, that reforms the time series with reduced feature erasure. Experiments covering four adversarial attack methods and four state-of-the-art forecasting models demonstrate the effectiveness of the proposed method in defending forecasting models against adversarial attacks.
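The three steps suggest a masking pipeline: flag dangerous timesteps, flag dangerous variates, and repair only their intersection so that clean features are not erased. The sketch below is a guess at that data flow with hypothetical inputs (boolean danger masks) and a simple per-variate-median repair; the paper's hybrid classifier, linear error propagation, and scatter filter are not reproduced here.

import torch

def reform_series(x, time_danger, var_danger):
    # x: [T, N] multivariate series.
    # time_danger: [T] bool mask from step 1 (hybrid GNN-based classifier).
    # var_danger:  [N] bool mask from step 2 (approximate linear error propagation).
    mask = time_danger.unsqueeze(-1) & var_danger.unsqueeze(0)   # [T, N]
    # Step 3 (illustrative only): overwrite just the flagged entries with a
    # per-variate median so unflagged features are left untouched.
    fallback = x.median(dim=0).values.expand_as(x)
    return torch.where(mask, fallback, x)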
4. Wang Z, Yang P, Hu L, Zhang B, Lin C, Lv W, Wang Q. SLAPP: Subgraph-level attention-based performance prediction for deep learning models. Neural Networks 2024; 170:285-297. [PMID: 38000312] [DOI: 10.1016/j.neunet.2023.11.043]
Abstract
The intricacy of the Deep Learning (DL) landscape, brimming with a variety of models, applications, and platforms, poses considerable challenges for the design, optimization, and selection of suitable DL models. One promising avenue for addressing this challenge is the development of accurate performance prediction methods. However, existing methods have critical limitations. Operator-level methods, proficient at predicting the performance of individual operators, often neglect broader graph features, which leads to inaccurate full-network performance predictions. In contrast, graph-level methods excel at overall network prediction by leveraging these graph features but cannot predict the performance of individual operators. To bridge these gaps, we propose SLAPP, a novel subgraph-level performance prediction method. Central to SLAPP is an innovative Graph Neural Network (GNN) variant we developed, the Edge Aware Graph Attention Network (EAGAT), which encodes both node and edge features effectively. Through this approach, SLAPP captures both graph and operator features, providing precise performance predictions for individual operators and entire networks. Moreover, we introduce a mixed loss design with dynamic weight adjustment to reconcile predictive accuracy between individual operators and entire networks. In our experimental evaluation, SLAPP consistently outperforms traditional approaches in prediction accuracy, including on unseen models, and demonstrates superior predictive performance across multiple DL models compared with existing research.
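EAGAT is described only as encoding both node and edge features, so the layer below is a generic edge-aware attention sketch: edge features enter the attention logits alongside the two endpoint embeddings. The layer structure, dimensions, and dense per-destination softmax are assumptions, not the published EAGAT.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareAttention(nn.Module):
    # Generic edge-aware graph attention layer (sketch).
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.w_node = nn.Linear(node_dim, out_dim, bias=False)
        self.w_edge = nn.Linear(edge_dim, out_dim, bias=False)
        self.attn = nn.Linear(3 * out_dim, 1, bias=False)

    def forward(self, h, edge_index, edge_feat):
        # h: [N, node_dim]; edge_index: [2, E]; edge_feat: [E, edge_dim]
        src, dst = edge_index
        z = self.w_node(h)
        e = self.w_edge(edge_feat)
        logits = F.leaky_relu(self.attn(torch.cat([z[src], z[dst], e], dim=-1))).squeeze(-1)
        alpha = torch.zeros_like(logits)
        for node in dst.unique():                       # per-destination softmax
            m = dst == node                             # (dense loop for clarity;
            alpha[m] = torch.softmax(logits[m], dim=0)  # real code would use scatter ops)
        out = torch.zeros_like(z)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * z[src])
        return out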
Collapse
Affiliation(s)
- Zhenyi Wang, Pengfei Yang, Linwei Hu, Bowen Zhang, Chengmin Lin, Wenkai Lv, Quan Wang: School of Computer Science and Technology, Xidian University, Xi'an, 710071, China; The Key Laboratory of Smart Human-Computer Interaction and Wearable Technology of Shaanxi Province, Xi'an, 710071, China.