1. Ren L, Wang H, Mo T, Yang LT. A Lightweight Group Transformer-Based Time Series Reduction Network for Edge Intelligence and Its Application in Industrial RUL Prediction. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:3720-3729. PMID: 38170656. DOI: 10.1109/tnnls.2023.3347227.
Abstract
Recently, deep learning-based models such as the Transformer have achieved strong performance in industrial remaining useful life (RUL) prediction due to their representation ability. In many industrial practices, RUL prediction algorithms are deployed on edge devices for real-time response. However, the high computational cost of deep learning models makes it difficult to meet the requirements of edge intelligence. In this article, a lightweight group Transformer with multihierarchy time-series reduction (GT-MRNet) is proposed to alleviate this problem. Unlike most existing RUL methods, which process the entire time series, GT-MRNet adaptively selects only the necessary time steps to compute the RUL. First, a lightweight group Transformer is constructed to extract features by employing group linear transformations with significantly fewer parameters. Then, a time-series reduction strategy is proposed to adaptively filter out unimportant time steps at each layer. Finally, a multihierarchy learning mechanism is developed to further stabilize the performance of time-series reduction. Extensive experiments on real-world condition datasets demonstrate that the proposed method reduces parameters by up to 74.7% and computation cost by up to 91.8% without sacrificing accuracy.
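For readers who want the group-linear and time-step-reduction ideas in concrete terms, the following PyTorch sketch shows a grouped linear projection and a top-k time-step selection under assumed dimensions; it illustrates the general technique only and is not the authors' GT-MRNet code (the class, sizes, and variable names are hypothetical).

```python
import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    """Splits the feature dimension into independent groups, cutting the
    projection weight count from dim*dim to roughly dim*dim/groups."""
    def __init__(self, dim: int, groups: int):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.proj = nn.ModuleList(
            [nn.Linear(dim // groups, dim // groups) for _ in range(groups)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); each group only sees its own feature slice
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([p(c) for p, c in zip(self.proj, chunks)], dim=-1)

x = torch.randn(8, 50, 64)             # 8 windows, 50 time steps, 64 features (assumed sizes)
layer = GroupLinear(dim=64, groups=4)  # roughly 4x fewer projection weights than nn.Linear(64, 64)
print(layer(x).shape)                  # torch.Size([8, 50, 64])

# Time-step reduction in the spirit of the abstract: score each step, keep the top half.
scores = torch.randn(8, 50)            # stand-in importance scores (one per time step)
keep = scores.topk(k=25, dim=1).indices.sort(dim=1).values
reduced = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, 64))
print(reduced.shape)                   # torch.Size([8, 25, 64])
```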
2. Li Z, Luo S, Liu H, Tang C, Miao J. TTSNet: Transformer-Temporal Convolutional Network-Self-Attention with Feature Fusion for Prediction of Remaining Useful Life of Aircraft Engines. Sensors (Basel) 2025; 25:432. PMID: 39860801. PMCID: PMC11769482. DOI: 10.3390/s25020432.
Abstract
Accurately predicting the remaining useful life (RUL) is crucial for ensuring the safety and reliability of aircraft engine operation. However, aircraft engines operate under harsh conditions characterized by high speed, high temperature, and high load, resulting in high-dimensional and noisy data. This makes feature extraction inadequate, leading to low accuracy in RUL prediction for aircraft engines. To address this issue, a Transformer-TCN-Self-attention network (TTSNet) with feature fusion, organized as a parallel three-branch network, is proposed for predicting the RUL of aircraft engines. The model first applies exponential smoothing to suppress noise in the original signal, followed by normalization. It then uses a parallel three-branch network of a Transformer encoder, a temporal convolutional network (TCN), and multi-head attention to capture both global and local features of the time series. The model further performs feature-dimension weight allocation and fusion through a multi-head self-attention mechanism, emphasizing the contribution of different features. The three sets of features are then fused through a linear layer and concatenation. Finally, a fully connected layer maps the feature matrix to the RUL label, yielding the RUL prediction. The model was validated on the C-MAPSS aircraft engine dataset. Experimental results show that, compared with other RUL models, the RMSE and Score reached 11.02 and 194.6 on dataset FD001 and 13.25 and 874.1 on dataset FD002, respectively. On dataset FD003, the RMSE and Score reached 11.06 and 200.1, and on dataset FD004 they reached 18.26 and 1968.5, demonstrating better RUL prediction performance.
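The parallel three-branch structure described above can be pictured with a short PyTorch sketch. The layer sizes are assumptions, the TCN branch is simplified to a single dilated 1D convolution, and nothing here reproduces the published TTSNet configuration.

```python
import torch
import torch.nn as nn

class ThreeBranchRUL(nn.Module):
    """Illustrative parallel branches (Transformer encoder, dilated conv, attention)
    fused and mapped to one RUL value per input window."""
    def __init__(self, n_features: int = 14, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.tcn = nn.Conv1d(d_model, d_model, kernel_size=3, padding=2, dilation=2)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Sequential(nn.Linear(3 * d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) windows of normalized sensor readings
        h = self.embed(x)
        b1 = self.transformer(h)                             # global context branch
        b2 = self.tcn(h.transpose(1, 2)).transpose(1, 2)     # local temporal-pattern branch
        b3, _ = self.attn(h, h, h)                           # attention branch
        fused = torch.cat([b1, b2, b3], dim=-1).mean(dim=1)  # concatenate and pool over time
        return self.head(fused).squeeze(-1)                  # one RUL value per window

model = ThreeBranchRUL()
print(model(torch.randn(4, 30, 14)).shape)  # torch.Size([4])
```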
Affiliation(s)
- Zhaofei Li: School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644000, China; Key Laboratory of Artificial Intelligence of Sichuan Province, Yibin 644000, China
- Shilin Luo: School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644000, China
- Haiqing Liu: School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644000, China
- Chaobin Tang: School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644000, China
- Jianguo Miao: Key Laboratory of Artificial Intelligence of Sichuan Province, Yibin 644000, China; School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
3. Ren L, Wang H, Laili Y. Diff-MTS: Temporal-Augmented Conditional Diffusion-Based AIGC for Industrial Time Series Toward the Large Model Era. IEEE Transactions on Cybernetics 2024; 54:7187-7197. PMID: 39331548. DOI: 10.1109/tcyb.2024.3462500.
Abstract
Industrial multivariate time series (MTS) offer a critical view of the industrial field, helping people understand the state of machines. However, due to data-collection difficulty and privacy concerns, the data available for building industrial intelligence and industrial large models are far from sufficient. Therefore, industrial time-series data generation is of great importance. Existing research usually applies generative adversarial networks (GANs) to generate MTS. However, GANs suffer from an unstable training process due to the joint training of the generator and discriminator. This article proposes a temporal-augmented conditional adaptive diffusion model, termed Diff-MTS, for MTS generation. It aims to better handle the complex temporal dependencies and dynamics of MTS data. Specifically, a conditional adaptive maximum-mean discrepancy (Ada-MMD) method is proposed for controlled MTS generation, which does not require a classifier to control generation and improves the condition consistency of the diffusion model. Moreover, a temporal decomposition reconstruction UNet (TDR-UNet) is established to capture complex temporal patterns and further improve the quality of the synthetic time series. Comprehensive experiments on the C-MAPSS and FEMTO datasets demonstrate that Diff-MTS performs substantially better in terms of diversity, fidelity, and utility than GAN-based methods. These results show that Diff-MTS facilitates the generation of industrial data, contributing to intelligent maintenance and the construction of industrial large models.
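To make the maximum-mean-discrepancy term concrete, here is a plain Gaussian-kernel MMD (biased estimator) between real and synthetic windows in PyTorch. The paper's conditional adaptive Ada-MMD goes beyond this simple, unconditioned sketch; the data shapes below are assumptions.

```python
import torch

def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD estimate with a Gaussian kernel between samples x (n, d) and y (m, d)."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(a, b).pow(2)           # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

real = torch.randn(128, 30 * 14)       # e.g. 30-step, 14-sensor windows, flattened
synthetic = torch.randn(128, 30 * 14)  # stand-in for diffusion-model samples
print(gaussian_mmd(real, synthetic))   # scalar discrepancy, near 0 when distributions match
```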
4. Wang D, Wang Y, Xian X, Cheng B. An Adaptation-Aware Interactive Learning Approach for Multiple Operational Condition-Based Degradation Modeling. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:17519-17533. PMID: 37682649. DOI: 10.1109/tnnls.2023.3305601.
Abstract
Although degradation modeling has been widely applied to monitor the degradation process and predict the remaining useful lifetime (RUL) of operating machinery units from multiple sensor signals, three challenging issues remain. The first is that units in engineering cases usually work under multiple operational conditions, causing the distribution of sensor signals to vary across conditions; characterizing time-varying conditions as a distribution-shift problem remains unexplored. The second is that sensor-signal fusion and degradation-status modeling are separated into two independent steps in most existing methods, which ignores the intrinsic correlation between the two parts. The last is how to find an accurate health index (HI) of units using prior knowledge of degradation. To tackle these issues, this article proposes an adaptation-aware interactive learning (AAIL) approach for degradation modeling. First, a condition-invariant HI is developed to handle time-varying operating conditions. Second, an interactive framework based on the fusion and degradation model is constructed, which naturally integrates a supervised learner and an unsupervised learner. To estimate the model parameters of AAIL, we propose an interactive training algorithm that shares learned degradation and fusion information during model training. A case study on a degradation dataset of aircraft engines demonstrates that the proposed AAIL outperforms related benchmark methods.
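As a rough illustration only, the sketch below couples a supervised fusion term with an unsupervised degradation-trend term in one training loop; every component (data, labels, linear trend) is a hypothetical stand-in, and the actual AAIL estimation algorithm is not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
signals = torch.randn(256, 14)        # stand-in sensor readings at 256 time points
labels = torch.linspace(1.0, 0.0, 256)  # stand-in supervised health targets over the run
t = torch.linspace(0.0, 1.0, 256)

fusion = nn.Linear(14, 1)             # supervised learner: fuse sensors into a health index (HI)
trend = nn.Parameter(torch.zeros(2))  # unsupervised learner: linear degradation trend a + b*t

opt = torch.optim.Adam(list(fusion.parameters()) + [trend], lr=1e-2)
for _ in range(200):
    hi = fusion(signals).squeeze(-1)                              # current HI from the fusion model
    fit = trend[0] + trend[1] * t                                 # degradation trend fitted to the HI
    loss = (hi - labels).pow(2).mean() + (hi - fit).pow(2).mean() # supervised + unsupervised terms
    opt.zero_grad()
    loss.backward()
    opt.step()
print(trend.detach())  # learned trend parameters, shared back with the fusion step each iteration
```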
5. Zhang Y, Zhang W, Wang S, Lin N, Yu Y, He Y, Wang B, Jiang H, Lin P, Xu X, Qi X, Wang Z, Zhang X, Shang D, Liu Q, Cheng KT, Liu M. Semantic memory-based dynamic neural network using memristive ternary CIM and CAM for 2D and 3D vision. Science Advances 2024; 10:eado1058. PMID: 39141720. PMCID: PMC11323881. DOI: 10.1126/sciadv.ado1058.
Abstract
The brain is dynamic, associative, and efficient. It reconfigures itself by associating inputs with past experiences, with fused memory and processing. In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing. We propose a hardware-software co-design: a semantic memory-based dynamic neural network using a memristor. The network associates incoming data with past experience stored as semantic vectors. The network and the semantic memory are physically implemented on noise-robust ternary memristor-based computing-in-memory (CIM) and content-addressable memory (CAM) circuits, respectively. We validate our co-design, using a 40-nm memristor macro, on ResNet and PointNet++ for classifying images and three-dimensional points from the MNIST and ModelNet datasets, achieving not only accuracy on par with software but also 48.1% and 15.9% reductions in computational budget. Moreover, it delivers 77.6% and 93.3% reductions in energy consumption.
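A software-only sketch of the association step (the role played by the ternary CAM) is shown below: vectors are quantized to {-1, 0, +1} and matched by similarity. The memristor circuits, the ResNet/PointNet++ backbones, and the reported energy figures are of course outside its scope, and the threshold and sizes are assumptions.

```python
import torch

def ternarize(v: torch.Tensor, thresh: float = 0.3) -> torch.Tensor:
    """Quantize a real-valued vector to {-1, 0, +1} with a dead zone around zero."""
    return torch.where(v.abs() < thresh, torch.zeros_like(v), v.sign())

memory = ternarize(torch.randn(10, 64))  # 10 stored semantic vectors (past experience)
query = ternarize(torch.randn(64))       # semantic vector computed from the incoming sample
best = (memory @ query).argmax()         # content-addressable lookup by dot-product similarity
print(f"input associated with stored entry {int(best)}")
# A dynamic network would use `best` to activate only the sub-network associated with
# that semantic entry, skipping the rest of the computation.
```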
Affiliation(s)
- Yue Zhang: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Woyu Zhang: Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China; Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Shaocong Wang: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Ning Lin: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Yifei Yu: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Yangu He: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Bo Wang: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Hao Jiang: State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Peng Lin: College of Computer Science and Technology, Zhejiang University, Zhejiang 310027, China
- Xiaoxin Xu: Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China; Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China
- Xiaojuan Qi: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- Zhongrui Wang: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Xumeng Zhang: State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Dashan Shang: Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China; Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qi Liu: Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Kwang-Ting Cheng: ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China; Department of Electronic and Computer Engineering, the Hong Kong University of Science and Technology, Hong Kong, China
- Ming Liu: Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
6. Fan Z, Li W, Chang KC. A Two-Stage Attention-Based Hierarchical Transformer for Turbofan Engine Remaining Useful Life Prediction. Sensors (Basel) 2024; 24:824. PMID: 38339540. PMCID: PMC10857698. DOI: 10.3390/s24030824.
Abstract
The accurate estimation of the remaining useful life (RUL) for aircraft engines is essential for ensuring safety and uninterrupted operations in the aviation industry. Numerous investigations have leveraged the success of the attention-based Transformer architecture in sequence modeling tasks, particularly in its application to RUL prediction. These studies primarily focus on utilizing onboard sensor readings as input predictors. While various Transformer-based approaches have demonstrated improvement in RUL predictions, their exclusive focus on temporal attention within multivariate time series sensor readings, without considering sensor-wise attention, raises concerns about potential inaccuracies in RUL predictions. To address this concern, our paper proposes a novel solution in the form of a two-stage attention-based hierarchical Transformer (STAR) framework. This approach incorporates a two-stage attention mechanism, systematically addressing both temporal and sensor-wise attentions. Furthermore, we enhance the STAR RUL prediction framework by integrating hierarchical encoder-decoder structures to capture valuable information across different time scales. By conducting extensive numerical experiments with the CMAPSS datasets, we demonstrate that our proposed STAR framework significantly outperforms the current state-of-the-art models for RUL prediction.
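The two-stage (temporal plus sensor-wise) attention idea can be sketched in a few lines of PyTorch. Dimensions, embeddings, and pooling choices here are illustrative assumptions rather than the published STAR architecture.

```python
import torch
import torch.nn as nn

class TwoStageAttention(nn.Module):
    """Self-attention across time steps, then across sensor channels (via a transpose)."""
    def __init__(self, n_sensors: int = 14, seq_len: int = 30, d_model: int = 32):
        super().__init__()
        self.time_embed = nn.Linear(n_sensors, d_model)
        self.time_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.sensor_embed = nn.Linear(seq_len, d_model)
        self.sensor_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensors) window of sensor readings
        t = self.time_embed(x)                    # tokens = time steps (temporal attention)
        t, _ = self.time_attn(t, t, t)
        s = self.sensor_embed(x.transpose(1, 2))  # tokens = sensor channels (sensor-wise attention)
        s, _ = self.sensor_attn(s, s, s)
        pooled = torch.cat([t.mean(dim=1), s.mean(dim=1)], dim=-1)
        return self.head(pooled).squeeze(-1)      # one RUL estimate per window

model = TwoStageAttention()
print(model(torch.randn(4, 30, 14)).shape)  # torch.Size([4])
```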
Affiliation(s)
- Zhengyang Fan: Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA 22030, USA
- Kuo-Chu Chang: Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA 22030, USA