1. Lin B, Qian G, Ruan Z, Qian J, Wang S. Complex quantized minimum error entropy with fiducial points: theory and application in model regression. Neural Netw 2025;187:107305. PMID: 40068497. DOI: 10.1016/j.neunet.2025.107305
Abstract
Minimum error entropy with fiducial points (MEEF) has gained significant attention due to its excellent performance in mitigating the adverse effects of non-Gaussian noise in machine learning and signal processing. However, the original MEEF algorithm suffers from high computational complexity because it requires a double summation over all error samples. The quantized MEEF (QMEEF), proposed by Zheng et al., alleviates this computational burden through strategic quantization, providing a more efficient solution. In this paper, we extend these techniques to the complex domain, introducing complex QMEEF (CQMEEF). We introduce and theoretically prove the fundamental properties and convergence of CQMEEF. Furthermore, we apply the method to the training of a range of linear-in-parameters (LIP) models, demonstrating its broad applicability. Experimental results show that CQMEEF achieves high precision in regression tasks on various noise-corrupted datasets, remains effective under unfavorable conditions, and surpasses existing methods on key performance metrics. Consequently, CQMEEF not only offers an efficient computational alternative but also opens new avenues for handling complex-valued data in regression tasks.
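As a rough illustration of the idea (not the authors' exact formulation), the sketch below evaluates a quantized MEEF-style objective for complex-valued errors; the Gaussian kernel, the balance parameter lam, and the externally supplied codebook are all illustrative assumptions.

```python
import numpy as np

def gauss_kernel(x, sigma):
    # Gaussian kernel on the modulus, so complex-valued arguments are handled
    return np.exp(-np.abs(x) ** 2 / (2.0 * sigma ** 2))

def cqmeef_objective(errors, codebook, sigma=1.0, lam=0.5):
    # Quantized MEEF-style objective (sketch): the O(N^2) double sum over
    # error pairs is replaced by an N x |C| sum against a small codebook,
    # and a fiducial term anchors the errors at the origin.
    e = np.asarray(errors, dtype=complex)      # N error samples
    c = np.asarray(codebook, dtype=complex)    # |C| << N codewords
    entropy_term = gauss_kernel(e[:, None] - c[None, :], sigma).mean()
    fiducial_term = gauss_kernel(e, sigma).mean()
    return lam * entropy_term + (1.0 - lam) * fiducial_term
```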
Affiliation(s)
- Bingqing Lin
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
- Guobing Qian
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Zongli Ruan
- College of Science, China University of Petroleum, Qingdao 266580, China
- Junhui Qian
- School of Microelectronic and Communication Engineering, Chongqing University, Chongqing 400030, China
- Shiyuan Wang
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
2. Zhang D, Zhang T, Tao Z, Chen CLP. Broad learning system based on fractional order optimization. Neural Netw 2025;188:107468. PMID: 40273541. DOI: 10.1016/j.neunet.2025.107468
Abstract
Due to its efficient incremental learning performance, the broad learning system (BLS) has received widespread attention in the field of machine learning. Research has shown that using the maximum correntropy criterion (MCC) can further improve the performance of broad learning in handling outliers. Recent studies have also shown that differential equations can be used to represent the forward propagation of deep learning, and the MCC-based BLS optimizes its parameters through differentiation, which indicates that differential methods can likewise be used for BLS optimization. However, conventional methods rely on integer-order differential equations and therefore ignore system information between the integer orders. Because fractional differential equations possess a long-term memory property, this paper introduces fractional-order optimization into the BLS, yielding FOBLS, to better enhance the data processing capability of the BLS. First, a fractional-order BLS is constructed, incorporating long-term memory characteristics into the weight optimization process. In addition, a fractional-order dynamic incremental learning scheme further strengthens network optimization. The experimental results demonstrate the excellent performance of the proposed method.
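For intuition only, a generic fractional-order gradient step with Grünwald-Letnikov memory weights is sketched below; this is not the paper's exact FOBLS update, and alpha, lr, and the history length are illustrative choices.

```python
import numpy as np

def gl_weights(alpha, K):
    # Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k); they decay
    # slowly, which is the long-term memory property of fractional calculus.
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_step(w, grad_history, lr=0.01, alpha=0.9):
    # One fractional-order descent step: the update direction is a weighted
    # sum of the current and all past gradients (most recent first).
    coeffs = gl_weights(alpha, len(grad_history))
    direction = sum(c * g for c, g in zip(coeffs, grad_history[::-1]))
    return w - lr * direction
```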
Affiliation(s)
- Dan Zhang
- College of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, 116600, China.
- Tong Zhang
- Computer Science and Engineering College, South China University of Technology, 510641, Guangzhou, China; Guangdong Provincial Key Laboratory of AI Large Model and Intelligent Cognition, 510006, Guangzhou, China; Pazhou Lab, 510335, Guangzhou, China; Engineering Research Center of the Ministry of Education on Health Intelligent Perception and Paralleled Digital-Human, 510335, Guangzhou, China.
- Zhang Tao
- College of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, 116600, China; Liaoning Provincial Engineering Research Center of Powertrain Design for New Energy Vehicle, Dalian, 116600, China.
- C L Philip Chen
- Computer Science and Engineering College, South China University of Technology, 510641, Guangzhou, China.
3. Qian W, Tu Y, Huang J, Shu W, Cheung YM. Partial Multilabel Learning Using Noise-Tolerant Broad Learning System With Label Enhancement and Dimensionality Reduction. IEEE Transactions on Neural Networks and Learning Systems 2025;36:3758-3772. PMID: 38289837. DOI: 10.1109/tnnls.2024.3352285
Abstract
Partial multilabel learning (PML) addresses noisy supervision in which each training instance is annotated with an overcomplete set of candidate labels, of which only a subset is valid. Using label enhancement techniques, researchers have estimated the probability of each label being ground truth. However, because the enhancement is performed in the noisy label space, existing partial multilabel label enhancement methods struggle to achieve satisfactory results. Besides, few methods simultaneously address the ambiguity problem, the redundancy of the feature space, and the efficiency of the model in PML. To address these issues, this article presents a novel joint partial multilabel framework using broad learning systems (namely BLS-PML) with three innovative mechanisms: 1) a trustworthy label space is reconstructed through a novel label enhancement method to avoid the bias caused by noisy labels; 2) a low-dimensional feature space is obtained by a confidence-based dimensionality reduction method to reduce the effect of redundancy in the feature space; and 3) a noise-tolerant BLS is proposed by adding a dimensionality reduction layer and a trustworthy label layer to deal with the PML problem. We evaluate it on six real-world and seven synthetic datasets, using eight state-of-the-art partial multilabel algorithms as baselines and six evaluation metrics. Our method significantly outperforms the baselines in about 80% of the 144 experimental scenarios, demonstrating its robustness and effectiveness in handling partial multilabel tasks.
4. Liu L, Chen J, Liu T, Chen CLP, Yang B. Dynamic Graph Regularized Broad Learning With Marginal Fisher Representation for Noisy Data Classification. IEEE Transactions on Cybernetics 2025;55:50-63. PMID: 39405152. DOI: 10.1109/tcyb.2024.3471919
Abstract
Broad learning system (BLS) is an effective neural network requiring no deep architecture; however, it is somewhat fragile to noisy data. Previous robust broad models map features directly from the raw data, so they inevitably learn useless or even harmful features for data representation when the inputs are corrupted by noise and outliers. To address this concern, a discriminative and robust network named dynamic graph regularized broad learning (DGBL) with marginal Fisher representation is proposed for noisy data classification. Unlike previous works, DGBL eliminates the effect of noise before the random feature mapping via the proposed robust and dynamic marginal Fisher analysis (RDMFA) algorithm. RDMFA extracts more robust and informative representations for classification from the latent clean data space with dynamically generated graphs. Furthermore, the dynamic graphs learned by RDMFA are incorporated as regularization terms into the objective of DGBL to enhance the discrimination capacity of the proposed network. Extensive quantitative and qualitative experiments on numerous benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art methods.
5. Guo W, Yu J, Zhou C, Yuan X, Wang Z. Bidimensionally partitioned online sequential broad learning system for large-scale data stream modeling. Sci Rep 2024;14:32009. PMID: 39738996. DOI: 10.1038/s41598-024-83563-5
Abstract
Incremental broad learning system (IBLS) is an effective and efficient incremental learning method based on the broad learning paradigm. Owing to its streamlined network architecture and flexible dynamic update scheme, IBLS can achieve rapid incremental reconstruction on the basis of the previous model without retraining from scratch, which makes it adept at handling streaming data. However, two prominent deficiencies persist in IBLS and constrain its further promotion in large-scale data stream scenarios. First, IBLS must retain all historical data and perform the associated calculations during incremental learning, so its computational overhead and storage burden grow over time, putting the efficacy of the algorithm at risk for massive or unbounded data streams. Additionally, due to the random generation rule of hidden nodes, IBLS generally requires a large network size to guarantee approximation accuracy, and the resulting high-dimensional matrix calculations pose a further challenge to the updating efficiency of the model. To address these issues, we propose a novel bidimensionally partitioned online sequential broad learning system (BPOSBLS). The core idea of BPOSBLS is to partition the high-dimensional broad feature matrix along both the instance dimension and the feature dimension, thereby decomposing one large least-squares problem into multiple smaller ones that can be solved individually. Doing so substantially diminishes the scale and computational complexity of the original high-order model, significantly improving its learning efficiency and usability for large-scale complex learning tasks. Meanwhile, an ingenious recursive computation method called partitioned recursive least squares is devised to solve BPOSBLS. This method uses only the current online samples for iterative updating and discards previous historical samples, rendering BPOSBLS a lightweight online sequential learning algorithm with consistently low computational cost and storage requirements. Theoretical analyses and simulation experiments demonstrate the effectiveness and superiority of the proposed algorithm.
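For intuition, the plain recursive least-squares building block that partitioned schemes of this kind apply per block looks like the following sketch; the partitioning logic itself is omitted, and reg is an assumed regularization constant.

```python
import numpy as np

class BlockRLS:
    # Recursive least squares for one partition (sketch): only the current
    # sample is needed per update, so storage stays constant over the stream.
    def __init__(self, dim, reg=1e-2):
        self.P = np.eye(dim) / reg     # inverse autocorrelation estimate
        self.w = np.zeros(dim)         # output weights for this partition

    def update(self, x, y):
        # Rank-1 Sherman-Morrison update: P <- P - (P x x^T P)/(1 + x^T P x)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.w = self.w + gain * (y - x @ self.w)
        self.P = self.P - np.outer(gain, Px)
```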
Affiliation(s)
- Wei Guo
- Jiangsu Provincial University Key Lab of Child Cognitive Development and Mental Health, Yancheng Teachers University, Yancheng, 224002, China
- College of Information Engineering, Yancheng Teachers University, Yancheng, 224002, China
- Jianjiang Yu
- Jiangsu Provincial University Key Lab of Child Cognitive Development and Mental Health, Yancheng Teachers University, Yancheng, 224002, China.
- Caigen Zhou
- College of Information Engineering, Yancheng Teachers University, Yancheng, 224002, China
- Xiaofeng Yuan
- College of Information Engineering, Yancheng Teachers University, Yancheng, 224002, China
- Zhanxiu Wang
- College of Information Engineering, Yancheng Teachers University, Yancheng, 224002, China
6. Liu L, Chen J, Yang B, Feng Q, Chen CLP. When Broad Learning System Meets Label Noise Learning: A Reweighting Learning Framework. IEEE Transactions on Neural Networks and Learning Systems 2024;35:18512-18524. PMID: 37788190. DOI: 10.1109/tnnls.2023.3317255
Abstract
Broad learning system (BLS) is a novel neural network with efficient learning and expansion capacity, but it is sensitive to noise. Accordingly, existing robust broad models try to suppress noise by assigning each sample an appropriate scalar weight that tunes down the contribution of noisy samples in network training. However, they disregard the useful information in the noncorrupted elements hidden inside noisy samples, leading to unsatisfactory performance. To this end, a novel BLS with an adaptive reweighting (BLS-AR) strategy is proposed in this article for the classification of data with label noise. Different from previous works, BLS-AR learns for each sample a weight vector rather than a scalar weight to indicate the noise degree of each element in the sample, which extends the reweighting strategy from the sample level to the element level. This enables the proposed network to precisely identify noisy elements and thus highlight the contribution of informative ones in training a more accurate representation model. Thanks to the separability of the model, the proposed network can be divided into several subnetworks, each of which can be trained efficiently. In addition, three corresponding incremental learning algorithms for BLS-AR are developed for adding new samples or expanding the network. Substantial experiments demonstrate the effectiveness and robustness of the proposed BLS-AR model.
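To make the sample-level versus element-level distinction concrete, here is a minimal sketch under an assumed Gaussian weighting rule; the actual BLS-AR weight update is derived differently in the paper.

```python
import numpy as np

def elementwise_weights(residuals, sigma=1.0):
    # Element-level reweighting (sketch): each entry of the residual matrix
    # gets its own weight, so clean elements inside a noisy sample keep
    # contributing instead of being discarded with the whole sample.
    return np.exp(-np.asarray(residuals) ** 2 / (2.0 * sigma ** 2))

# Usage: for a residual matrix R = Y - H @ W of shape (n_samples, n_outputs),
# elementwise_weights(R) has the same shape, whereas a sample-level scheme
# would collapse each row to a single scalar weight.
```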
7. Zhao H, Lu X. Broad learning system based on maximum multi-kernel correntropy criterion. Neural Netw 2024;179:106521. PMID: 39042948. DOI: 10.1016/j.neunet.2024.106521
Abstract
The broad learning system (BLS) is an effective machine learning model that exhibits excellent feature extraction ability and fast training speed. However, the traditional BLS is derived from the minimum mean square error (MMSE) criterion, which is highly sensitive to non-Gaussian noise. To enhance the robustness of BLS, this paper reconstructs the objective function of BLS based on the maximum multi-kernel correntropy criterion (MMKCC) and obtains a new robust variant of BLS (MKC-BLS). For the multitude of parameters involved in MMKCC, an effective parameter optimization method is presented. The fixed-point iteration method is employed to further optimize the model, and a reliable convergence proof is provided. Compared with existing robust variants of BLS, MKC-BLS exhibits superior performance in non-Gaussian noise environments, particularly multi-modal noise environments. Experiments on multiple public datasets and a real application validate the efficacy of the proposed method.
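As a minimal sketch of the criterion itself (the kernel bandwidths and mixture coefficients below are illustrative; the paper optimizes them rather than fixing them):

```python
import numpy as np

def mmkc_objective(errors, sigmas=(0.5, 1.0, 2.0), coeffs=None):
    # Multi-kernel correntropy (sketch): a convex mixture of Gaussian kernels
    # with different bandwidths, averaged over the error samples. Training
    # under MMKCC maximizes this quantity; a mixture of bandwidths lets the
    # criterion cover multi-modal noise better than a single kernel.
    e = np.asarray(errors)
    if coeffs is None:
        coeffs = np.full(len(sigmas), 1.0 / len(sigmas))
    return sum(a * np.exp(-e ** 2 / (2.0 * s ** 2)).mean()
               for a, s in zip(coeffs, sigmas))
```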
Affiliation(s)
- Haiquan Zhao
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, China.
- Xin Lu
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, China
8. He Y, Atia GK. Robust Low-Tubal-Rank Tensor Completion Based on Tensor Factorization and Maximum Correntropy Criterion. IEEE Transactions on Neural Networks and Learning Systems 2024;35:14603-14617. PMID: 37279124. DOI: 10.1109/tnnls.2023.3280086
Abstract
The goal of tensor completion is to recover a tensor from a subset of its entries, often by exploiting its low-rank property. Among several useful definitions of tensor rank, the low tubal rank was shown to give a valuable characterization of the inherent low-rank structure of a tensor. While some low-tubal-rank tensor completion algorithms with favorable performance have been recently proposed, these algorithms utilize second-order statistics to measure the error residual, which may not work well when the observed entries contain large outliers. In this article, we propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers. To efficiently optimize the proposed objective, we leverage a half-quadratic minimization technique whereby the optimization is transformed to a weighted low-tubal-rank tensor factorization problem. Subsequently, we propose two simple and efficient algorithms to obtain the solution and provide their convergence and complexity analysis. Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
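The half-quadratic device mentioned above has a simple computational core. Under the usual Gaussian-kernel choice it reduces to the weight step sketched below; the bandwidth sigma and the surrounding alternating loop are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def half_quadratic_weights(residuals, sigma=1.0):
    # Step (i) of the half-quadratic scheme: Gaussian weights computed from
    # the current residuals; entries with large residuals (likely outliers)
    # receive weights near zero and barely influence the next fit.
    return np.exp(-np.asarray(residuals) ** 2 / (2.0 * sigma ** 2))

# Step (ii) would fix these weights and solve the resulting weighted
# low-tubal-rank factorization, a quadratic subproblem; alternating
# (i) and (ii) drives the correntropy objective upward.
```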
9. Liu L, Liu T, Chen CLP, Wang Y. Modal-Regression-Based Broad Learning System for Robust Regression and Classification. IEEE Transactions on Neural Networks and Learning Systems 2024;35:12344-12357. PMID: 37030755. DOI: 10.1109/tnnls.2023.3256999
Abstract
A novel neural network, namely the broad learning system (BLS), has shown impressive performance on various regression and classification tasks. Nevertheless, most BLS models may suffer serious performance degradation on contaminated data, since they are derived under the least-squares criterion, which is sensitive to noise and outliers. To enhance model robustness, in this article we propose a modal-regression-based BLS (MRBLS) to tackle regression and classification tasks on data corrupted by noise and outliers. Specifically, modal regression is adopted to train the output weights instead of the minimum mean square error (MMSE) criterion. Moreover, the l2,1-norm-induced constraint is used to encourage row sparsity of the connection weight matrix and achieve feature selection. To train the network effectively and efficiently, half-quadratic theory is used to optimize MRBLS. The validity and robustness of the proposed method are verified on various regression and classification datasets. The experimental results demonstrate that MRBLS achieves better performance than existing state-of-the-art BLS methods in terms of both accuracy and robustness.
10. Shen J, Zhao H, Deng W. Broad Learning System under Label Noise: A Novel Reweighting Framework with Logarithm Kernel and Mixture Autoencoder. Sensors (Basel) 2024;24:4268. PMID: 39001047. PMCID: PMC11244421. DOI: 10.3390/s24134268
Abstract
The Broad Learning System (BLS) has demonstrated strong performance across a variety of problems. However, BLS based on the Minimum Mean Square Error (MMSE) criterion is highly sensitive to label noise. To enhance the robustness of BLS in environments with label noise, this paper designs a function called the Logarithm Kernel (LK) to reweight samples when computing the output weights during BLS training, yielding a Logarithm Kernel-based BLS (L-BLS). Additionally, for image databases with numerous features, a Mixture Autoencoder (MAE) is designed to construct more representative feature nodes for BLS in complex label noise environments; two corresponding BLS versions, MAEBLS and L-MAEBLS, are also developed. Extensive experiments validate the robustness and effectiveness of the proposed L-BLS and show that the MAE provides more representative feature nodes for the corresponding versions of BLS.
Affiliation(s)
- Jiuru Shen
- College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
- Huimin Zhao
- College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
- Wu Deng
- College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
11. Zhang Y, Fang Z, Fan J. Generalization analysis of deep CNNs under maximum correntropy criterion. Neural Netw 2024;174:106226. PMID: 38490117. DOI: 10.1016/j.neunet.2024.106226
Abstract
Convolutional neural networks (CNNs) have gained immense popularity in recent years, finding utility in diverse fields such as image recognition, natural language processing, and bioinformatics. Despite the remarkable progress made in deep learning theory, most studies on CNNs, especially in regression tasks, rely heavily on the least squares loss function. However, there are situations where such learning algorithms may not suffice, particularly in the presence of heavy-tailed noise or outliers. This predicament emphasizes the necessity of exploring alternative loss functions that can handle such scenarios more effectively, thereby unleashing the true potential of CNNs. In this paper, we investigate the generalization error of deep CNNs with the rectified linear unit (ReLU) activation function for robust regression problems within an information-theoretic learning framework. Our study demonstrates that when the regression function exhibits an additive ridge structure and the noise possesses a finite pth moment, the empirical risk minimization scheme generated by the maximum correntropy criterion and deep CNNs achieves fast convergence rates. Notably, these rates match, up to a logarithmic factor, the minimax optimal convergence rates attained by fully connected neural networks with the Huber loss function. Additionally, we establish the convergence rates of deep CNNs under the maximum correntropy criterion when the regression function resides in a Sobolev space on the sphere.
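For context, the correntropy-induced loss that underlies the maximum correntropy criterion with a Gaussian kernel of bandwidth sigma is commonly written as

```latex
\ell_\sigma\bigl(y, f(x)\bigr) \;=\; \sigma^{2}\left(1 - \exp\!\left(-\frac{\bigl(y - f(x)\bigr)^{2}}{2\sigma^{2}}\right)\right)
```

It behaves like the least squares loss for small residuals and saturates at sigma squared for large ones, which is what yields robustness to heavy-tailed noise.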
Affiliation(s)
- Yingqiao Zhang
- Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong, China.
- Zhiying Fang
- Institute of Applied Mathematics, Shenzhen Polytechnic University, Shahexi Road 4089, Shenzhen, 518000, Guangdong, China.
- Jun Fan
- Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong, China.
12. Liu Z, He X. Dynamic Submodular-Based Learning Strategy in Imbalanced Drifting Streams for Real-Time Safety Assessment in Nonstationary Environments. IEEE Transactions on Neural Networks and Learning Systems 2024;35:3038-3051. PMID: 37494171. DOI: 10.1109/tnnls.2023.3294788
Abstract
The design of real-time safety assessment (RTSA) approaches in nonstationary environments is meaningful for reducing the possibility of significant losses, but several challenging problems need to be carefully considered. The performance of existing approaches degrades in settings with imbalanced drifting streams; in such cases, incremental model updating must be explored and the query strategy must be well designed. This article investigates a dynamic submodular-based learning strategy to address these issues. Specifically, an efficient incremental update procedure is designed on the structure of the broad learning system (BLS), which benefits the detection of concept drift. Furthermore, a novel dynamic submodular-based annotation scheme with an activation interval strategy is proposed to select valuable samples in imbalanced drifting streams. A lower bound on the annotation value is also proven theoretically, together with a novel drift adaptation mechanism. Numerous experiments are conducted on real data from the JiaoLong deep-sea manned submersible. The experimental results show that the proposed approach achieves better assessment accuracy than typical existing approaches.
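For intuition, the classic greedy skeleton for maximizing a monotone submodular annotation-value function looks like the sketch below; the paper's dynamic variant adds an activation interval and drift awareness on top of this idea, and the value function here is a hypothetical callable.

```python
def greedy_select(candidates, value, budget):
    # Greedy maximization of a monotone submodular set function (sketch):
    # each round annotates the sample with the largest marginal value.
    # The classic (1 - 1/e) approximation guarantee motivates this style
    # of query strategy.
    chosen = []
    for _ in range(budget):
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda c: value(chosen + [c]) - value(chosen))
        chosen.append(best)
    return chosen
```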
13. Wang T, Zhang M, Zhang J, Ng WWY, Chen CLP. BASS: Broad Network Based on Localized Stochastic Sensitivity. IEEE Transactions on Neural Networks and Learning Systems 2024;35:1681-1695. PMID: 35830397. DOI: 10.1109/tnnls.2022.3184846
Abstract
The training of the standard broad learning system (BLS) concerns the optimization of its output weights via the minimization of both the training mean square error (MSE) and a penalty term. However, this degrades the generalization capability and robustness of BLS in complex and noisy environments, especially when small perturbations or noise appear in the input data. Therefore, this work proposes a broad network based on the localized stochastic sensitivity (BASS) algorithm to tackle noise and input perturbations from a local perturbation perspective. The localized stochastic sensitivity (LSS) improves the network's noise robustness by considering unseen samples located within a Q-neighborhood of the training samples, which enhances the generalization capability of BASS with respect to noisy and perturbed data. Three incremental learning algorithms are then derived to update BASS quickly when new samples arrive or the network needs to be expanded, without retraining the entire model. Owing to the inherent advantages of LSS, extensive experimental results on 13 benchmark datasets show that BASS yields better accuracy on various regression and classification problems. For instance, BASS uses fewer parameters (12.6 million) to yield 1% higher Top-1 accuracy compared with AlexNet (60 million parameters) on the large-scale ImageNet (ILSVRC2012) dataset.
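A minimal Monte Carlo sketch of the LSS quantity follows, assuming a uniform Q-neighborhood and a generic callable model; the paper's formulation may differ, and this only illustrates the quantity being controlled.

```python
import numpy as np

def lss_estimate(model, X, q=0.1, n_draws=20, seed=0):
    # Localized stochastic sensitivity (sketch): expected squared output
    # change over random perturbations drawn from the Q-neighborhood of
    # each training sample. A BASS-style objective penalizes this term.
    rng = np.random.default_rng(seed)
    base = model(X)
    changes = []
    for _ in range(n_draws):
        delta = rng.uniform(-q, q, size=X.shape)
        changes.append(np.mean((model(X + delta) - base) ** 2))
    return float(np.mean(changes))
```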
14. Zheng Y, Wang S, Chen B. Quantized minimum error entropy with fiducial points for robust regression. Neural Netw 2023;168:405-418. PMID: 37804744. DOI: 10.1016/j.neunet.2023.09.034
Abstract
Minimum error entropy with fiducial points (MEEF) has received a great deal of attention owing to its outstanding ability to curb the negative influence of non-Gaussian noise in machine learning and signal processing. However, estimating the information potential of MEEF involves a double summation over all available error samples, which can create a large computational burden in many practical scenarios. In this paper, an efficient quantization method is therefore adopted to represent the primary set of error samples with a smaller subset, generating a quantized MEEF (QMEEF). Some basic properties of QMEEF are presented and proved theoretically. In addition, we apply the new criterion to train a class of linear-in-parameters models, including the commonly used linear regression model, the random vector functional link network, and the broad learning system as special cases. Experimental results on various datasets demonstrate the desirable performance of the proposed methods on regression tasks with contaminated data.
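A minimal sketch of the quantization step, assuming a simple online nearest-codeword rule with threshold eps (the specific quantizer used in the paper may differ):

```python
import numpy as np

def quantize(errors, eps):
    # Online quantization (sketch): an error sample becomes a new codeword
    # only if it lies farther than eps from every existing codeword. The
    # MEEF double sum then runs over the small codebook instead of all N
    # samples, cutting the cost from O(N^2) toward O(N * |C|).
    codebook = []
    for e in np.asarray(errors, dtype=float):
        if not codebook or min(abs(e - c) for c in codebook) > eps:
            codebook.append(e)
    return np.asarray(codebook)
```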
Affiliation(s)
- Yunfei Zheng
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Shiyuan Wang
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Badong Chen
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China.
15. Liu L, Chen CLP, Wang Y. Modal Regression-Based Graph Representation for Noise Robust Face Hallucination. IEEE Transactions on Neural Networks and Learning Systems 2023;34:2490-2502. PMID: 34487500. DOI: 10.1109/tnnls.2021.3106773
Abstract
Manifold learning-based face hallucination technologies have been widely developed during the past decades. However, conventional learning methods often become ineffective in noisy environments because the least-squares regression they employ for error modeling usually generates distorted representations for noisy inputs. To solve this problem, in this article we propose a modal regression-based graph representation (MRGR) model for noisy face hallucination. In MRGR, the modal regression-based function is incorporated into a graph learning framework to improve the resolution of noisy face images. Specifically, the modal regression-induced metric is used instead of the least-squares metric to regularize the encoding errors, which allows MRGR to be robust against noise with uncertain distributions. Moreover, a graph representation is learned from the feature space to exploit the inherent topological structure of the patch manifold for data representation, resulting in more accurate reconstruction coefficients. Besides, for noisy color face hallucination, MRGR is extended into the quaternion space (MRGR-Q), where the abundant correlations among different color channels can be well preserved. Experimental results on both grayscale and color face images demonstrate the superiority of MRGR and MRGR-Q over several state-of-the-art methods.
16. Han S, Zhu K, Zhou M, Liu X. Evolutionary Weighted Broad Learning and Its Application to Fault Diagnosis in Self-Organizing Cellular Networks. IEEE Transactions on Cybernetics 2023;53:3035-3047. PMID: 35113791. DOI: 10.1109/tcyb.2021.3126711
Abstract
As a novel neural network-based learning framework, the broad learning system (BLS) has attracted much attention due to its excellent performance on regression and balanced classification problems. However, it is unsuitable for imbalanced data classification because it treats each class in an imbalanced dataset equally. To address this issue, this work proposes a weighted BLS (WBLS) in which the weight assigned to each class depends on the number of samples in that class. To further boost classification performance, an improved differential evolution algorithm is proposed to automatically optimize the WBLS parameters, including both the BLS parameters and the newly introduced weights. We first optimize the parameters on a training dataset and then apply them to WBLS on a test dataset. Experiments on 20 imbalanced classification problems show that the proposed method achieves higher classification accuracy than the other methods in terms of several widely used performance metrics. Finally, it is applied to fault diagnosis in self-organizing cellular networks to further show its applicability to industrial problems.
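A minimal sketch of the class-dependent weighting idea, assuming a simple inverse-frequency rule; WBLS itself tunes the weights with differential evolution rather than fixing them.

```python
import numpy as np

def class_weights(y):
    # One plausible weighting rule: each class gets a weight inversely
    # proportional to its sample count, so minority classes contribute as
    # much to the weighted objective as majority ones.
    classes, counts = np.unique(y, return_counts=True)
    per_class = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
    return np.array([per_class[c] for c in y])
```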
17. Zheng Y, Wang S, Chen B. Identification of Hammerstein Systems with Random Fourier Features and Kernel Risk Sensitive Loss. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11191-7
18. Chu F, Wang G, Wang J, Chen CP, Wang X. Learning broad learning system with controllable sparsity through L0 regularization. Appl Soft Comput 2023. DOI: 10.1016/j.asoc.2023.110068
19. Correntropy based Elman neural network for dynamic data reconciliation with gross errors. J Taiwan Inst Chem Eng 2022. DOI: 10.1016/j.jtice.2022.104568
20. Guo Y, Zhao L, Shi Y, Zhang X, Du S, Wang F. Adaptive weighted robust iterative closest point. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.08.047
21. An effective and efficient broad-based ensemble learning model for moderate-large scale image recognition. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10263-9
22. Cauchy regularized broad learning system for noisy data regression. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.04.051
23. Broad stochastic configuration network for regression. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108403
24. H-BLS: a hierarchical broad learning system with deep and sparse feature learning. Appl Intell 2022. DOI: 10.1007/s10489-022-03498-0
25. Yao L, Wong PK, Zhao B, Wang Z, Lei L, Wang X, Hu Y. Cost-Sensitive Broad Learning System for Imbalanced Classification and Its Medical Application. Mathematics 2022;10:829. DOI: 10.3390/math10050829
Abstract
As an effective and efficient discriminative learning method, the broad learning system (BLS) has received increasing attention due to its outstanding performance without large computational resources. The standard BLS is derived under the minimum mean square error (MMSE) criterion, but MMSE performs poorly when dealing with imbalanced data, which is widely encountered in real-world applications. To address this issue, a novel cost-sensitive BLS algorithm (CS-BLS) is proposed. Many variations of the CS-BLS can be adopted; this paper analyzes CS-BLS with weighted cross-entropy. Weighted penalty factors are used in CS-BLS to constrain the contribution of each sample in different classes, with samples in minority classes allocated higher weights to increase their contributions. Four different weight calculation methods are adopted in the CS-BLS, yielding four CS-BLS methods: Log-CS-BLS, Lin-CS-BLS, Sqr-CS-BLS, and EN-CS-BLS. Experiments on artificially imbalanced versions of the MNIST and small NORB datasets are first conducted and compared with the standard BLS. The results show that the proposed CS-BLS methods have better generalization and robustness than the standard BLS. Experiments on a real ultrasound breast image dataset then demonstrate that the proposed CS-BLS methods are effective in actual medical diagnosis.
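A minimal sketch of the weighted cross-entropy idea, assuming per-class penalty factors have already been computed by one of the four rules; the function name and array layout are illustrative.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights, eps=1e-12):
    # Weighted cross-entropy (sketch): per-class penalty factors scale each
    # sample's log-loss, so errors on minority classes cost more.
    # probs: (N, C) predicted probabilities; labels: (N,) integer classes;
    # class_weights: (C,) penalty factor per class.
    n = len(labels)
    picked = probs[np.arange(n), labels]
    return float(-np.mean(class_weights[labels] * np.log(picked + eps)))
```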
Affiliation(s)
- Liang Yao
- Department of Electromechanical Engineering, University of Macau, Taipa, Macau 999078, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Pak Kin Wong
- Department of Electromechanical Engineering, University of Macau, Taipa, Macau 999078, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ziwen Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Long Lei
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaozheng Wang
- Department of Electromechanical Engineering, University of Macau, Taipa, Macau 999078, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Pazhou Lab, Guangzhou 510320, China
26. Continuous Control Strategy of Planar 3-Linkage Underactuated Manipulator Based on Broad Neural Network. Actuators 2021. DOI: 10.3390/act10100249
Abstract
For the position control of a planar 3-linkage underactuated manipulator (PTUM) with a passive first linkage, a continuous control strategy is developed in this paper. In particular, a broad neural network (BNN)-based model is first established to accurately describe the motion coupling relationship between the passive linkage and the second linkage. Based on this model, the target angles of all linkages are calculated by the particle swarm optimization algorithm from the start states of the linkages and the target position of the PTUM. The target angles of the active linkages are then achieved directly by their respective actuators, and that of the passive linkage is achieved through the rotation of the second linkage. Several experiments verify the effectiveness of this strategy.