1. Wang X, Zhao W, Tang JN, Dai ZB, Feng YN. Evolution algorithm with adaptive genetic operator and dynamic scoring mechanism for large-scale sparse many-objective optimization. Sci Rep 2025;15:9267. PMID: 40102468; PMCID: PMC11920420; DOI: 10.1038/s41598-025-91245-z.
Abstract
Large-scale sparse multi-objective optimization problems are prevalent in numerous real-world scenarios, such as neural network training, sparse regression, pattern mining, and critical node detection, where Pareto optimal solutions exhibit sparse characteristics. Ordinary large-scale multi-objective optimization algorithms apply undifferentiated update operations to all decision variables, which reduces search efficiency, so the Pareto solutions they obtain fail to meet the sparsity requirements. SparseEA is capable of generating sparse solutions and calculating a score for each decision variable, which serves as a basis for crossover and mutation in the subsequent evolutionary process. However, the scores remain unchanged during iteration, which restricts the sparse optimization ability of the algorithm. To address this problem, this paper proposes an evolutionary algorithm with an adaptive genetic operator and a dynamic scoring mechanism for large-scale sparse many-objective optimization (SparseEA-AGDS). Within the SparseEA framework, the proposed adaptive genetic operator and dynamic scoring mechanism adaptively adjust the probability of crossover and mutation operations based on the fluctuating non-dominated front levels of individuals, while concurrently updating the scores of decision variables so that superior individuals gain additional genetic opportunities. Moreover, to strengthen the algorithm's capability on many-objective problems, a reference point-based environmental selection strategy is incorporated. Comparative experimental results demonstrate that SparseEA-AGDS outperforms five other algorithms in terms of convergence and diversity on the many-objective SMOP benchmark suite and also yields superior sparse Pareto optimal solutions.
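To make the scoring idea in this abstract concrete, the sketch below shows one hedged reading of a SparseEA-style two-layer encoding in which each decision variable carries a score, the score is updated from an individual's non-dominated front level, and bit-flip mutation on the binary mask favours high-scoring variables. The score-update rule, the rank-to-reward mapping, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def update_scores(scores, masks, fronts, lr=0.1):
    """Hedged sketch of a dynamic scoring update.

    scores : (D,) running score per decision variable
    masks  : (N, D) binary layer of the two-layer encoding
    fronts : (N,) non-dominated front index per individual (0 = best)
    """
    # Individuals in better fronts contribute a larger reward (assumed mapping).
    reward = 1.0 / (1.0 + fronts)
    # Credit each variable by how often well-ranked individuals switch it on.
    credit = (reward[:, None] * masks).mean(axis=0)
    # Exponential moving average keeps the scores dynamic across generations.
    return (1.0 - lr) * scores + lr * credit

def score_guided_mutation(mask, scores, rate=0.05, rng=np.random):
    """Flip mask bits with probabilities biased by the variable scores."""
    p_on  = rate * scores / (scores.sum() + 1e-12) * len(scores)   # favour high-score vars
    p_off = rate * (1.0 - scores / (scores.max() + 1e-12))         # prune low-score vars
    flip_on  = (mask == 0) & (rng.random(mask.shape) < p_on)
    flip_off = (mask == 1) & (rng.random(mask.shape) < p_off)
    child = mask.copy()
    child[flip_on] = 1
    child[flip_off] = 0
    return child

# Toy usage: 8 individuals, 20 sparse decision variables.
rng = np.random.default_rng(0)
masks  = (rng.random((8, 20)) < 0.2).astype(int)
fronts = rng.integers(0, 3, size=8)
scores = np.full(20, 0.5)
scores = update_scores(scores, masks, fronts)
child  = score_guided_mutation(masks[0], scores, rng=rng)
```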
Affiliation(s)
- Xia Wang
  - School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China.
  - Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China.
- Wei Zhao
  - School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China.
  - Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China.
- Jia-Ning Tang
  - School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China.
  - Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China.
- Zhong-Bin Dai
  - Nanjing Branch of China Telecom Co., Ltd, Nanjing, 210000, China.
- Ya-Ning Feng
  - School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China.
  - Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China.
2. Jiao R, Xue B, Zhang M. A Tri-Objective Method for Bi-Objective Feature Selection in Classification. Evolutionary Computation 2024;32:217-248. PMID: 37463437; DOI: 10.1162/evco_a_00339.
Abstract
Minimizing the number of selected features and maximizing the classification performance are two main objectives in feature selection, which can be formulated as a bi-objective optimization problem. Due to the complex interactions between features, a solution (i.e., feature subset) with poor objective values does not mean that all the features it selects are useless, as some of them combined with other complementary features can greatly improve the classification performance. Thus, it is necessary to consider not only the performance of feature subsets in the objective space, but also their differences in the search space, to explore more promising feature combinations. To this end, this paper proposes a tri-objective method for bi-objective feature selection in classification, which solves a bi-objective feature selection problem as a tri-objective problem by considering the diversity (differences) between feature subsets in the search space as the third objective. The selection based on the converted tri-objective method can maintain a balance between minimizing the number of selected features, maximizing the classification performance, and exploring more promising feature subsets. Furthermore, a novel initialization strategy and an offspring reproduction operator are proposed to promote the diversity of feature subsets in the objective space and improve the search ability, respectively. The proposed algorithm is compared with five multiobjective-based feature selection methods, six typical feature selection methods, and two peer methods with diversity as a helper objective. Experimental results on 20 real-world classification datasets suggest that the proposed method outperforms the compared methods in most scenarios.
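As an illustration of the tri-objective formulation described above, the sketch below evaluates one binary feature mask under three objectives: selection ratio, wrapper classification error, and a search-space diversity term. The diversity objective is instantiated here as the negated mean Hamming distance to the rest of the population, which is an assumed choice; the classifier and all names are likewise illustrative rather than the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def tri_objectives(mask, peer_masks, X, y):
    """Evaluate one feature subset under three minimized objectives:
    f1 = fraction of selected features,
    f2 = KNN wrapper classification error,
    f3 = negated mean Hamming distance to the peer subsets (assumed
         instantiation of the search-space diversity objective)."""
    n_selected = mask.sum()
    f1 = n_selected / mask.size
    if n_selected == 0:
        f2 = 1.0                      # empty subset: worst possible error
    else:
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                              X[:, mask.astype(bool)], y, cv=3).mean()
        f2 = 1.0 - acc
    hamming = np.abs(peer_masks - mask).mean(axis=1)   # distance to each peer
    f3 = -hamming.mean()
    return f1, f2, f3

# Toy usage on a synthetic classification dataset.
X, y = make_classification(n_samples=150, n_features=25, random_state=0)
rng = np.random.default_rng(0)
population_masks = (rng.random((10, 25)) < 0.3).astype(int)
f1, f2, f3 = tri_objectives(population_masks[0], population_masks[1:], X, y)
```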
Affiliation(s)
- Ruwang Jiao
  - School of Engineering and Computer Science, Victoria University of Wellington, Wellington, 6140, New Zealand.
- Bing Xue
  - School of Engineering and Computer Science, Victoria University of Wellington, Wellington, 6140, New Zealand.
- Mengjie Zhang
  - School of Engineering and Computer Science, Victoria University of Wellington, Wellington, 6140, New Zealand.
3. Li L, Li Y, Lin Q, Liu S, Zhou J, Ming Z, Coello Coello CA. Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization. IEEE Transactions on Cybernetics 2024;54:3502-3515. PMID: 37486827; DOI: 10.1109/tcyb.2023.3287596.
Abstract
The competitive swarm optimizer (CSO) classifies swarm particles into loser and winner particles and then uses the winner particles to efficiently guide the search of the loser particles. This approach has very promising performance in solving large-scale multiobjective optimization problems (LMOPs). However, most studies of CSOs ignore the evolution of the winner particles, although their quality is very important for the final optimization performance. Aiming to fill this research gap, this article proposes a new neural net-enhanced CSO for solving LMOPs, called NN-CSO, which not only guides the loser particles via the original CSO strategy, but also applies our trained neural network (NN) model to evolve winner particles. First, the swarm particles are classified into winner and loser particles by the pairwise competition. Then, the loser particles and winner particles are, respectively, treated as the input and desired output to train the NN model, which tries to learn promising evolutionary dynamics by driving the loser particles toward the winners. Finally, when model training is complete, the winner particles are evolved by the well-trained NN model, while the loser particles are still guided by the winner particles to maintain the search pattern of CSOs. To evaluate the performance of our designed NN-CSO, several LMOPs with up to ten objectives and 1000 decision variables are adopted, and the experimental results show that our designed NN model can significantly improve the performance of CSOs and shows some advantages over several state-of-the-art large-scale multiobjective evolutionary algorithms as well as over model-based evolutionary algorithms.
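The sketch below illustrates the two ingredients this abstract relies on: the pairwise competition and loser update of a standard competitive swarm optimizer, and the collection of (loser, winner) pairs that a surrogate neural network could be trained on before it is applied to the winners. A scalar fitness, the parameter names, and the way training pairs are gathered are assumptions for illustration; the NN model itself is not reproduced here.

```python
import numpy as np

def pairwise_competition(fitness, rng):
    """Randomly pair particles (even swarm size assumed) and return
    (winner_idx, loser_idx) per pair, using a scalar fitness for illustration."""
    idx = rng.permutation(len(fitness))
    pairs = idx.reshape(-1, 2)
    better_first = fitness[pairs[:, 0]] <= fitness[pairs[:, 1]]
    winners = np.where(better_first, pairs[:, 0], pairs[:, 1])
    losers  = np.where(better_first, pairs[:, 1], pairs[:, 0])
    return winners, losers

def cso_loser_update(X, V, winners, losers, rng, phi=0.1):
    """Standard CSO move: each loser learns from its winner and the swarm mean."""
    r1, r2, r3 = (rng.random((len(losers), X.shape[1])) for _ in range(3))
    mean = X.mean(axis=0)
    V[losers] = (r1 * V[losers]
                 + r2 * (X[winners] - X[losers])
                 + phi * r3 * (mean - X[losers]))
    X[losers] = X[losers] + V[losers]
    return X, V

# Hedged reading of NN-CSO's training data: losers as inputs, their winners as
# targets, so a regressor can learn a "move toward better regions" mapping
# that is later applied to evolve the winner particles.
rng = np.random.default_rng(1)
X = rng.random((100, 1000)); V = np.zeros_like(X)
fitness = rng.random(100)                       # placeholder scalar fitness
winners, losers = pairwise_competition(fitness, rng)
train_inputs, train_targets = X[losers].copy(), X[winners].copy()
X, V = cso_loser_update(X, V, winners, losers, rng)
```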
4. Liu T, Wu Y, Ye A, Cao L, Cao Y. Two-stage sparse multi-objective evolutionary algorithm for channel selection optimization in BCIs. Front Hum Neurosci 2024;18:1400077. PMID: 38841120; PMCID: PMC11150693; DOI: 10.3389/fnhum.2024.1400077.
Abstract
Background: Channel selection has become the pivotal issue affecting the widespread application of non-invasive brain-computer interface systems in the real world. However, constructing suitable multi-objective problem models alongside effective search strategies is a critical factor that determines the performance of multi-objective channel selection algorithms. This paper presents a two-stage sparse multi-objective evolutionary algorithm (TS-MOEA) to address channel selection problems in brain-computer interface systems.
Methods: In TS-MOEA, a two-stage framework consisting of an early and a late stage is adopted to prevent the algorithm from stagnating. The two stages concentrate on different multi-objective problem models, thereby balancing convergence and population diversity in TS-MOEA. Inspired by the sparsity of the channel correlation matrix, a sparse initialization operator that uses a domain-knowledge-based score assignment strategy for decision variables is introduced to generate the initial population. Moreover, a score-based mutation operator is utilized to enhance the search efficiency of TS-MOEA.
Results: The performance of TS-MOEA and five other state-of-the-art multi-objective algorithms was evaluated on a 62-channel EEG-based brain-computer interface system for fatigue detection tasks, and the results demonstrate the effectiveness of TS-MOEA.
Conclusion: The proposed two-stage framework helps TS-MOEA escape stagnation and facilitates a balance between diversity and convergence. Integrating the sparsity of the channel correlation matrix with problem-domain knowledge effectively reduces the computational complexity of TS-MOEA while enhancing its optimization efficiency.
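The abstract does not spell out how the correlation-based score assignment works, so the sketch below shows one hedged reading: each channel's score is derived from its mean absolute correlation with the other channels, and the initial sparse masks select channels with probability proportional to that score. The scoring rule, the channel count, and the parameter names are assumptions for illustration only.

```python
import numpy as np

def correlation_scores(eeg):
    """eeg: (samples, channels). Score each channel from the correlation matrix.
    Assumption: channels that are less redundant (lower mean |corr| with the
    rest) receive higher scores and are favoured during initialization."""
    corr = np.corrcoef(eeg, rowvar=False)                       # (C, C)
    redundancy = (np.abs(corr).sum(axis=0) - 1.0) / (corr.shape[0] - 1)
    return 1.0 - redundancy

def sparse_initialization(scores, pop_size, k, rng):
    """Build pop_size binary channel masks, each selecting about k channels,
    sampling channels with probability proportional to their score."""
    p = scores / scores.sum()
    masks = np.zeros((pop_size, len(scores)), dtype=int)
    for i in range(pop_size):
        chosen = rng.choice(len(scores), size=k, replace=False, p=p)
        masks[i, chosen] = 1
    return masks

# Toy usage with a synthetic 62-channel recording.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((500, 62))
masks = sparse_initialization(correlation_scores(eeg), pop_size=20, k=8, rng=rng)
```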
Affiliation(s)
- Tianyu Liu
  - School of Information Engineering, Shanghai Maritime University, Shanghai, China.
- Yu Wu
  - School of Information Engineering, Shanghai Maritime University, Shanghai, China.
- An Ye
  - School of Information Engineering, Shanghai Maritime University, Shanghai, China.
- Lei Cao
  - School of Information Engineering, Shanghai Maritime University, Shanghai, China.
- Yongnian Cao
  - Tiktok Incorporation, San Jose, CA, United States.
5. Zhang C, Xue Y, Neri F, Cai X, Slowik A. Multi-Objective Self-Adaptive Particle Swarm Optimization for Large-Scale Feature Selection in Classification. Int J Neural Syst 2024;34:2450014. PMID: 38352979; DOI: 10.1142/s012906572450014x.
Abstract
Feature selection (FS) is recognized for its role in enhancing the performance of learning algorithms, especially for high-dimensional datasets. In recent times, FS has been framed as a multi-objective optimization problem, leading to the application of various multi-objective evolutionary algorithms (MOEAs) to address it. However, the solution space expands exponentially with the dataset's dimensionality. Simultaneously, the extensive search space often results in numerous local optimal solutions due to a large proportion of unrelated and redundant features [H. Adeli and H. S. Park, Fully automated design of super-high-rise building structures by a hybrid ai model on a massively parallel machine, AI Mag. 17 (1996) 87-93]. Consequently, existing MOEAs struggle with local optima stagnation, particularly in large-scale multi-objective FS problems (LSMOFSPs). Different LSMOFSPs generally exhibit unique characteristics, yet most existing MOEAs rely on a single candidate solution generation strategy (CSGS), which may be less efficient for diverse LSMOFSPs [H. S. Park and H. Adeli, Distributed neural dynamics algorithms for optimization of large steel structures, J. Struct. Eng. ASCE 123 (1997) 880-888; M. Aldwaik and H. Adeli, Advances in optimization of highrise building structures, Struct. Multidiscip. Optim. 50 (2014) 899-919; E. G. González, J. R. Villar, Q. Tan, J. Sedano and C. Chira, An efficient multi-robot path planning solution using a* and coevolutionary algorithms, Integr. Comput. Aided Eng. 30 (2022) 41-52]. Moreover, selecting an appropriate MOEA and determining its corresponding parameter values for a specified LSMOFSP is time-consuming. To address these challenges, a multi-objective self-adaptive particle swarm optimization (MOSaPSO) algorithm is proposed, combined with a rapid nondominated sorting approach. MOSaPSO employs a self-adaptive mechanism, along with five modified efficient CSGSs, to generate new solutions. Experiments were conducted on ten datasets, and the results demonstrate that the number of features is effectively reduced by MOSaPSO while lowering the classification error rate. Furthermore, superior performance is observed in comparison to its counterparts on both the training and test sets, with advantages becoming increasingly evident as the dimensionality increases.
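The abstract mentions a self-adaptive mechanism that chooses among five modified candidate solution generation strategies (CSGSs). The sketch below shows a generic success-rate-based strategy selection, which is an assumed instantiation of such a mechanism; the two placeholder strategies, the decay factor, and all class and function names are illustrative, not MOSaPSO's actual operators.

```python
import numpy as np

class SelfAdaptiveCSGS:
    """Hedged sketch of self-adaptive strategy selection: each CSGS is picked
    with probability proportional to its recent success rate."""

    def __init__(self, strategies, memory=0.9):
        self.strategies = strategies
        self.success = np.ones(len(strategies))   # optimistic initial statistics
        self.trials = np.ones(len(strategies))
        self.memory = memory

    def pick(self, rng):
        rates = self.success / self.trials
        return rng.choice(len(self.strategies), p=rates / rates.sum())

    def generate(self, particle, swarm, rng):
        k = self.pick(rng)
        return k, self.strategies[k](particle, swarm, rng)

    def feedback(self, k, improved):
        # Decay old statistics, then record whether the offspring improved.
        self.success[k] = self.memory * self.success[k] + float(improved)
        self.trials[k] = self.memory * self.trials[k] + 1.0

# Placeholder strategies: flip one random bit / copy a bit from a random peer.
def flip_one(particle, swarm, rng):
    child = particle.copy(); j = rng.integers(len(child)); child[j] ^= 1; return child

def copy_bit(particle, swarm, rng):
    child = particle.copy(); j = rng.integers(len(child))
    child[j] = swarm[rng.integers(len(swarm)), j]; return child

rng = np.random.default_rng(0)
swarm = rng.integers(0, 2, size=(30, 200))        # binary feature-selection swarm
adaptive = SelfAdaptiveCSGS([flip_one, copy_bit])
k, child = adaptive.generate(swarm[0], swarm, rng)
adaptive.feedback(k, improved=True)
```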
Affiliation(s)
- Chenyi Zhang
  - School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, P. R. China.
- Yu Xue
  - School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, P. R. China.
- Ferrante Neri
  - NICE Research Group, School of Computer Science and Electronic Engineering, University of Surrey, Guildford, GU2 7XS, UK.
- Xu Cai
  - School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, P. R. China.
- Adam Slowik
  - Department of Electronics and Computer Science, Koszalin University of Technology, Koszalin 75-453, Poland.
6. Jiao R, Xue B, Zhang M. Benefiting From Single-Objective Feature Selection to Multiobjective Feature Selection: A Multiform Approach. IEEE Transactions on Cybernetics 2023;53:7773-7786. PMID: 36346857; DOI: 10.1109/tcyb.2022.3218345.
Abstract
Evolutionary multiobjective feature selection (FS) has gained increasing attention in recent years. However, it still faces some challenges, for example, the frequently appeared duplicated solutions in either the search space or the objective space lead to the diversity loss of the population, and the huge search space results in the low search efficiency of the algorithm. Minimizing the number of selected features and maximizing the classification performance are two major objectives in FS. Usually, the fitness function of a single-objective FS problem linearly aggregates these two objectives through a weighted sum method. Given a predefined direction (weight) vector, the single-objective FS task can explore the specified direction or area extensively. Different direction vectors result in different search directions in the objective space. Motivated by this, this article proposes a multiform framework, which solves a multiobjective FS task combined with its auxiliary single-objective FS tasks in a multitask environment. By setting different direction vectors, promising feature subsets from single-objective FS tasks can be utilized, to boost the evolutionary search of the multiobjective FS task. By comparing with five classical and state-of-the-art multiobjective evolutionary algorithms, as well as four well-performing FS algorithms, the effectiveness and efficiency of the proposed method are verified via extensive experiments on 18 classification datasets. Furthermore, the effectiveness of the proposed method is also investigated in a noisy environment.
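The weighted-sum aggregation described above is easy to show concretely: each direction vector defines one auxiliary single-objective FS task over the same bi-objective evaluation. The wrapper classifier, the specific direction vectors, and the helper names below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fs_objectives(mask, X, y):
    """Bi-objective FS: (selection ratio, classification error), both minimized."""
    if mask.sum() == 0:
        return 1.0, 1.0
    acc = cross_val_score(LogisticRegression(max_iter=200),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return mask.sum() / mask.size, 1.0 - acc

def weighted_sum_fitness(mask, X, y, w):
    """Auxiliary single-objective task: linear aggregation with direction w.
    Different (w1, w2) vectors emphasise sparsity vs. accuracy, which is the
    multiform idea of searching several directions in parallel."""
    f1, f2 = fs_objectives(mask, X, y)
    return w[0] * f1 + w[1] * f2

# A few illustrative direction vectors (assumed, not the paper's setting).
directions = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]
X, y = make_classification(n_samples=120, n_features=30, random_state=0)
mask = (np.random.default_rng(0).random(30) < 0.3).astype(int)
scores = [weighted_sum_fitness(mask, X, y, w) for w in directions]
```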
7. Zhou X, Cai X, Zhang H, Zhang Z, Jin T, Chen H, Deng W. Multi-strategy competitive-cooperative co-evolutionary algorithm and its application. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.142.
8. Ding Z, Cao L, Chen L, Sun D, Zhang X, Tao Z. Large-scale multimodal multiobjective evolutionary optimization based on hybrid hierarchical clustering. Knowl Based Syst 2023. DOI: 10.1016/j.knosys.2023.110398.
9. Ren J, Qiu F, Hu H. Multiple sparse detection-based evolutionary algorithm for large-scale sparse multiobjective optimization problems. Complex Intell Syst 2023. DOI: 10.1007/s40747-022-00963-8.
Abstract
Sparse multiobjective optimization problems are common in practical applications. Such problems are characterized by large-scale decision variables and sparse optimal solutions. General large-scale multiobjective optimization problems (LSMOPs) have been extensively studied for many years and can be solved well by many excellent customized algorithms. However, when these algorithms are used to deal with sparse LSMOPs, they often encounter difficulties because the sparse nature of the problem is not considered. Therefore, aiming at sparse LSMOPs, an algorithm based on multiple sparse detection is proposed in this paper. The algorithm applies an adaptive sparse genetic operator that can generate sparse solutions by detecting the sparsity of individuals. To remedy the deficiency of sparse detection caused by purely local detection, an enhanced sparse detection (ESD) strategy is proposed, which uses binary coefficient vectors to integrate the masks of nondominated solutions. In essence, the masks are globally and deeply optimized through the coefficient vectors to enhance the sparsity of the solutions. In addition, the algorithm adopts an improved weighted optimization strategy to fully optimize the key nonzero variables, balancing exploration and optimization. Finally, the proposed algorithm, named MOEA-ESD, is compared with current state-of-the-art algorithms to verify its effectiveness.
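The abstract only outlines how coefficient vectors integrate the masks of nondominated solutions, so the sketch below is a deliberately loose reading: a binary coefficient vector selects which nondominated masks participate, and the selected masks are fused element-wise by majority vote. The fusion rule and all names here are assumptions, not MOEA-ESD's actual operator.

```python
import numpy as np

def integrate_masks(nd_masks, coeff, threshold=0.5):
    """Hedged reading of enhanced sparse detection: `coeff` is a binary
    coefficient vector selecting which nondominated masks take part, and the
    selected masks are fused by element-wise majority vote (assumed rule)."""
    selected = nd_masks[coeff.astype(bool)]
    if len(selected) == 0:
        return np.zeros(nd_masks.shape[1], dtype=int)
    return (selected.mean(axis=0) >= threshold).astype(int)

rng = np.random.default_rng(0)
nd_masks = (rng.random((6, 50)) < 0.15).astype(int)   # masks of nondominated solutions
coeff = rng.integers(0, 2, size=6)                    # binary coefficient vector
fused = integrate_masks(nd_masks, coeff)
```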
10. Yin F, Zhang H, Qi A, Zhu Z, Yang L, Wen G, Xie W. An exploratory study of CT radiomics using differential network feature selection for WHO/ISUP grading and progression-free survival prediction of clear cell renal cell carcinoma. Front Oncol 2022;12:979613. DOI: 10.3389/fonc.2022.979613.
Abstract
Objectives: To explore the feasibility of predicting the World Health Organization/International Society of Urological Pathology (WHO/ISUP) grade and progression-free survival (PFS) of clear cell renal cell carcinoma (ccRCC) using radiomics features (RFs) selected by a differential network feature selection (FS) method based on the maximum-entropy probability model (MEPM).
Methods: 175 ccRCC patients were divided into a training set (125) and a test set (50). WHO/ISUP grade prediction models were constructed for the non-contrast phase (NCP), cortico-medullary phase, nephrographic phase, excretory phase, and all phases combined, based on the new differential network FS method using the MEPM. The diagnostic performance of the best phase model was compared with other state-of-the-art machine learning models and with clinical models. The RFs of the best phase model were used for survival analysis and visualized using risk scores and nomograms. The performance of the above models was assessed with both cross-validation and independent validation and checked by the Hosmer-Lemeshow test.
Results: The NCP RFs model was the best phase model, with an AUC of 0.89 in the test set, and performed better than the other machine learning models and the clinical models (all p < 0.05). Kaplan-Meier survival analysis, univariate and multivariate Cox regression, and risk score analyses showed that the NCP RFs could predict PFS well (almost all p < 0.05). The nomogram model incorporated the best two RFs and showed good discrimination, with a C-index of 0.71 and 0.69 in the training and test sets, respectively, and good calibration.
Conclusion: The NCP CT-based RFs selected by differential network FS could predict the WHO/ISUP grade and PFS of ccRCC.
11. Su Y, Jin Z, Tian Y, Zhang X, Tan KC. Comparing the Performance of Evolutionary Algorithms for Sparse Multi-Objective Optimization via a Comprehensive Indicator [Research Frontier]. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3180913.
Affiliation(s)
- Kay Chen Tan
  - The Hong Kong Polytechnic University, Hong Kong SAR.
12. Chen ZG, Zhan ZH, Kwong S, Zhang J. Evolutionary Computation for Intelligent Transportation in Smart Cities: A Survey [Review Article]. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3155330.
13. Cao R, Si L, Li X, Guang Y, Wang C, Tian Y, Pei X, Zhang X. A conjugate gradient-assisted multi-objective evolutionary algorithm for fluence map optimization in radiotherapy treatment. Complex Intell Syst 2022. DOI: 10.1007/s40747-022-00697-7.
Abstract
Intensity-modulated radiotherapy (IMRT) is one of the most widely applied techniques for cancer radiotherapy treatment. Fluence map optimization is an essential part of IMRT plan design and has a significant impact on the treatment effect. In fact, IMRT treatment planning is an inverse multi-objective optimization problem. Existing approaches to the fluence map optimization problem (FMOP) obtain a satisfactory treatment plan by trying different coupling weights, so the optimization process must be run many times and the weight setting depends entirely on the experience of a radiation physicist. To quickly obtain diverse high-quality radiotherapy plans, this paper formulates the FMOP as a three-objective optimization problem and proposes a conjugate gradient-assisted multi-objective evolutionary algorithm (CG-MOEA) to solve it. The proposed algorithm does not need coupling weights and can produce diverse radiotherapy plans within a single run. Moreover, the convergence speed is further accelerated by an adaptive local search strategy based on the conjugate gradient method. Compared with five state-of-the-art multi-objective evolutionary algorithms (MOEAs) and with the traditional method used to solve the FMOP, the proposed CG-MOEA obtains the best hypervolume (HV) values and dose-volume histogram (DVH) performance on five clinical cases in cancer radiotherapy, and it finds a diverse Pareto solution set from which a radiation physicist can select the best treatment plan.
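The conjugate gradient-assisted local search mentioned above can be illustrated by refining one fluence vector with a nonlinear CG solver applied to a weighted aggregation of the objectives. The three dose terms below are placeholders standing in for target coverage, organ-at-risk sparing, and smoothness, and the weighting and adaptation scheme are assumptions; the real dose models of CG-MOEA are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder differentiable objectives standing in for the FMOP dose terms.
def f_target(x):  return np.sum((x - 1.0) ** 2)
def f_oar(x):     return np.sum(np.maximum(x - 0.3, 0.0) ** 2)
def f_smooth(x):  return np.sum(np.diff(x) ** 2)

def cg_local_search(x, weights=(1.0, 1.0, 0.1), maxiter=20):
    """Refine one fluence vector with nonlinear conjugate gradient applied to
    a weighted aggregation of the objectives (assumed local-search form)."""
    agg = lambda z: (weights[0] * f_target(z)
                     + weights[1] * f_oar(z)
                     + weights[2] * f_smooth(z))
    res = minimize(agg, x, method='CG', options={'maxiter': maxiter})
    return np.clip(res.x, 0.0, None)      # fluence values stay non-negative

x0 = np.random.default_rng(0).random(64)  # toy fluence map with 64 beamlets
x_refined = cg_local_search(x0)
```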
14.
Abstract
Sparse large-scale multi-objective optimization problems (LSMOPs) widely exist in real-world applications; they involve a large number of decision variables and have sparse Pareto optimal solutions, i.e., most decision variables of these solutions are zero. In recent years, sparse LSMOPs have attracted increasing attention in the evolutionary computation community. However, the recently tailored algorithms for sparse LSMOPs put sparsity detection and maintenance in first place, so the nonzero variables can hardly be optimized sufficiently within a limited budget of function evaluations. To address this issue, this paper proposes to enhance the connection between the real variables and the binary variables within the two-layer encoding scheme with the assistance of variable grouping techniques. In this way, more effort can be devoted to the real part of the nonzero variables, achieving a balance between sparsity maintenance and variable optimization. According to experimental results on eight benchmark problems and three real-world applications, the proposed algorithm is superior to existing state-of-the-art evolutionary algorithms for sparse LSMOPs.
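The two-layer encoding and the grouping idea described above can be sketched as follows: a real vector and a binary vector together decode into a sparse solution, and each binary gene switches a whole group of real variables on or off. Equal-size contiguous groups and all names below are illustrative assumptions; the paper's grouping technique may differ.

```python
import numpy as np

def decode(real_vars, group_mask, groups):
    """Two-layer encoding with variable grouping: each binary gene controls a
    whole group of real variables (contiguous equal-size groups assumed)."""
    x = np.zeros_like(real_vars)
    for g, members in enumerate(groups):
        if group_mask[g]:
            x[members] = real_vars[members]   # only switched-on groups keep real values
    return x

D, n_groups = 1000, 50
groups = np.array_split(np.arange(D), n_groups)       # contiguous variable groups
rng = np.random.default_rng(0)
real_vars  = rng.uniform(-1.0, 1.0, D)                 # real layer
group_mask = (rng.random(n_groups) < 0.1).astype(int)  # sparse binary layer
x = decode(real_vars, group_mask, groups)              # sparse decision vector
```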