1. Wang X, Zhao W, Tang JN, Dai ZB, Feng YN. Evolution algorithm with adaptive genetic operator and dynamic scoring mechanism for large-scale sparse many-objective optimization. Sci Rep 2025;15:9267. PMID: 40102468; PMCID: PMC11920420; DOI: 10.1038/s41598-025-91245-z.
Abstract
Large-scale sparse multi-objective optimization problems are prevalent in numerous real-world scenarios, such as neural network training, sparse regression, pattern mining, and critical node detection, where the Pareto-optimal solutions are sparse. Ordinary large-scale multi-objective optimization algorithms apply undifferentiated update operations to all decision variables, which reduces search efficiency, so the Pareto solutions they obtain fail to meet the sparsity requirements. SparseEA is capable of generating sparse solutions and calculates a score for each decision variable, which serves as the basis for crossover and mutation in the subsequent evolutionary process. However, these scores remain unchanged throughout the iterations, which restricts the sparse optimization ability of the algorithm. To address this problem, this paper proposes an evolutionary algorithm with an adaptive genetic operator and a dynamic scoring mechanism for large-scale sparse many-objective optimization (SparseEA-AGDS). Within the SparseEA framework, the proposed adaptive genetic operator and dynamic scoring mechanism adaptively adjust the probability of crossover and mutation operations based on changes in individuals' non-dominated front levels, and concurrently update the scores of decision variables so that superior individuals gain additional genetic opportunities. Moreover, to strengthen the algorithm's capability on many-objective problems, a reference point-based environmental selection strategy is incorporated. Comparative experimental results demonstrate that SparseEA-AGDS outperforms five other algorithms in terms of convergence and diversity on the many-objective SMOP benchmark problems and also yields superior sparse Pareto-optimal solutions.
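A minimal Python sketch of the two ideas named in the abstract: an operator whose crossover and mutation rates react to a change in non-dominated rank, and a score vector over decision variables that is updated each generation. The function names, the rank-to-rate mapping, and the score update rule are illustrative assumptions, not the authors' published formulas.

```python
import numpy as np

def adaptive_rates(prev_rank, curr_rank, base_cx=0.9, base_mut=0.1):
    """Hypothetical adaptive operator rates: individuals whose non-dominated
    rank improved keep conservative rates; worsened ones get more mutation."""
    delta = curr_rank - prev_rank          # > 0 means the individual got worse
    cx = np.clip(base_cx - 0.1 * delta, 0.5, 1.0)
    mut = np.clip(base_mut + 0.1 * delta, 0.05, 0.5)
    return cx, mut

def update_variable_scores(scores, mask, curr_rank, lr=0.1):
    """Hypothetical dynamic scoring: variables that are non-zero in well-ranked
    individuals gain score (and are favoured by later crossover/mutation);
    variables active only in poorly ranked individuals lose score."""
    reward = 1.0 / (1.0 + curr_rank)       # rank 0 (first front) -> largest reward
    return scores + lr * reward * (mask.astype(float) - 0.5)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = np.zeros(10)                  # one score per decision variable
    mask = rng.integers(0, 2, 10)          # sparse binary mask of one individual
    print(adaptive_rates(prev_rank=2, curr_rank=0))
    print(update_variable_scores(scores, mask, curr_rank=0))
```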
Affiliation(s)
- Xia Wang
- School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China
- Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China
- Wei Zhao
- School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China
- Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China
- Jia-Ning Tang
- School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China
- Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China
- Zhong-Bin Dai
- Nanjing Branch of China Telecom Co., Ltd, Nanjing, 210000, China
- Ya-Ning Feng
- School of Electrical and Information Technology, Yunnan Minzu University, Kunming, 650504, China
- Yunnan Key Laboratory of Unmanned Autonomous System, Yunnan Minzu University, Kunming, 650504, China
2. Zhong R, Wang Z, Hussien AG, Houssein EH, Al-Shourbaji I, Elseify MA, Yu J. Success History Adaptive Competitive Swarm Optimizer with Linear Population Reduction: Performance benchmarking and application in eye disease detection. Comput Biol Med 2025;186:109587. PMID: 39753027; DOI: 10.1016/j.compbiomed.2024.109587.
Abstract
Eye disease detection has achieved significant advancements thanks to artificial intelligence (AI) techniques. However, the construction of high-accuracy predictive models still faces challenges, and one reason is the deficiency of the optimizer. This paper presents an efficient optimizer named Success History Adaptive Competitive Swarm Optimizer with Linear Population Reduction (L-SHACSO). Inspired by the effective success history adaptation scheme and linear population reduction strategy in Differential Evolution (DE), we introduce these techniques into CSO to enable the automatic and intelligent adjustment of hyper-parameters during optimization thereby balancing exploration and exploitation across different phases. To thoroughly investigate the performance of L-SHACSO, we conduct extensive numerical experiments on CEC2017, CEC2020, CEC2022, and eight engineering problems. State-of-the-art optimizers including jSO and L-SHADE-cnEpSin and recently proposed metaheuristic algorithms (MAs) such as RIME and the Parrot Optimizer (PO) are employed as competitors. Experimental results confirm the superiority of L-SHACSO across various optimization tasks. Furthermore, we integrate L-SHACSO into DenseNet and Extreme Learning Machine (ELM) and propose DenseNet-L-SHACSO-ELM for eye disease detection, where the features extracted by the pre-trained DenseNet are fed into L-SHACSO-optimized ELM for classification. Experiments on public datasets confirm the feasibility and effectiveness of our proposed model, which has great potential in real-world scenarios. The source code of this research is available at https://github.com/RuiZhong961230/L-SHACSO.
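The linear population reduction borrowed from L-SHADE is a well-defined rule: the population shrinks linearly from an initial size to a minimum size as the evaluation budget is consumed. A small Python sketch of that rule follows; the surrounding CSO machinery and the success-history parameter adaptation are omitted.

```python
import numpy as np

def linear_population_size(nfes, max_nfes, n_init=100, n_min=4):
    """L-SHADE-style linear population size reduction: the target size shrinks
    linearly from n_init to n_min as function evaluations are consumed."""
    return int(round(n_init + (n_min - n_init) * nfes / max_nfes))

def shrink_population(pop, fitness, new_size):
    """Keep the new_size best individuals (minimisation assumed)."""
    order = np.argsort(fitness)[:new_size]
    return pop[order], fitness[order]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pop = rng.random((100, 30))
    fit = rng.random(100)
    for nfes in (0, 25_000, 50_000, 100_000):
        size = linear_population_size(nfes, 100_000)
        pop, fit = shrink_population(pop, fit, min(size, len(pop)))
        print(nfes, size, pop.shape)
```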
Affiliation(s)
- Rui Zhong
- Information Initiative Center, Hokkaido University, Sapporo, Japan
- Zhongmin Wang
- College of Tropical Crops, Yunnan Agricultural University, Yunnan, China
- Abdelazim G Hussien
- Department of Computer and Information Science, Linköping University, Linköping, Sweden; Faculty of Science, Fayoum University, Faiyum, Egypt
- Essam H Houssein
- Faculty of Computers and Information, Minia University, Minia, Egypt
- Ibrahim Al-Shourbaji
- Department of Electrical and Electronics Engineering, Jazan University, Jazan, Saudi Arabia; Department of Computer Science, University of Hertfordshire, Hatfield, UK
- Mohamed A Elseify
- Department of Electrical Engineering, Faculty of Engineering, Al-Azhar University, Qena, Egypt
- Jun Yu
- Institute of Science and Technology, Niigata University, Niigata, Japan
3. Xiong L, Chen D, Zou F, Ge F, Liu F. A multi-population multi-stage adaptive weighted large-scale multi-objective optimization algorithm framework. Sci Rep 2024;14:14036. PMID: 38890399; PMCID: PMC11637062; DOI: 10.1038/s41598-024-64570-y.
Abstract
The weighted optimization framework (WOF) achieves dimensionality reduction by grouping variables and optimizing group weights, and it plays an important role in large-scale multi-objective optimization problems. However, because of issues such as duplicate weight vectors in the selection process and loss of population diversity, the algorithm is prone to falling into local optima. Therefore, this paper develops an algorithm framework called multi-population multi-stage adaptive weighted optimization (MPSOF) to improve the performance of WOF in two respects. First, multiple populations are employed to address insufficient algorithmic diversity while reducing the likelihood of converging to local optima. Second, a processing stage is incorporated into MPSOF, in which a certain number of individuals are adaptively selected for updating based on the weight information and evolutionary status of the different subpopulations, targeting different types of weights. This approach alleviates the impact of repeated weights on the diversity of newly generated individuals, avoids the drawback of easily converging to local optima when a single type of weight is used for updating, and effectively balances the diversity and convergence of the subpopulations. Three types of experiments on several typical benchmark function sets demonstrate that MPSOF outperforms the compared algorithms on three metrics: Inverted Generational Distance, Hypervolume, and Spacing.
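For context, the WOF idea that MPSOF builds on can be sketched compactly in Python: variables are grouped, and a short weight vector scales whole groups of a pivot solution, so the optimizer searches the low-dimensional weight space instead of the full decision space. The grouping and the [0, 1] box constraint below are illustrative assumptions.

```python
import numpy as np

def apply_group_weights(x, groups, weights):
    """WOF-style transformation: each decision variable is scaled by the weight
    of its group, so optimising the (few) weights moves the (many) variables."""
    scaled = x.copy()
    for g, w in enumerate(weights):
        scaled[groups == g] *= w
    return np.clip(scaled, 0.0, 1.0)        # assuming a [0, 1] box-constrained problem

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_vars, n_groups = 1000, 4
    x = rng.random(n_vars)                   # a pivot solution
    groups = rng.integers(0, n_groups, n_vars)    # hypothetical variable grouping
    weights = np.array([1.2, 0.8, 0.0, 1.0])      # low-dimensional weight vector
    print(apply_group_weights(x, groups, weights)[:8])
```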
Affiliation(s)
- Lixue Xiong
- School of Physics and Electronic Information, Huaibei Normal University, Huaibei, 235000, China
- Anhui Province Key Laboratory of Intelligent Computing and Applications, Huaibei Normal University, Huaibei, 235000, China
- Debao Chen
- School of Physics and Electronic Information, Huaibei Normal University, Huaibei, 235000, China
- Anhui Province Key Laboratory of Intelligent Computing and Applications, Huaibei Normal University, Huaibei, 235000, China
- School of Information Engineering, Suzhou University, Suzhou, 234000, China
- Feng Zou
- School of Physics and Electronic Information, Huaibei Normal University, Huaibei, 235000, China
- Anhui Province Key Laboratory of Intelligent Computing and Applications, Huaibei Normal University, Huaibei, 235000, China
- Fangzhen Ge
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, 235000, China
- Anhui Engineering Research Center for Intelligent Computing and Application on Cognitive Behavior (ICACB), Huaibei, 235000, China
- Fuqiang Liu
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, 235000, China
- School of Electronics and Information Engineering, TongJi University, Shanghai, 200092, China
4. Li L, Li Y, Lin Q, Liu S, Zhou J, Ming Z, Coello Coello CA. Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization. IEEE Trans Cybern 2024;54:3502-3515. PMID: 37486827; DOI: 10.1109/tcyb.2023.3287596.
Abstract
The competitive swarm optimizer (CSO) classifies swarm particles into loser and winner particles and then uses the winner particles to efficiently guide the search of the loser particles. This approach has very promising performance in solving large-scale multiobjective optimization problems (LMOPs). However, most studies of CSOs ignore the evolution of the winner particles, although their quality is very important for the final optimization performance. Aiming to fill this research gap, this article proposes a new neural net-enhanced CSO for solving LMOPs, called NN-CSO, which not only guides the loser particles via the original CSO strategy, but also applies our trained neural network (NN) model to evolve winner particles. First, the swarm particles are classified into winner and loser particles by the pairwise competition. Then, the loser particles and winner particles are, respectively, treated as the input and desired output to train the NN model, which tries to learn promising evolutionary dynamics by driving the loser particles toward the winners. Finally, when model training is complete, the winner particles are evolved by the well-trained NN model, while the loser particles are still guided by the winner particles to maintain the search pattern of CSOs. To evaluate the performance of our designed NN-CSO, several LMOPs with up to ten objectives and 1000 decision variables are adopted, and the experimental results show that our designed NN model can significantly improve the performance of CSOs and shows some advantages over several state-of-the-art large-scale multiobjective evolutionary algorithms as well as over model-based evolutionary algorithms.
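A single-objective Python sketch of the underlying CSO step (pairwise competition, loser pulled toward the winner and the swarm mean) may help readers unfamiliar with the optimizer; the multiobjective ranking and the neural-network update of winner particles described in the abstract are not shown.

```python
import numpy as np

def cso_step(pop, vel, fitness, phi=0.1, rng=None):
    """One competitive-swarm step (single-objective sketch, minimisation):
    particles are paired at random, the winner of each pair is kept unchanged,
    and the loser is pulled toward the winner and the swarm mean."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    mean = pop.mean(axis=0)
    idx = rng.permutation(n)
    for a, b in idx.reshape(-1, 2):
        w, l = (a, b) if fitness[a] <= fitness[b] else (b, a)
        r1, r2, r3 = rng.random((3, d))
        vel[l] = r1 * vel[l] + r2 * (pop[w] - pop[l]) + phi * r3 * (mean - pop[l])
        pop[l] = pop[l] + vel[l]
    return pop, vel

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pop = rng.uniform(-5, 5, (10, 4))
    vel = np.zeros_like(pop)
    sphere = lambda x: np.sum(x * x, axis=1)      # toy objective
    for _ in range(50):
        pop, vel = cso_step(pop, vel, sphere(pop), rng=rng)
    print(sphere(pop).min())
```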
5. Chen Y, Fang B, Meng F, Luo J, Luo X. Competitive Swarm Optimized SVD Clutter Filtering for Ultrafast Power Doppler Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2024;71:459-473. PMID: 38319765; DOI: 10.1109/tuffc.2024.3362967.
Abstract
Ultrafast power Doppler imaging (uPDI) can significantly increase the sensitivity of resolving small vascular paths in ultrasound. Clutter filtering is a fundamental and essential step in realizing uPDI, and it commonly uses singular value decomposition (SVD) to suppress clutter signals and noise. However, current SVD-based clutter filters that rely on two cutoffs cannot ensure sufficient separation of tissue, blood, and noise in uPDI. This article proposes a new competitive swarm-optimized SVD clutter filter to improve the quality of uPDI. Specifically, instead of using two cutoffs, the new filter introduces competitive swarm optimization (CSO) to search for the counterparts of blood signals in each singular value. We validate the CSO-SVD clutter filter on public in vivo datasets. The experimental results demonstrate that our method can achieve a higher contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and blood-to-clutter ratio (BCR) than state-of-the-art SVD-based clutter filters, showing a better balance between suppressing clutter signals and preserving blood signals. In particular, our CSO-SVD clutter filter improves CNR by 0.99 ± 0.08 dB, SNR by 0.79 ± 0.08 dB, and BCR by 1.95 ± 0.03 dB compared with a spatial-similarity-based SVD clutter filter on the in vivo rat brain bolus dataset.
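For reference, a conventional two-cutoff SVD clutter filter on a Casorati matrix can be sketched in a few lines of Python; the CSO-SVD method replaces the fixed cutoffs with an optimized per-singular-value selection, which is not reproduced here.

```python
import numpy as np

def svd_clutter_filter(iq, k_low, k_high):
    """Conventional two-cutoff SVD clutter filter: stack frames into a Casorati
    matrix, drop the low-order (tissue) and high-order (noise) singular
    components, and keep the middle band as the blood signal."""
    nz, nx, nt = iq.shape
    casorati = iq.reshape(nz * nx, nt)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    keep = np.zeros_like(s)
    keep[k_low:k_high] = 1.0                       # retain only the blood band
    blood = (u * (s * keep)) @ vh
    power_doppler = np.mean(np.abs(blood) ** 2, axis=1).reshape(nz, nx)
    return power_doppler

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    iq = rng.normal(size=(32, 32, 100)) + 1j * rng.normal(size=(32, 32, 100))
    print(svd_clutter_filter(iq, k_low=5, k_high=80).shape)
```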
6. Liu N, Pan JS, Liu G, Fu M, Kong Y, Hu P. A Multi-Objective Sine Cosine Algorithm Based on a Competitive Mechanism and Its Application in Engineering Design Problems. Biomimetics (Basel) 2024;9:115. PMID: 38392161; PMCID: PMC10887415; DOI: 10.3390/biomimetics9020115.
Abstract
There are a lot of multi-objective optimization problems (MOPs) in the real world, and many multi-objective evolutionary algorithms (MOEAs) have been presented to solve MOPs. However, obtaining non-dominated solutions that trade off convergence and diversity remains a major challenge for a MOEA. To solve this problem, this paper designs an efficient multi-objective sine cosine algorithm based on a competitive mechanism (CMOSCA). In the CMOSCA, the ranking relies on non-dominated sorting, and the crowding distance rank is utilized to choose the outstanding agents, which are employed to guide the evolution of the SCA. Furthermore, a competitive mechanism stemming from the shift-based density estimation approach is adopted to devise a new position updating operator for creating offspring agents. In each competition, two agents are randomly selected from the outstanding agents, and the winner of the competition is integrated into the position update scheme of the SCA. The performance of our proposed CMOSCA was first verified on three benchmark suites (i.e., DTLZ, WFG, and ZDT) with diversity characteristics and compared with several MOEAs. The experimental results indicated that the CMOSCA can obtain a Pareto-optimal front with better convergence and diversity. Finally, the CMOSCA was applied to deal with several engineering design problems taken from the literature, and the statistical results demonstrated that the CMOSCA is an efficient and effective approach for engineering design problems.
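A small Python sketch of the two ingredients named in the abstract: the standard sine cosine position update toward a destination point, and a pairwise competition that picks that destination. The density values fed to the competition are placeholders for the shift-based density estimation used in the paper.

```python
import numpy as np

def sca_update(x, leader, r1, rng):
    """Standard sine cosine position update toward a leader/destination point."""
    d = x.shape[0]
    r2 = rng.uniform(0, 2 * np.pi, d)
    r3 = rng.uniform(0, 2, d)
    r4 = rng.random(d)
    step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * leader - x)
    return x + r1 * step

def competitive_leader(cand_a, cand_b, density_a, density_b):
    """Hypothetical competition: the agent with the lower density estimate wins
    and is used as the destination of the SCA update."""
    return cand_a if density_a <= density_b else cand_b

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = rng.random(6)
    a, b = rng.random(6), rng.random(6)
    leader = competitive_leader(a, b, density_a=0.3, density_b=0.7)
    print(sca_update(x, leader, r1=1.5, rng=rng))
```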
Affiliation(s)
- Nengxian Liu
- College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
- Jeng-Shyang Pan
- School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Genggeng Liu
- College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
- Mingjian Fu
- College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
- Yanyan Kong
- School of Information Science and Engineering, ZheJiang Sci-Tech University, Hangzhou 310018, China
- Pei Hu
- School of Computer and Software, Nanyang Institute of Technology, Nanyang 473004, China
7. Zhang K, Shen C, Yen GG. Multipopulation-Based Differential Evolution for Large-Scale Many-Objective Optimization. IEEE Trans Cybern 2023;53:7596-7608. PMID: 35731754; DOI: 10.1109/tcyb.2022.3178929.
Abstract
In recent years, numerous efficient many-objective evolutionary optimization algorithms have been proposed to find well-converged and well-distributed nondominated optimal solutions. However, their scalability may deteriorate drastically when solving large-scale many-objective optimization problems (LSMaOPs). When facing a high-dimensional solution space with more than 100 decision variables, some of them may lose diversity and become trapped in local optima, while others may achieve poor convergence performance. This article proposes a multipopulation-based differential evolution algorithm, called LSMaODE, which can solve LSMaOPs efficiently and effectively. In order to exploit and explore the exponentially large decision space, the proposed algorithm divides the population into two groups of subpopulations, which are optimized with different strategies. First, the randomized coordinate descent technique is applied to 10% of the individuals to exploit the decision variables independently. This subpopulation maintains diversity in the decision space to avoid premature convergence to local optima. Second, the remaining 90% of the individuals are optimized with a nondominated guided random interpolation strategy, which interpolates each individual among three randomly chosen nondominated solutions. This strategy guides the population to converge toward the nondominated solutions quickly while maintaining a good distribution in objective space. Finally, the proposed LSMaODE is evaluated on the LSMOP test suites for scalability in both the decision and objective dimensions. Its performance is compared against five state-of-the-art large-scale many-objective evolutionary algorithms. The experimental results show that LSMaODE provides highly competitive performance.
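A speculative Python sketch of the interpolation idea for the 90% subpopulation: an offspring is formed as a random convex combination of three nondominated solutions. The exact interpolation formula in LSMaODE may differ; this only illustrates the mechanism described in the abstract.

```python
import numpy as np

def guided_random_interpolation(nd_archive, rng):
    """Hypothetical sketch: pick three non-dominated solutions and generate an
    offspring as a random convex combination of them, pulling the population
    toward the current front."""
    a, b, c = nd_archive[rng.choice(len(nd_archive), 3, replace=False)]
    w = rng.dirichlet(np.ones(3))             # random weights summing to 1
    return w[0] * a + w[1] * b + w[2] * c

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    nd = rng.random((20, 100))                # non-dominated decision vectors
    print(guided_random_interpolation(nd, rng)[:5])
```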
8. Ji J, Zhao J, Lin Q, Tan KC. Competitive Decomposition-Based Multiobjective Architecture Search for the Dendritic Neural Model. IEEE Trans Cybern 2023;53:6829-6842. PMID: 35476557; DOI: 10.1109/tcyb.2022.3165374.
Abstract
The dendritic neural model (DNM) is computationally faster than other machine-learning techniques, because its architecture can be implemented by using logic circuits and its calculations can be performed entirely in binary form. To further improve the computational speed, a straightforward approach is to generate a more concise architecture for the DNM. Actually, the architecture search is a large-scale multiobjective optimization problem (LSMOP), where a large number of parameters need to be set with the aim of optimizing accuracy and structural complexity simultaneously. However, the issues of irregular Pareto front, objective discontinuity, and population degeneration strongly limit the performances of conventional multiobjective evolutionary algorithms (MOEAs) on the specific problem. Therefore, a novel competitive decomposition-based MOEA is proposed in this study, which decomposes the original problem into several constrained subproblems, with neighboring subproblems sharing overlapping regions in the objective space. The solutions in the overlapping regions participate in environmental selection for the neighboring subproblems and then propagate the selection pressure throughout the entire population. Experimental results demonstrate that the proposed algorithm can possess a more powerful optimization ability than the state-of-the-art MOEAs. Furthermore, both the DNM itself and its hardware implementation can achieve very competitive classification performances when trained by the proposed algorithm, compared with numerous widely used machine-learning approaches.
9. Gu Q, Sun Y, Wang Q, Chen L. A quadratic association vector and dynamic guided operator search algorithm for large-scale sparse multi-objective optimization problem. Appl Intell 2023. DOI: 10.1007/s10489-023-04500-z.
10. He C, Li L, Cheng R, Jin Y. Evolutionary multiobjective optimization via efficient sampling-based offspring generation. Complex Intell Syst 2023. DOI: 10.1007/s40747-023-00990-z.
Abstract
With the rising number of large-scale multiobjective optimization problems from academia and industries, some evolutionary algorithms (EAs) with different decision variable handling strategies have been proposed in recent years. They mainly emphasize the balance between convergence enhancement and diversity maintenance for multiobjective optimization but ignore the local search tailored for large-scale optimization. Consequently, most existing EAs can hardly obtain the global or local optima. To address this issue, we propose an efficient sampling-based offspring generation method for large-scale multiobjective optimization, where convergence enhancement and diversity maintenance, together with ad hoc local search, are considered. First, the decision variables are dynamically classified into two types for solving large-scale decision space in a divide-and-conquer manner. Then, a convergence-related sampling strategy is designed to handle those decision variables related to convergence enhancement. Two additional sampling strategies are proposed for diversity maintenance and local search, respectively. Experimental results on problems with up to 5000 decision variables have indicated the effectiveness of the algorithm in large-scale multiobjective optimization.
11. A Pearson correlation-based adaptive variable grouping method for large-scale multi-objective optimization. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.02.055.
12. Ren J, Qiu F, Hu H. Multiple sparse detection-based evolutionary algorithm for large-scale sparse multiobjective optimization problems. Complex Intell Syst 2023. DOI: 10.1007/s40747-022-00963-8.
Abstract
Sparse multiobjective optimization problems are common in practical applications. Such problems are characterized by large-scale decision variables and sparse optimal solutions. General large-scale multiobjective optimization problems (LSMOPs) have been extensively studied for many years. They can be well solved by many excellent custom algorithms. However, when these algorithms are used to deal with sparse LSMOPs, they often encounter difficulties because the sparse nature of the problem is not considered. Therefore, aiming at sparse LSMOPs, an algorithm based on multiple sparse detection is proposed in this paper. The algorithm applies an adaptive sparse genetic operator that can generate sparse solutions by detecting the sparsity of individuals. To improve the deficiency of sparse detection caused by local detection, an enhanced sparse detection (ESD) strategy is proposed in this paper. The strategy uses binary coefficient vectors to integrate the masks of nondominated solutions. Essentially, the mask is globally and deeply optimized by coefficient vectors to enhance the sparsity of the solutions. In addition, the algorithm adopts an improved weighted optimization strategy to fully optimize the key nonzero variables to balance exploration and optimization. Finally, the proposed algorithm is named MOEA-ESD and is compared to the current state-of-the-art algorithm to verify its effectiveness.
13. A dual decomposition strategy for large-scale multiobjective evolutionary optimization. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08133-0.
14. Li J, Ma Y, Gao R, Cao Z, Lim A, Song W, Zhang J. Deep Reinforcement Learning for Solving the Heterogeneous Capacitated Vehicle Routing Problem. IEEE Trans Cybern 2022;52:13572-13585. PMID: 34554923; DOI: 10.1109/tcyb.2021.3111082.
Abstract
Existing deep reinforcement learning (DRL)-based methods for solving the capacitated vehicle routing problem (CVRP) intrinsically cope with a homogeneous vehicle fleet, in which the fleet is assumed as repetitions of a single vehicle. Hence, their key to construct a solution solely lies in the selection of the next node (customer) to visit excluding the selection of vehicle. However, vehicles in real-world scenarios are likely to be heterogeneous with different characteristics that affect their capacity (or travel speed), rendering existing DRL methods less effective. In this article, we tackle heterogeneous CVRP (HCVRP), where vehicles are mainly characterized by different capacities. We consider both min-max and min-sum objectives for HCVRP, which aim to minimize the longest or total travel time of the vehicle(s) in the fleet. To solve those problems, we propose a DRL method based on the attention mechanism with a vehicle selection decoder accounting for the heterogeneous fleet constraint and a node selection decoder accounting for the route construction, which learns to construct a solution by automatically selecting both a vehicle and a node for this vehicle at each step. Experimental results based on randomly generated instances show that, with desirable generalization to various problem sizes, our method outperforms the state-of-the-art DRL method and most of the conventional heuristics, and also delivers competitive performance against the state-of-the-art heuristic method, that is, slack induction by string removal. In addition, the results of extended experiments demonstrate that our method is also able to solve CVRPLib instances with satisfactory performance.
15. Liu S, Lin Q, Tian Y, Tan KC. A Variable Importance-Based Differential Evolution for Large-Scale Multiobjective Optimization. IEEE Trans Cybern 2022;52:13048-13062. PMID: 34406958; DOI: 10.1109/tcyb.2021.3098186.
Abstract
Large-scale multiobjective optimization problems (LMOPs) bring significant challenges for traditional evolutionary operators, as their search capability cannot efficiently handle the huge decision space. Some newly designed search methods for LMOPs usually classify all variables into different groups and then optimize the variables in the same group with the same manner, which can speed up the population's convergence. Following this research direction, this article suggests a differential evolution (DE) algorithm that favors searching the variables with higher importance to the solving of LMOPs. The importance of each variable to the target LMOP is quantized and then all variables are categorized into different groups based on their importance. The variable groups with higher importance are allocated with more computational resources using DE. In this way, the proposed method can efficiently generate offspring in a low-dimensional search subspace formed by more important variables, which can significantly speed up the convergence. During the evolutionary process, this search subspace for DE will be expanded gradually, which can strike a good balance between exploration and exploitation in tackling LMOPs. Finally, the experiments validate that our proposed algorithm can perform better than several state-of-the-art evolutionary algorithms for solving various benchmark LMOPs.
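A rough Python sketch of the grouping step described above: variables are ranked by a measured importance value and split into groups, with the more important groups intended to receive a larger share of the evaluation budget. The importance measure itself and the DE budget allocation are not shown and are the paper's own contributions.

```python
import numpy as np

def importance_groups(importance, n_groups=3):
    """Hypothetical sketch: rank variables by importance and cut the ranking
    into groups; higher-importance groups would receive more DE evaluations."""
    order = np.argsort(importance)[::-1]      # most important first
    return np.array_split(order, n_groups)

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    imp = rng.random(12)                      # placeholder importance values
    for g, idx in enumerate(importance_groups(imp)):
        print("group", g, idx)
```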
16. Improving evolutionary algorithms with information feedback model for large-scale many-objective optimization. Appl Intell 2022. DOI: 10.1007/s10489-022-03964-9.
17. Tian Y, Zhang Y, Su Y, Zhang X, Tan KC, Jin Y. Balancing Objective Optimization and Constraint Satisfaction in Constrained Evolutionary Multiobjective Optimization. IEEE Trans Cybern 2022;52:9559-9572. PMID: 33729963; DOI: 10.1109/tcyb.2020.3021138.
Abstract
Both objective optimization and constraint satisfaction are crucial for solving constrained multiobjective optimization problems, but the existing evolutionary algorithms encounter difficulties in striking a good balance between them when tackling complex feasible regions. To address this issue, this article proposes a two-stage evolutionary algorithm, which adjusts the fitness evaluation strategies during the evolutionary process to adaptively balance objective optimization and constraint satisfaction. The proposed algorithm can switch between the two stages according to the status of the current population, enabling the population to cross the infeasible region and reach the feasible regions in one stage, and to spread along the feasible boundaries in the other stage. Experimental studies on four benchmark suites and three real-world applications demonstrate the superiority of the proposed algorithm over the state-of-the-art algorithms, especially on problems with complex feasible regions.
18. Recursive grouping and dynamic resource allocation method for large-scale multi-objective optimization problem. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109651.
19. Yan Z, Tan Y, Chen H, Meng L, Zhang H. An operator pre-selection strategy for multiobjective evolutionary algorithm based on decomposition. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.08.039.
20. Yang S, Tian Y, He C, Zhang X, Tan KC, Jin Y. A Gradient-Guided Evolutionary Approach to Training Deep Neural Networks. IEEE Trans Neural Netw Learn Syst 2022;33:4861-4875. PMID: 33661739; DOI: 10.1109/tnnls.2021.3061630.
Abstract
It has been widely recognized that the efficient training of neural networks (NNs) is crucial to classification performance. While a series of gradient-based approaches have been extensively developed, they are criticized for the ease of trapping into local optima and sensitivity to hyperparameters. Due to the high robustness and wide applicability, evolutionary algorithms (EAs) have been regarded as a promising alternative for training NNs in recent years. However, EAs suffer from the curse of dimensionality and are inefficient in training deep NNs (DNNs). By inheriting the advantages of both the gradient-based approaches and EAs, this article proposes a gradient-guided evolutionary approach to train DNNs. The proposed approach suggests a novel genetic operator to optimize the weights in the search space, where the search direction is determined by the gradient of weights. Moreover, the network sparsity is considered in the proposed approach, which highly reduces the network complexity and alleviates overfitting. Experimental results on single-layer NNs, deep-layer NNs, recurrent NNs, and convolutional NNs (CNNs) demonstrate the effectiveness of the proposed approach. In short, this work not only introduces a novel approach for training DNNs but also enhances the performance of EAs in solving large-scale optimization problems.
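A hypothetical Python sketch of a gradient-guided genetic operator in the spirit of the abstract: the offspring follows the negative gradient like a gradient step, perturbed by evolutionary noise, with part of the weights zeroed for sparsity. The actual operator, learning rate, and sparsity handling in the paper may differ.

```python
import numpy as np

def gradient_guided_mutation(weights, grad, lr=0.01, sigma=0.001, sparsity=0.5, rng=None):
    """Hypothetical gradient-guided operator: move against the gradient (like
    SGD), add evolutionary noise, and zero a fraction of weights to encourage
    network sparsity, as described in the abstract."""
    rng = rng or np.random.default_rng()
    child = weights - lr * grad + sigma * rng.standard_normal(weights.shape)
    mask = rng.random(weights.shape) > sparsity    # keep roughly (1 - sparsity) of weights
    return child * mask

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    w = rng.standard_normal(10)
    g = 2 * w                                      # gradient of sum(w**2)
    print(gradient_guided_mutation(w, g, rng=rng))
```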
21. An improved large-scale sparse multi-objective evolutionary algorithm using unsupervised neural network. Appl Intell 2022. DOI: 10.1007/s10489-022-04037-7.
22. Nasseh Chaffi B, Rahmani M. A novel two-phase hybrid selection mechanism feeder to improve performance of many-objective optimization algorithms. Evol Intell 2022. DOI: 10.1007/s12065-022-00763-6.
23. Li Y, Li W, Zhao Y, Li S. Hybrid multi-objective optimization algorithm based on angle competition and neighborhood protection mechanism. Appl Intell 2022. DOI: 10.1007/s10489-022-03920-7.
24. Ge Y, Chen D, Zou F, Fu M, Ge F. Large-scale multiobjective optimization with adaptive competitive swarm optimizer and inverse modeling. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.07.018.
25. Pan A, Shen B, Wang L. Ensemble of resource allocation strategies in decision and objective spaces for multiobjective optimization. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.05.005.
26. Su Y, Jin Z, Tian Y, Zhang X, Tan KC. Comparing the Performance of Evolutionary Algorithms for Sparse Multi-Objective Optimization via a Comprehensive Indicator [Research Frontier]. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3180913.
Affiliation(s)
- Kay Chen Tan
- The Hong Kong Polytechnic University, Hong Kong SAR
27. A tri-stage competitive swarm optimizer for constrained multi-objective optimization. Appl Intell 2022. DOI: 10.1007/s10489-022-03874-w.
28. Wang F, Wang X, Sun S. A reinforcement learning level-based particle swarm optimization algorithm for large-scale optimization. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.04.053.
29. Tian Y, Lu C, Zhang X, Cheng F, Jin Y. A Pattern Mining-Based Evolutionary Algorithm for Large-Scale Sparse Multiobjective Optimization Problems. IEEE Trans Cybern 2022;52:6784-6797. PMID: 33378271; DOI: 10.1109/tcyb.2020.3041325.
Abstract
In real-world applications, there exist a lot of multiobjective optimization problems whose Pareto-optimal solutions are sparse, that is, most variables of these solutions are 0. Generally, many sparse multiobjective optimization problems (SMOPs) contain a large number of variables, which pose grand challenges for evolutionary algorithms to find the optimal solutions efficiently. To address the curse of dimensionality, this article proposes an evolutionary algorithm for solving large-scale SMOPs, which aims to mine the sparse distribution of the Pareto-optimal solutions and, thus, considerably reduces the search space. More specifically, the proposed algorithm suggests an evolutionary pattern mining approach to detect the maximum and minimum candidate sets of the nonzero variables in the Pareto-optimal solutions, and uses them to limit the dimensions in generating offspring solutions. For further performance enhancement, a binary crossover operator and a binary mutation operator are designed to ensure the sparsity of solutions. According to the results on eight benchmark problems and four real-world problems, the proposed algorithm is superior over existing evolutionary algorithms in solving large-scale SMOPs.
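A small Python sketch of how mined candidate sets could restrict a binary mutation so that offspring masks stay sparse: variables in the minimum set are always kept, variables outside the maximum set are always off, and only the remainder may flip. The operator below is an illustrative reconstruction, not the paper's exact design.

```python
import numpy as np

def restricted_binary_mutation(mask, min_set, max_set, rng, p=0.01):
    """Hypothetical sketch: force on the mined minimum candidate set, force off
    everything outside the maximum candidate set, and only flip free positions."""
    child = mask.copy()
    free = max_set & ~min_set                      # positions the operator may touch
    flips = free & (rng.random(mask.shape) < p)
    child[flips] ^= True
    child[min_set] = True
    child[~max_set] = False
    return child

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    d = 20
    mask = rng.random(d) < 0.2                     # sparse binary mask
    min_set = np.zeros(d, bool); min_set[:2] = True
    max_set = np.zeros(d, bool); max_set[:10] = True
    print(restricted_binary_mutation(mask, min_set, max_set, rng, p=0.3).astype(int))
```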
30. Qi S, Zou J, Yang S, Jin Y, Zheng J, Yang X. A Self-exploratory Competitive Swarm Optimization Algorithm for Large-Scale Multiobjective Optimization. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.07.110.
31. Hamza MA, Albraikan AA, Alzahrani JS, Dhahbi S, Al-Turaiki I, Al Duhayyim M, Yaseen I, Eldesouki MI. Optimal Deep Transfer Learning-Based Human-Centric Biomedical Diagnosis for Acute Lymphoblastic Leukemia Detection. Comput Intell Neurosci 2022;2022:7954111. PMID: 35676951; PMCID: PMC9170437; DOI: 10.1155/2022/7954111.
Abstract
Human-centric biomedical diagnosis (HCBD) has become a hot research topic in the healthcare sector, assisting physicians in disease diagnosis and decision-making. Leukemia is a pathology that affects both younger people and adults, leading to early death and a number of other symptoms. Computer-aided detection models are useful for reducing the probability of recommending unsuitable treatments and for helping physicians in the disease detection process. Besides, the rapid development of deep learning (DL) models assists in the detection and classification of medical-imaging-related problems. Since training DL models requires massive datasets, transfer learning models can be employed for image feature extraction. In this view, this study develops an optimal deep transfer learning-based human-centric biomedical diagnosis model for acute lymphoblastic leukemia detection (ODLHBD-ALLD). The presented ODLHBD-ALLD model mainly aims to detect and classify acute lymphoblastic leukemia using blood smear images. To accomplish this, the ODLHBD-ALLD model uses the Gabor filtering (GF) technique as a noise removal step. In addition, it makes use of a modified fuzzy c-means (MFCM) based segmentation approach for segmenting the images. Besides, the competitive swarm optimization (CSO) algorithm with the EfficientNetB0 model is utilized as a feature extractor. Lastly, the attention-based long short-term memory (ABiLSTM) model is employed for the proper identification of class labels. To investigate the enhanced performance of the ODLHBD-ALLD approach, a wide range of simulations was executed on an open-access dataset. The comparative analysis confirmed the superiority of the ODLHBD-ALLD model over the other existing approaches.
Affiliation(s)
- Manar Ahmed Hamza
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Amani Abdulrahman Albraikan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Jaber S. Alzahrani
- Department of Industrial Engineering, College of Engineering at Alqunfudah, Umm Al-Qura University, Mecca, Saudi Arabia
- Sami Dhahbi
- Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha, Saudi Arabia
- Isra Al-Turaiki
- Department of Information Technology, College of Computer and Information Sciences, King Saud University, P.O. BOX 145111, Riyadh 4545, Saudi Arabia
- Mesfer Al Duhayyim
- Department of Computer Science, College of Sciences and Humanities- Aflaj, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ishfaq Yaseen
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Mohamed I. Eldesouki
- Department of Information System, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
32. Machine learning-based framework to cover optimal Pareto-front in many-objective optimization. Complex Intell Syst 2022. DOI: 10.1007/s40747-022-00759-w.
Abstract
One of the crucial challenges in solving many-objective optimization problems is uniformly and thoroughly covering the Pareto front (PF). However, many of the state-of-the-art optimization algorithms can only approximate the shape of a many-objective PF by generating a limited number of non-dominated solutions. Exponentially increasing the population size is an inefficient strategy that dramatically increases the computational complexity of the algorithm, especially when solving many-objective problems. In this paper, we introduce a machine learning-based framework to cover the sparse PF surface that is initially generated by many-objective optimization algorithms, whether classical or meta-heuristic. The proposed method, called many-objective reverse mapping (MORM), constructs a learning model on the initial PF set as training data to reversely map objective values to the corresponding decision variables. Using the trained model, a set of candidate solutions can be generated by a variety of inexpensive generative techniques such as opposition-based learning and Latin hypercube sampling in both the objective and decision spaces. The iteratively generated non-dominated candidate solutions cover the initial PF efficiently, with no further need for any optimization algorithm. We validate the proposed framework on a set of well-known many-objective optimization benchmarks and two well-known real-world problems. The coverage of the PF is illustrated and numerically compared with the state-of-the-art many-objective algorithms. Statistical tests conducted on comparison measures such as HV, IGD, and the contribution ratio of the built PF reveal that the proposed collaborative framework surpasses its competitors on most of the problems. In addition, MORM covers the PF effectively compared to other methods, even with the aid of a large population size.
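A minimal Python sketch of the reverse-mapping idea, using scikit-learn's k-nearest-neighbours regressor as a stand-in learning model and plain uniform sampling in objective space; the paper's own choices of model and generative sampling (Latin hypercube sampling, opposition-based learning) may differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def reverse_mapping_candidates(obj_init, dec_init, n_samples, rng):
    """Fit a regressor from objective vectors to decision vectors on the
    initial sparse front, sample new points in objective space, and map them
    back to candidate decision vectors."""
    model = KNeighborsRegressor(n_neighbors=3).fit(obj_init, dec_init)
    lo, hi = obj_init.min(axis=0), obj_init.max(axis=0)
    samples = rng.uniform(lo, hi, size=(n_samples, obj_init.shape[1]))
    return model.predict(samples)             # candidate decision vectors

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    dec = rng.random((50, 12))                # initial non-dominated decisions
    obj = np.stack([dec.sum(1), (1 - dec).sum(1)], axis=1)  # toy 2-objective values
    print(reverse_mapping_candidates(obj, dec, 5, rng).shape)
```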
33. Zhou X, Bai W, He J, Dai J, Liu P, Zhao Y, Bao G. An Enhanced Positional Error Compensation Method for Rock Drilling Robots Based on LightGBM and RBFN. Front Neurorobot 2022;16:883816. PMID: 35645760; PMCID: PMC9136075; DOI: 10.3389/fnbot.2022.883816.
Abstract
Rock drilling robots can greatly reduce labor intensity and improve efficiency and quality in tunnel construction. However, due to the heavy load, large span, and multiple joints of the robot manipulator, the errors are diverse and non-linear, which poses challenges for intelligent, high-precision control of the manipulator. To enhance the control accuracy, a hybrid positional error compensation method based on a Radial Basis Function Network (RBFN) and Light Gradient Boosting Machine (LightGBM) is proposed for the rock drilling robot. First, the kinematic model of the robotic manipulator is established using the modified Denavit-Hartenberg (MDH) convention. Then a parallel difference algorithm is designed to modify the kinematic parameters to compensate for the geometric error. Afterward, non-geometric errors are analyzed and compensated for by applying RBFN and LightGBM with features derived from the kinematic model. Finally, error compensation experiments combining the geometric and non-geometric errors verify the performance of the proposed method.
Affiliation(s)
- Xuanyi Zhou
- Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Zhejiang University of Technology, Hangzhou, China
- Wenyu Bai
- Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Zhejiang University of Technology, Hangzhou, China
- Zhejiang Jinbang Sports Equipment Co. Ltd., Zhejiang, China
- *Correspondence: Wenyu Bai
- Jilin He
- State Key Laboratory of High Performance Complex Manufacturing, Central South University, Changsha, China
- Ju Dai
- State Key Laboratory of High Performance Complex Manufacturing, Central South University, Changsha, China
- Peng Liu
- Sunward Intelligent Equipment Company, Ltd., Changsha, China
- Yuming Zhao
- Sunward Intelligent Equipment Company, Ltd., Changsha, China
- Department of Precision Instruments, Tsinghua University, Beijing, China
- Guanjun Bao
- Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Zhejiang University of Technology, Hangzhou, China
34. Chen ZG, Zhan ZH, Kwong S, Zhang J. Evolutionary Computation for Intelligent Transportation in Smart Cities: A Survey [Review Article]. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3155330.
35. Liu Q, Jin Y, Heiderich M, Rodemann T, Yu G. An Adaptive Reference Vector-Guided Evolutionary Algorithm Using Growing Neural Gas for Many-Objective Optimization of Irregular Problems. IEEE Trans Cybern 2022;52:2698-2711. PMID: 33001813; DOI: 10.1109/tcyb.2020.3020630.
Abstract
Most reference vector-based decomposition algorithms for solving multiobjective optimization problems may not be well suited for solving problems with irregular Pareto fronts (PFs) because the distribution of predefined reference vectors may not match well with the distribution of the Pareto-optimal solutions. Thus, the adaptation of the reference vectors is an intuitive way for decomposition-based algorithms to deal with irregular PFs. However, most existing methods frequently change the reference vectors based on the activeness of the reference vectors within specific generations, slowing down the convergence of the search process. To address this issue, we propose a new method to learn the distribution of the reference vectors using the growing neural gas (GNG) network to achieve automatic yet stable adaptation. To this end, an improved GNG is designed for learning the topology of the PFs with the solutions generated during a period of the search process as the training data. We use the individuals in the current population as well as those in previous generations to train the GNG to strike a balance between exploration and exploitation. Comparative studies conducted on popular benchmark problems and a real-world hybrid vehicle controller design problem with complex and irregular PFs show that the proposed method is very competitive.
36. Liu G, Zhou R, Xu S, Zhu Y, Guo W, Chen YC, Chen G. Two-stage Competitive Particle Swarm Optimization Based Timing-driven X-routing for IC Design under Smart Manufacturing. ACM Trans Manag Inf Syst 2022. DOI: 10.1145/3531328.
Abstract
As timing delay becomes a critical factor in chip performance, there is a pressing need to optimize delay in IC design under smart manufacturing. As the best connection model for multi-terminal nets, the Steiner minimum tree has a wirelength and a maximum source-to-sink pathlength that are both decisive factors for routing timing delay. In addition, considering that X-routing can make the most of routing resources, this paper proposes a Timing-Driven X-routing Steiner Minimum Tree (TD-XSMT) algorithm based on two-stage competitive particle swarm optimization. The paper adopts a multi-objective particle swarm optimization algorithm and redesigns its framework to improve its performance. First, a two-stage learning strategy is presented, which balances the exploration and exploitation capabilities of each particle by learning edge structures and pseudo-Steiner point choices. In particular, in the second stage, a hybrid crossover strategy is designed to guarantee convergence quality. Second, a competition mechanism is adopted to select particle learning objects and enhance diversity. Finally, according to the characteristics of the discrete TD-XSMT problem, the mutation and crossover operators of the genetic algorithm are used to effectively discretize the proposed algorithm. Experimental results reveal that TSCPSO-TD-XSMT can obtain a smooth trade-off between wirelength and maximum source-to-sink pathlength, and achieves distinguished timing delay optimization.
Affiliation(s)
- Genggeng Liu
- College of Computer and Data Science, Fuzhou University, China
- Ruping Zhou
- College of Computer and Data Science, Fuzhou University, China
- Saijuan Xu
- Department of Information Engineering, Fujian Business University, China
- Yuhan Zhu
- College of Computer and Data Science, Fuzhou University, China
- Wenzhong Guo
- College of Computer and Data Science, Fuzhou University, China
- Yeh-Cheng Chen
- Department of Computer Science, University of California, USA
- Guolong Chen
- College of Computer and Data Science, Fuzhou University, China
37. Elite Directed Particle Swarm Optimization with Historical Information for High-Dimensional Problems. Mathematics 2022. DOI: 10.3390/math10091384.
Abstract
High-dimensional optimization problems are ubiquitous in every field nowadays, which seriously challenge the optimization ability of existing optimizers. To solve this kind of optimization problems effectively, this paper proposes an elite-directed particle swarm optimization (EDPSO) with historical information to explore and exploit the high-dimensional solution space efficiently. Specifically, in EDPSO, the swarm is first separated into two exclusive sets based on the Pareto principle (80-20 rule), namely the elite set containing the top best 20% of particles and the non-elite set consisting of the remaining 80% of particles. Then, the non-elite set is further separated into two layers with the same size from the best to the worst. As a result, the swarm is divided into three layers. Subsequently, particles in the third layer learn from those in the first two layers, while particles in the second layer learn from those in the first layer, on the condition that particles in the first layer remain unchanged. In this way, the learning effectiveness and the learning diversity of particles could be largely promoted. To further enhance the learning diversity of particles, we maintain an additional archive to store obsolete elites, and use the predominant elites in the archive along with particles in the first two layers to direct the update of particles in the third layer. With these two mechanisms, the proposed EDPSO is expected to compromise search intensification and diversification well at the swarm level and the particle level, to explore and exploit the solution space. Extensive experiments are conducted on the widely used CEC’2010 and CEC’2013 high-dimensional benchmark problem sets to validate the effectiveness of the proposed EDPSO. Compared with several state-of-the-art large-scale algorithms, EDPSO is demonstrated to achieve highly competitive or even much better performance in tackling high-dimensional problems.
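The 80-20 layering described in the abstract is straightforward to sketch in Python; the velocity update in which lower layers learn from particles sampled in the layers above, and the archive of obsolete elites, are omitted here.

```python
import numpy as np

def split_layers(fitness):
    """Partition the swarm following the 80-20 rule in the abstract: the best
    20% form the elite layer, the remaining 80% are split into two equal
    layers; lower layers learn from particles in the layers above."""
    order = np.argsort(fitness)                  # minimisation assumed
    n = len(fitness)
    n_elite = max(1, int(0.2 * n))
    elite = order[:n_elite]
    layer2, layer3 = np.array_split(order[n_elite:], 2)
    return elite, layer2, layer3

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    fit = rng.random(20)
    elite, l2, l3 = split_layers(fit)
    print(len(elite), len(l2), len(l3))
```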
38. He C, Li M, Zhang C, Chen H, Li X, Li J. A competitive swarm optimizer with probabilistic criteria for many-objective optimization problems. Complex Intell Syst 2022. DOI: 10.1007/s40747-022-00714-9.
Abstract
Although multiobjective particle swarm optimizers (MOPSOs) have performed well on multiobjective optimization problems (MOPs) in recent years, there are still several noticeable challenges. For example, the traditional particle swarm optimizers are incapable of correctly discriminating between the personal and global best particles in MOPs, possibly leading to the MOPSOs lacking sufficient selection pressure toward the true Pareto front (PF). In addition, some particles will be far from the PF after updating, this may lead to invalid search and weaken the convergence efficiency. To address the abovementioned issues, we propose a competitive swarm optimizer with probabilistic criteria for many-objective optimization problems (MaOPs). First, we exploit a probability estimation method to select the leaders via the probability space, which ensures the search direction to be correct. Second, we design a novel competition mechanism that uses winner pool instead of the global and personal best particles to guide the entire population toward the true PF. Third, we construct an environment selection scheme with the mixed probability criterion to maintain population diversity. Finally, we present a swarm update strategy to ensure that the next generation particles are valid and the invalid search is avoided. We employ various benchmark problems with 3–15 objectives to conduct a comprehensive comparison between the presented method and several state-of-the-art approaches. The comparison results demonstrate that the proposed method performs well in terms of searching efficiency and population diversity, and especially shows promising potential for large-scale multiobjective optimization problems.
39. Pan JS, Liu N, Chu SC. A competitive mechanism based multi-objective differential evolution algorithm and its application in feature selection. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108582.
|
40
|
Huang W, Zhang W. Multi-objective optimization based on an adaptive competitive swarm optimizer. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.11.031] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
41
|
Ge H, Zhang N, Sun L, Wang X, Hou Y. A memetic evolution system with statistical variable classification for large-scale many-objective optimization. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108158] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
42
|
A novel solver for multi-objective optimization: dynamic non-dominated sorting genetic algorithm (DNSGA). Soft comput 2021. [DOI: 10.1007/s00500-021-06223-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
43
|
Wu X, Wang Y, Tian S, Wang Z. A Reference Point Selection and Direction Guidance-Based Algorithm for Large-Scale Multi-Objective Optimization. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s0218001421590588] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Designing efficient algorithms for large-scale multi-objective optimization problems (LSMOPs) is currently very challenging. To tackle LSMOPs, we first design a new reference point selection strategy to enhance the diversity of the algorithm and prevent it from falling into local minima. The strategy selects not only some nondominated solutions with the largest crowding distance but also some relatively uniformly distributed solutions as the reference points, so that much better diversity and convergence can be obtained. Second, we propose a direction-guided offspring generation strategy, in which a set of potential search directions is designed to generate promising solutions; this balances the convergence and diversity of the obtained solutions and significantly improves the effectiveness of the algorithm. Based on these two strategies, we propose a new effective algorithm for LSMOPs. Numerical experiments are conducted on two widely used large-scale multi-objective benchmark problem sets with 200, 500, and 1000 decision variables, and a comparison with five state-of-the-art algorithms is made. The experimental results show that the proposed algorithm is effective and obtains significantly better solutions than the compared algorithms.
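A minimal sketch of the reference point selection step is given below. It computes NSGA-II-style crowding distances, keeps the nondominated solutions with the largest distances, and adds further points spread evenly along the first objective as a stand-in for the "relatively uniformly distributed" solutions; the helper names, the toy front, and the even-spread heuristic are assumptions rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def crowding_distance(objs):
    """NSGA-II-style crowding distance for a set of objective vectors."""
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf          # boundary points always kept
        span = objs[order[-1], k] - objs[order[0], k]
        if span > 0:
            dist[order[1:-1]] += (objs[order[2:], k] - objs[order[:-2], k]) / span
    return dist

def select_reference_points(nondominated, n_crowded, n_uniform):
    """Pick the most crowded-distance solutions plus evenly spread ones (illustrative)."""
    dist = crowding_distance(nondominated)
    crowded_idx = np.argsort(-dist)[:n_crowded]            # largest crowding distance first
    remaining = np.setdiff1d(np.arange(len(nondominated)), crowded_idx)
    order = remaining[np.argsort(nondominated[remaining, 0])]
    uniform_idx = order[np.linspace(0, len(order) - 1, n_uniform).astype(int)]
    return nondominated[np.concatenate([crowded_idx, uniform_idx])]

# usage: points on a toy bi-objective front f2 = 1 - sqrt(f1)
f1 = np.sort(rng.random(30))
front = np.column_stack([f1, 1 - np.sqrt(f1)])
refs = select_reference_points(front, n_crowded=5, n_uniform=5)
print(refs.shape)   # (10, 2)
```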
Affiliation(s)
- Xiangjuan Wu, School of Computer Science and Technology, Xidian University, Xi’an 710071, P. R. China
- Yuping Wang, School of Computer Science and Technology, Xidian University, Xi’an 710071, P. R. China
- Shuai Tian, School of Computer Science and Technology, Xidian University, Xi’an 710071, P. R. China
- Ziqing Wang, School of Computer Science and Technology, Xidian University, Xi’an 710071, P. R. China
|
44
|
Abstract
Sparse large-scale multi-objective optimization problems (LSMOPs) are widespread in real-world applications; they involve a large number of decision variables and have sparse Pareto optimal solutions, i.e., most decision variables of these solutions are zero. In recent years, sparse LSMOPs have attracted increasing attention in the evolutionary computation community. However, the recently tailored algorithms for sparse LSMOPs all put sparsity detection and maintenance first, so the nonzero variables can hardly be optimized sufficiently within a limited budget of function evaluations. To address this issue, this paper proposes to enhance the connection between real variables and binary variables within the two-layer encoding scheme with the assistance of variable grouping techniques. In this way, more effort can be devoted to the real part of the nonzero variables, achieving a balance between sparsity maintenance and variable optimization. According to the experimental results on eight benchmark problems and three real-world applications, the proposed algorithm is superior to existing state-of-the-art evolutionary algorithms for sparse LSMOPs.
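To make the two-layer encoding concrete, the sketch below pairs a real-valued vector with a binary mask in which each mask bit switches an entire group of decision variables on or off; mutation flips a few group bits and perturbs only the real variables in active groups. The contiguous grouping, operator probabilities, and function names are assumptions used for illustration, not the paper's specific grouping technique.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_groups(n_vars, n_groups):
    """Split decision-variable indices into contiguous groups (a simple stand-in
    for the variable-grouping techniques mentioned in the abstract)."""
    return np.array_split(np.arange(n_vars), n_groups)

def decode(real_part, mask, groups):
    """Two-layer decoding: each binary mask bit activates a whole group of real
    variables, which keeps the decoded solution sparse."""
    x = np.zeros_like(real_part)
    for bit, idx in zip(mask, groups):
        if bit:
            x[idx] = real_part[idx]
    return x

def mutate(real_part, mask, groups, p_flip=0.1, sigma=0.1):
    """Illustrative operators: flip a few group bits (sparsity layer) and add
    Gaussian noise only to real variables in active groups, so search effort
    concentrates on the nonzero part of the solution."""
    mask = mask ^ (rng.random(len(mask)) < p_flip)
    for bit, idx in zip(mask, groups):
        if bit:
            real_part[idx] += rng.normal(0, sigma, len(idx))
    return real_part, mask

# usage: 1000 decision variables, 20 groups, roughly 10% of groups active
n_vars, n_groups = 1000, 20
groups = make_groups(n_vars, n_groups)
real_part = rng.uniform(-1, 1, n_vars)
mask = rng.random(n_groups) < 0.1
x = decode(real_part, mask, groups)
print("nonzero variables:", np.count_nonzero(x), "of", n_vars)
real_part, mask = mutate(real_part, mask, groups)
```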
|
45
|
Han F, Zheng M, Ling Q. An improved multiobjective particle swarm optimization algorithm based on tripartite competition mechanism. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02665-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
46
|
Premkumar M, Jangir P, Sowmya R. MOGBO: A new Multiobjective Gradient-Based Optimizer for real-world structural optimization problems. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106856] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
47
|
Chen L, Wang H, Ma W. Two-stage multi-tasking transform framework for large-scale many-objective optimization problems. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00273-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Real-world optimization applications in complex systems always contain multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms such as MOEA/D, NSGA-III, and KnEA. However, when the numbers of decision variables and objectives increase, the computation costs of these algorithms become unaffordable. To reduce this high computation cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage combines a multi-tasking optimization strategy with a bi-directional search strategy, reformulating the original problem as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, in the second stage the proposed algorithm applies multi-tasking optimization to a number of sub-problems based on reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases and shows advantages in both convergence and diversity.
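The second-stage idea of forming sub-problems from reference points in the objective space can be illustrated with a standard decomposition sketch: Das-Dennis reference points plus a weighted Tchebycheff scalarization, giving one scalar sub-problem per reference point. This is a generic stand-in rather than the paper's multi-tasking transform; the number of divisions, the assumed ideal point, and the small weight offset are assumptions.

```python
import numpy as np
from itertools import combinations

def das_dennis(n_obj, n_div):
    """Uniformly spread reference points on the unit simplex (Das-Dennis construction)."""
    pts = []
    for bars in combinations(range(n_div + n_obj - 1), n_obj - 1):
        cuts = np.array([-1, *bars, n_div + n_obj - 1])
        pts.append((np.diff(cuts) - 1) / n_div)      # bar gaps give the simplex coordinates
    return np.array(pts)

def tchebycheff(objs, weight, ideal):
    """Weighted Tchebycheff scalarization of one objective vector."""
    return np.max(weight * np.abs(objs - ideal))

# usage: decompose a 3-objective problem into reference-point sub-problems
refs = das_dennis(n_obj=3, n_div=12)                 # 91 reference points
ideal = np.zeros(3)                                  # ideal point (assumed known here)
objs = np.array([0.2, 0.5, 0.9])                     # objectives of one candidate solution
scores = [tchebycheff(objs, w + 1e-6, ideal) for w in refs]   # one score per sub-problem
print(len(refs), min(scores))
```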
|
48
|
Cheng S, Zhan H, Yao H, Fan H, Liu Y. Large-scale many-objective particle swarm optimizer with fast convergence based on Alpha-stable mutation and Logistic function. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.106947] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
49
|
Prajapati A. A comparative study of many-objective optimizers on large-scale many-objective software clustering problems. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00270-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Over the past two decades, several multi-objective optimizers (MOOs) have been proposed to address different aspects of multi-objective optimization problems (MOPs). Unfortunately, many MOOs experience performance degradation when applied to MOPs with a large number of decision variables and objective functions; in particular, their performance decreases rapidly once the number of decision variables exceeds one hundred and the number of objectives exceeds three. To address the challenges posed by this special class of MOPs, some large-scale multi-objective optimization optimizers (L-MuOOs) and large-scale many-objective optimization optimizers (L-MaOOs) have been developed in the literature. Even after extensive development in this direction, the superiority of these optimizers has not been tested on real-world optimization problems containing a large number of decision variables and objectives, such as large-scale many-objective software clustering problems (L-MaSCPs). In this study, the performance of nine L-MuOOs and L-MaOOs (i.e., S3-CMA-ES, LMOSCO, LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, H-RVEA, and DREA) is evaluated and compared on five L-MaSCPs in terms of the IGD, Hypervolume, and MQ metrics. The results show that S3-CMA-ES and LMOSCO perform better than LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, H-RVEA, and DREA in most cases; LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, and DREA are average performers, and H-RVEA is the worst performer.
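Of the three quality indicators used in this comparison, IGD is the most commonly re-implemented; a minimal version is sketched below. The toy front and the coarse approximation set are illustrative only, and real studies would use the benchmark's sampled reference front.

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted Generational Distance: mean distance from each reference-front
    point to its nearest obtained solution (smaller is better)."""
    d = np.linalg.norm(reference_front[:, None, :] - obtained_set[None, :, :], axis=2)
    return d.min(axis=1).mean()

# usage on a toy bi-objective front f2 = 1 - f1
t = np.linspace(0, 1, 100)
true_front = np.column_stack([t, 1 - t])
approx = np.column_stack([t[::10], 1 - t[::10]]) + 0.01   # coarse, slightly shifted approximation
print(round(igd(true_front, approx), 4))
```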
|
50
|
Xue F, Dong T, You S, Liu Y, Tang H, Chen L, Yang X, Li J. A hybrid many-objective competitive swarm optimization algorithm for large-scale multirobot task allocation problem. INT J MACH LEARN CYB 2020. [DOI: 10.1007/s13042-020-01213-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|