1. Xue ZF, Wang ZJ, Zhan ZH, Kwong S, Zhang J. Neural Network-Based Knowledge Transfer for Multitask Optimization. IEEE Transactions on Cybernetics 2024; 54:7541-7554. [PMID: 39383079] [DOI: 10.1109/tcyb.2024.3469371]
Abstract
Knowledge transfer (KT) is crucial for optimizing tasks in evolutionary multitask optimization (EMTO). However, most existing KT methods achieve only superficial KT and lack the ability to deeply mine the similarities or relationships among different tasks. This limitation may result in negative transfer, thereby degrading KT performance. As KT efficiency strongly depends on the similarity of tasks, this article proposes a neural network (NN)-based KT (NNKT) method that analyzes the similarities of tasks and obtains transfer models for information prediction between different tasks for high-quality KT. First, NNKT collects and pairs the solutions of multiple tasks and trains NNs to obtain the transfer models between tasks. Second, the obtained NNs transfer knowledge by predicting new promising solutions. Meanwhile, a simple adaptive strategy is developed to find a suitable population size to satisfy the varying search requirements during evolution. Comparison of the experimental results between the proposed NN-based multitask optimization (NNMTO) algorithm and several state-of-the-art multitask algorithms on the IEEE Congress on Evolutionary Computation (IEEE CEC) 2017 and IEEE CEC2022 benchmarks demonstrates the efficiency and effectiveness of NNMTO. Moreover, NNKT can be seamlessly applied to other EMTO algorithms to further enhance their performance. Finally, NNMTO is applied to a real-world multitask rover navigation problem to further demonstrate its applicability.
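The transfer-model mechanism this abstract describes (pair solutions across tasks, fit a model, predict promising solutions for the other task) can be sketched compactly. This is an illustrative toy, not the authors' code: a least-squares affine map stands in for the trained NN, and all names (`learn_transfer_map`, `transfer`) are assumptions.

```python
import numpy as np

def learn_transfer_map(src_pop, tgt_pop, src_fit, tgt_fit):
    """Pair solutions of two tasks by fitness rank, then fit an affine
    source-to-target mapping (a least-squares stand-in for the paper's NN)."""
    src = src_pop[np.argsort(src_fit)]             # best-to-worst order
    tgt = tgt_pop[np.argsort(tgt_fit)]
    A = np.hstack([src, np.ones((len(src), 1))])   # affine design matrix
    W, *_ = np.linalg.lstsq(A, tgt, rcond=None)
    return W

def transfer(src_solutions, W):
    """Predict promising target-task solutions from source-task solutions."""
    A = np.hstack([src_solutions, np.ones((len(src_solutions), 1))])
    return A @ W

# Toy check: two tasks whose solutions are related by a known affine map.
rng = np.random.default_rng(0)
src = rng.random((20, 5))
tgt = src * 2.0 + 0.5
W = learn_transfer_map(src, tgt, src.sum(axis=1), tgt.sum(axis=1))
pred = transfer(src, W)
```

When the inter-task relationship is close to affine, the fitted map recovers it exactly; the paper's NN plays the same role for relationships a linear model cannot capture.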
2. Wang Y, Zhang Q, Wang GG, Cheng H. The application of evolutionary computation in generative adversarial networks (GANs): a systematic literature survey. Artificial Intelligence Review 2024; 57:182. [DOI: 10.1007/s10462-024-10818-y]
Abstract
As a subfield of deep learning (DL), generative adversarial networks (GANs) have produced impressive generative results by applying deep generative models to create synthetic data and by performing an adversarial training process. Nevertheless, numerous issues related to training instability urgently need to be addressed. Evolutionary computation (EC), which follows the corresponding paradigm of biological evolution, overcomes these problems and improves the ability of evolutionary GANs to deal with real-world applications. Therefore, this paper presents a systematic literature survey combining EC and GANs. First, the basic theories of GANs and EC are analyzed and summarized. Second, to provide readers with a comprehensive view, this paper outlines the recent advances in combining EC and GANs after detailed classification and introduces each of them. These classifications include evolutionary GANs and their variants, GANs with evolution strategies and differential evolution, GANs combined with neuroevolution, evolutionary GANs for different optimization problems, and applications of evolutionary GANs. Detailed information on the evaluation metrics, network structures, and comparisons of these models is presented in several tables. Finally, future directions and possible perspectives for further development are discussed.
3. Li L, Li Y, Lin Q, Liu S, Zhou J, Ming Z, Coello Coello CA. Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization. IEEE Transactions on Cybernetics 2024; 54:3502-3515. [PMID: 37486827] [DOI: 10.1109/tcyb.2023.3287596]
Abstract
The competitive swarm optimizer (CSO) classifies swarm particles into loser and winner particles and then uses the winner particles to efficiently guide the search of the loser particles. This approach has shown very promising performance in solving large-scale multiobjective optimization problems (LMOPs). However, most studies of CSOs ignore the evolution of the winner particles, although their quality is very important for the final optimization performance. To fill this research gap, this article proposes a new neural net-enhanced CSO for solving LMOPs, called NN-CSO, which not only guides the loser particles via the original CSO strategy but also applies a trained neural network (NN) model to evolve the winner particles. First, the swarm particles are classified into winner and loser particles by pairwise competition. Then, the loser and winner particles are treated as the input and desired output, respectively, to train the NN model, which tries to learn promising evolutionary dynamics by driving the loser particles toward the winners. Finally, when model training is complete, the winner particles are evolved by the well-trained NN model, while the loser particles are still guided by the winner particles to maintain the search pattern of CSOs. To evaluate the performance of NN-CSO, several LMOPs with up to ten objectives and 1000 decision variables are adopted, and the experimental results show that the designed NN model can significantly improve the performance of CSOs and offers advantages over several state-of-the-art large-scale multiobjective evolutionary algorithms, as well as over model-based evolutionary algorithms.
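The winner/loser mechanics that NN-CSO builds on can be illustrated with a minimal single-objective CSO step. This sketches only the generic competitive-swarm update (not the NN part and not the authors' implementation); the `phi` social term and all names are choices made for the example.

```python
import numpy as np

def cso_step(pop, vel, fitness, phi=0.1, rng=None):
    """One competitive-swarm step: particles are randomly paired; each
    pair's loser learns from its winner (and the swarm mean), while the
    winner passes through unchanged."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(pop))
    mean = pop.mean(axis=0)
    new_pop, new_vel = pop.copy(), vel.copy()
    for a, b in idx.reshape(-1, 2):
        w, l = (a, b) if fitness[a] < fitness[b] else (b, a)  # minimization
        r1, r2, r3 = rng.random((3, pop.shape[1]))
        new_vel[l] = r1 * vel[l] + r2 * (pop[w] - pop[l]) + phi * r3 * (mean - pop[l])
        new_pop[l] = pop[l] + new_vel[l]
    return new_pop, new_vel

# Minimize the sphere function with 40 particles in 10 dimensions.
sphere = lambda x: (x ** 2).sum(axis=1)
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (40, 10))
vel = np.zeros_like(pop)
f0 = sphere(pop).min()
for _ in range(200):
    pop, vel = cso_step(pop, vel, sphere(pop), rng=rng)
```

Because the winner of every pairing is kept unchanged, the best fitness in the swarm can never worsen; NN-CSO's contribution is precisely to evolve those otherwise static winners.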
4. Cheng H, Wang GG, Chen L, Wang R. A dual-population multi-objective evolutionary algorithm driven by generative adversarial networks for benchmarking and protein-peptide docking. Computers in Biology and Medicine 2024; 168:107727. [PMID: 38029532] [DOI: 10.1016/j.compbiomed.2023.107727]
Abstract
Multi-objective optimization problems (MOPs) are optimization problems in which multiple conflicting objective functions are optimized simultaneously. To solve MOPs, some algorithms use machine learning models to drive evolutionary algorithms, leading to a variety of model-based evolutionary algorithms. However, model collapse can occur during the generation of candidate solutions, which results in local optima and poor diversity in model-based evolutionary algorithms. To address this problem, we propose a dual-population multi-objective evolutionary algorithm driven by a Wasserstein generative adversarial network with gradient penalty (DGMOEA), where the dual populations coordinate and cooperate to generate high-quality solutions, thus improving the performance of the evolutionary algorithm. We compare the proposed algorithm with 7 state-of-the-art algorithms on 20 multi-objective benchmark functions. Experimental results indicate that DGMOEA achieves significant results in solving MOPs, outperforming the comparison algorithms on the IGD and HV metrics on 15 and 18 of the 20 benchmarks, respectively. Our algorithm is also evaluated on the LEADS-PEP dataset, which contains 53 protein-peptide complexes; the experimental results on the protein-peptide docking problem indicate that DGMOEA can effectively reduce the RMSD between the generated and original 3D poses of the peptide and achieve more competitive results.
Affiliation(s)
- Honglei Cheng: School of Computer Science and Technology, Ocean University of China, Qingdao, China
- Gai-Ge Wang: School of Computer Science and Technology, Ocean University of China, Qingdao, China
- Liyan Chen: Institute of Big Data and Information Technology, Wenzhou University, Wenzhou, China
- Rui Wang: College of Systems Engineering, National University of Defense Technology, Changsha, China; Xiangjiang Laboratory, Changsha, China
5. Zhang K, Shen C, Yen GG. Multipopulation-Based Differential Evolution for Large-Scale Many-Objective Optimization. IEEE Transactions on Cybernetics 2023; 53:7596-7608. [PMID: 35731754] [DOI: 10.1109/tcyb.2022.3178929]
Abstract
In recent years, numerous efficient many-objective evolutionary optimization algorithms have been proposed to find well-converged and well-distributed nondominated optimal solutions. However, their scalability may deteriorate drastically when solving large-scale many-objective optimization problems (LSMaOPs). When facing a high-dimensional solution space with more than 100 decision variables, some of them lose diversity and become trapped in local optima, while others achieve poor convergence performance. This article proposes a multipopulation-based differential evolution algorithm, called LSMaODE, which solves LSMaOPs efficiently and effectively. To exploit and explore the exponentially large decision space, the proposed algorithm divides the population into two groups of subpopulations, which are optimized with different strategies. First, the randomized coordinate descent technique is applied to 10% of the individuals to exploit the decision variables independently. This subpopulation maintains diversity in the decision space to avoid premature convergence to a local optimum. Second, the remaining 90% of the individuals are optimized with a nondominated-guided random interpolation strategy, which interpolates each individual among three randomly chosen nondominated solutions. This strategy guides the population to converge quickly toward the nondominated solutions while maintaining good distribution in the objective space. Finally, the proposed LSMaODE is evaluated on the LSMOP test suites for scalability in both the decision and objective dimensions, and its performance is compared against five state-of-the-art large-scale many-objective evolutionary algorithms. The experimental results show that LSMaODE provides highly competitive performance.
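The nondominated-guided random interpolation described for the 90% subpopulation admits a compact sketch, assuming that interpolating an individual among three nondominated solutions means pulling it toward a random convex combination of them; this is a hypothetical reading, not the authors' exact operator.

```python
import numpy as np

def interpolate_offspring(individual, nondominated, rng=None):
    """Move an individual toward a random convex combination of three
    randomly chosen nondominated solutions (sketch of the guided
    random interpolation strategy)."""
    rng = rng if rng is not None else np.random.default_rng()
    a, b, c = nondominated[rng.choice(len(nondominated), 3, replace=False)]
    w = rng.dirichlet(np.ones(3))      # random convex weights summing to 1
    target = w[0] * a + w[1] * b + w[2] * c
    return individual + rng.random() * (target - individual)

# A 1000-variable individual pulled toward a stand-in nondominated archive.
rng = np.random.default_rng(2)
front = rng.random((10, 1000))
x = rng.uniform(-1, 2, 1000)
child = interpolate_offspring(x, front, rng=rng)
```

The child lies on the segment between the parent and a point in the convex hull of the three chosen solutions, which is what gives the operator its convergence pressure toward the nondominated set.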
6. Zhang Y, Lian H, Yang G, Zhao S, Ni P, Chen H, Li C. Inaccurate-Supervised Learning With Generative Adversarial Nets. IEEE Transactions on Cybernetics 2023; 53:1522-1536. [PMID: 34464286] [DOI: 10.1109/tcyb.2021.3104848]
Abstract
Inaccurate-supervised learning (ISL) is a weakly supervised learning framework for imprecise annotation, which subsumes several popular learning frameworks, mainly partial label learning (PLL), partial multilabel learning (PML), and multiview PML (MVPML). PLL, PML, and MVPML have each been solved as independent models through different methods, with no general framework currently applicable to all of them, and most existing methods for solving them are based on traditional machine-learning techniques such as logistic regression, KNN, SVM, and decision trees. Prior to this study, no single general framework used adversarial networks to solve ISL problems. To narrow this gap, this study proposes an adversarial network structure for ISL problems, called ISL with generative adversarial nets (ISL-GAN). In ISL-GAN, fake samples that are quite similar to real samples gradually help the discriminator disambiguate the noisy labels of the real samples. We also provide theoretical analyses of how ISL-GAN effectively handles ISL data. In this article, we propose a general framework to solve PLL, PML, and MVPML, whereas in the published conference version we adopted a specific framework, a special case of the general one, to solve the PLL problem. Finally, the effectiveness is demonstrated through extensive experiments on various imprecise annotation learning tasks, including PLL, PML, and MVPML.
7. Ren J, Qiu F, Hu H. Multiple sparse detection-based evolutionary algorithm for large-scale sparse multiobjective optimization problems. Complex & Intelligent Systems 2023. [DOI: 10.1007/s40747-022-00963-8]
Abstract
Sparse multiobjective optimization problems are common in practical applications. Such problems are characterized by large-scale decision variables and sparse optimal solutions. General large-scale multiobjective optimization problems (LSMOPs) have been extensively studied for many years and can be solved well by many excellent custom algorithms. However, when these algorithms are used to deal with sparse LSMOPs, they often encounter difficulties because the sparse nature of the problem is not considered. Therefore, aiming at sparse LSMOPs, an algorithm based on multiple sparse detection is proposed in this paper. The algorithm applies an adaptive sparse genetic operator that generates sparse solutions by detecting the sparsity of individuals. To remedy the deficiency of sparse detection caused by purely local detection, an enhanced sparse detection (ESD) strategy is proposed. The strategy uses binary coefficient vectors to integrate the masks of nondominated solutions; essentially, the mask is globally and deeply optimized by the coefficient vectors to enhance the sparsity of the solutions. In addition, the algorithm adopts an improved weighted optimization strategy to fully optimize the key nonzero variables and balance exploration and optimization. Finally, the proposed algorithm, named MOEA-ESD, is compared with current state-of-the-art algorithms to verify its effectiveness.
8. Liu S, Lin Q, Tian Y, Tan KC. A Variable Importance-Based Differential Evolution for Large-Scale Multiobjective Optimization. IEEE Transactions on Cybernetics 2022; 52:13048-13062. [PMID: 34406958] [DOI: 10.1109/tcyb.2021.3098186]
Abstract
Large-scale multiobjective optimization problems (LMOPs) pose significant challenges for traditional evolutionary operators, as their search capability cannot efficiently handle the huge decision space. Some newly designed search methods for LMOPs classify all variables into different groups and then optimize the variables in the same group in the same manner, which can speed up the population's convergence. Following this research direction, this article proposes a differential evolution (DE) algorithm that favors searching the variables with higher importance for solving LMOPs. The importance of each variable to the target LMOP is quantified, and all variables are then categorized into different groups based on their importance. The variable groups with higher importance are allocated more computational resources in DE. In this way, the proposed method can efficiently generate offspring in a low-dimensional search subspace formed by the more important variables, which significantly speeds up convergence. During the evolutionary process, this search subspace for DE is expanded gradually, which strikes a good balance between exploration and exploitation in tackling LMOPs. Finally, the experiments validate that the proposed algorithm performs better than several state-of-the-art evolutionary algorithms on various benchmark LMOPs.
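The idea of quantifying variable importance and concentrating the DE search on a low-dimensional subspace can be sketched as follows. The one-at-a-time sensitivity score is an assumed stand-in for the paper's importance measure, and a plain DE/rand/1 mutation is restricted to the top group; all names are illustrative.

```python
import numpy as np

def variable_importance(f, x, eps=0.1):
    """Score each variable by the objective change from a small
    one-at-a-time perturbation (an assumed importance measure)."""
    base = f(x)
    imp = np.empty(len(x))
    for i in range(len(x)):
        y = x.copy()
        y[i] += eps
        imp[i] = abs(f(y) - base)
    return imp

def de_rand1_on_subspace(pop, subspace, F=0.5, rng=None):
    """DE/rand/1 mutation applied only to the selected (important)
    variables; all other variables are inherited unchanged."""
    rng = rng if rng is not None else np.random.default_rng()
    children = pop.copy()
    for i in range(len(pop)):
        r1, r2, r3 = rng.choice(len(pop), 3, replace=False)
        children[i, subspace] = (pop[r1, subspace]
                                 + F * (pop[r2, subspace] - pop[r3, subspace]))
    return children

# Objective where the first 5 of 100 variables dominate the outcome.
f = lambda x: 100.0 * (x[:5] ** 2).sum() + (x[5:] ** 2).sum()
rng = np.random.default_rng(3)
imp = variable_importance(f, rng.uniform(-1, 1, 100))
top = np.argsort(imp)[-5:]             # the highest-importance group
pop = rng.uniform(-1, 1, (20, 100))
kids = de_rand1_on_subspace(pop, top, rng=rng)
```

Mutating only the important subspace means offspring differ from their parents in at most a handful of coordinates, which is what makes the search effectively low-dimensional.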
9. Liu S, Wang H, Yao W. A surrogate-assisted evolutionary algorithm with hypervolume triggered fidelity adjustment for noisy multiobjective integer programming. Applied Soft Computing 2022. [DOI: 10.1016/j.asoc.2022.109263]
10. Bilodeau C, Jin W, Jaakkola T, Barzilay R, Jensen KF. Generative models for molecular discovery: Recent advances and challenges. WIREs Computational Molecular Science 2022. [DOI: 10.1002/wcms.1608]
Affiliation(s)
- Camille Bilodeau: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Wengong Jin: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Tommi Jaakkola: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Regina Barzilay: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Klavs F. Jensen: Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
11. Ma L, Ma Y, Lin Q, Ji J, Coello CAC, Gong M. SNEGAN: Signed Network Embedding by Using Generative Adversarial Nets. IEEE Transactions on Emerging Topics in Computational Intelligence 2022. [DOI: 10.1109/tetci.2020.3035937]
12. Lin J, He C, Cheng R. Adaptive dropout for high-dimensional expensive multiobjective optimization. Complex & Intelligent Systems 2022. [DOI: 10.1007/s40747-021-00362-5]
Abstract
Various works have been proposed to solve expensive multiobjective optimization problems (EMOPs) using surrogate-assisted evolutionary algorithms (SAEAs) in recent decades. However, most existing methods focus on EMOPs with fewer than 30 decision variables, since a large number of training samples are required to build an accurate surrogate model for high-dimensional EMOPs, which is unrealistic for expensive multiobjective optimization. To address this issue, we propose an SAEA with an adaptive dropout mechanism. Specifically, this mechanism takes advantage of the statistical differences between different solution sets in the decision space to guide the selection of some crucial decision variables. A new infill criterion is then proposed to optimize the selected decision variables with the assistance of surrogate models. Moreover, the optimized decision variables are extended to new full-length solutions, and the new candidate solutions are evaluated using the expensive functions to update the archive. The proposed algorithm is tested on different benchmark problems with up to 200 decision variables and compared with several state-of-the-art SAEAs. The experimental results demonstrate the promising performance and computational efficiency of the proposed algorithm in high-dimensional expensive multiobjective optimization.
14. Fan Y, Zhou Q, Zhang W, Bao S, Shen J. Determining learning direction via multi-controller model for stably searching generative adversarial networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.070]
15. Song PC, Chu SC, Pan JS, Yang H. Simplified Phasmatodea population evolution algorithm for optimization. Complex & Intelligent Systems 2021. [DOI: 10.1007/s40747-021-00402-0]
Abstract
This work proposes a population evolution algorithm, called the Phasmatodea population evolution algorithm (PPE), that deals with optimization problems based on the evolution characteristics of the Phasmatodea (stick insect) population. The PPE imitates the characteristics of convergent evolution, path dependence, population growth, and competition in the evolution of the stick insect population in nature: the stick insect population tends toward the nearest dominant population during evolution, and a favorable evolution trend is more likely to be inherited by the next generation. This work combines population growth and competition models to achieve this process. The implemented PPE has been tested and analyzed on 30 benchmark functions, where it performs better than similar algorithms, and it also obtains good results on several engineering optimization problems.
17. Multi-objective Combinatorial Generative Adversarial Optimization and Its Application in Crowdsensing. Lecture Notes in Computer Science 2020. [PMCID: PMC7354827] [DOI: 10.1007/978-3-030-53956-6_38]
Abstract
As the number of decision variables in multi-objective combinatorial optimization problems increases, traditional evolutionary algorithms perform worse because generating offspring by a stochastic mechanism becomes inefficient. To address this issue, a multi-objective combinatorial generative adversarial optimization method is proposed that makes the algorithm capable of learning the implicit information embodied in the evolution process. After the optimal nondominated solutions in the current generation are classified as real data, a generative adversarial network (GAN) is trained on them to learn their distribution information. The Adam algorithm, which employs an adaptive learning rate for each parameter, is introduced to update the main parameters of the GAN. Following that, an offspring reproduction strategy is designed to form a new feasible solution from the decimal output of the generator. To further verify the rationality of the proposed method, it is applied to the participant selection problem of crowdsensing, and the detailed offspring reproduction strategy is given. Experimental results for crowdsensing systems with various tasks and participants show that the proposed algorithm outperforms the others in both convergence and distribution.