1. Lu Y, Xiao M, Wu X, Karimi HR, Xie X, Cao J, Zheng WX. Tipping prediction of a class of large-scale radial-ring neural networks. Neural Netw 2025;181:106820. PMID: 39490026. DOI: 10.1016/j.neunet.2024.106820.
Abstract
Understanding the emergence and evolution of collective dynamics in large-scale neural networks remains a complex challenge. This paper seeks to address this gap by applying dynamical systems theory, with a particular focus on tipping mechanisms. First, we introduce a novel (n+mn)-scale radial-ring neural network and employ Coates' flow graph topological approach to derive the characteristic equation of the linearized network. Second, by deriving stability conditions and predicting the tipping point with an algebraic approach based on the integral element concept, we identify critical factors such as the synaptic transmission delay, the self-feedback coefficient, and the network topology. Finally, we validate the methodology's effectiveness in predicting the tipping point. The findings reveal that increased synaptic transmission delay can induce and amplify periodic oscillations. Additionally, the self-feedback coefficient and the network topology influence the onset of tipping points. Moreover, the choice of activation function affects both the number of equilibrium solutions and the convergence speed of the neural network. Lastly, we demonstrate that the proposed large-scale radial-ring neural network exhibits stronger robustness than lower-scale networks with a single topology. The results provide a comprehensive depiction of the dynamics observed in large-scale neural networks under various combinations of these factors.
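The delay-induced onset of oscillations described in this abstract can be illustrated with a toy delay differential equation, not the paper's (n+mn)-scale model: a single neuron with delayed self-feedback crosses a Hopf-type tipping point as the transmission delay grows. The feedback gain, history, and delay values below are illustrative assumptions.

```python
# Toy sketch (not the paper's model): one neuron with delayed self-feedback,
# x'(t) = -x(t) - 2*tanh(x(t - tau)).  Linearizing gives the characteristic
# equation lambda = -1 - 2*exp(-lambda*tau); a Hopf bifurcation occurs at
# tau* = 2*pi/(3*sqrt(3)) ~= 1.21, so the equilibrium is stable for smaller
# delays and sustained oscillations appear for larger ones.
import math

def simulate(tau, t_end=100.0, dt=0.01):
    """Forward-Euler integration of the delay differential equation."""
    n_delay = int(round(tau / dt))
    x = [0.1] * (n_delay + 1)          # constant history x(t) = 0.1 for t <= 0
    for _ in range(int(t_end / dt)):
        x_now, x_lag = x[-1], x[-1 - n_delay]
        x.append(x_now + dt * (-x_now - 2.0 * math.tanh(x_lag)))
    return x

def late_amplitude(traj, frac=0.2):
    """Peak |x| over the last fraction of the trajectory."""
    tail = traj[int(len(traj) * (1 - frac)):]
    return max(abs(v) for v in tail)

small = late_amplitude(simulate(tau=0.5))   # below tau*: decays to equilibrium
large = late_amplitude(simulate(tau=2.0))   # above tau*: sustained oscillation
```

Comparing `small` and `large` reproduces the qualitative finding that increasing the transmission delay induces periodic oscillations.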
Affiliation(s)
- Yunxiang Lu: College of Automation & College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
- Min Xiao: College of Automation & College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
- Xiaoqun Wu: College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China.
- Hamid Reza Karimi: Department of Mechanical Engineering, Politecnico di Milano, Milan 20156, Italy.
- Xiangpeng Xie: Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
- Jinde Cao: School of Mathematics, Southeast University, Nanjing 210096, China.
- Wei Xing Zheng: School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, NSW 2751, Australia.
2. Xue ZF, Wang ZJ, Zhan ZH, Kwong S, Zhang J. Neural Network-Based Knowledge Transfer for Multitask Optimization. IEEE Trans Cybern 2024;54:7541-7554. PMID: 39383079. DOI: 10.1109/tcyb.2024.3469371.
Abstract
Knowledge transfer (KT) is crucial for optimizing tasks in evolutionary multitask optimization (EMTO). However, most existing KT methods achieve only superficial KT and lack the ability to deeply mine the similarities or relationships among different tasks. This limitation may result in negative transfer, thereby degrading KT performance. As KT efficiency strongly depends on the similarities of tasks, this article proposes a neural network (NN)-based KT (NNKT) method to analyze the similarities of tasks and obtain transfer models for information prediction between different tasks for high-quality KT. First, NNKT collects and pairs the solutions of multiple tasks and trains NNs to obtain the transfer models between tasks. Second, the obtained NNs transfer knowledge by predicting new promising solutions. Meanwhile, a simple adaptive strategy is developed to find a suitable population size that satisfies the varying search requirements during the evolution process. Comparison of the experimental results between the proposed NN-based multitask optimization (NNMTO) algorithm and some state-of-the-art multitask algorithms on the IEEE Congress on Evolutionary Computation (IEEE CEC) 2017 and IEEE CEC2022 benchmarks demonstrates the efficiency and effectiveness of NNMTO. Moreover, NNKT can be seamlessly applied to other EMTO algorithms to further enhance their performance. Finally, NNMTO is applied to a real-world multitask rover navigation problem to further demonstrate its applicability.
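The transfer step described above can be sketched with two toy one-dimensional tasks and an affine least-squares map standing in for the paper's neural-network transfer model; the task definitions and populations are invented for illustration.

```python
# Hedged sketch of the knowledge-transfer idea: pair good solutions from a
# source and a target task by fitness rank, fit a tiny transfer model between
# them (an affine map fit by least squares, standing in for the paper's NN),
# and use it to predict a promising target-task solution.
# f_src and f_tgt are toy stand-ins, not from the paper.

def fit_affine(xs, ys):
    """Least-squares fit of y ~= w*x + b for scalar data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

f_src = lambda x: (x - 1.0) ** 2          # source task, optimum at x = 1
f_tgt = lambda x: (x - 3.0) ** 2          # target task, optimum at x = 3

# Rank-paired solutions from each task's population (sorted by fitness).
src_pop = sorted([0.2, 0.9, 1.4, 2.0, 0.6], key=f_src)
tgt_pop = sorted([2.1, 2.8, 3.5, 4.0, 2.5], key=f_tgt)
w, b = fit_affine(src_pop, tgt_pop)

# Transfer the best source solution as a candidate for the target task.
candidate = w * src_pop[0] + b
```

The transferred `candidate` lands near the target task's optimum even though it was constructed only from the source population and the learned map.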
3. Lu Y, Xiao M, He J, Wang Z. Stability and Bifurcation Exploration of Delayed Neural Networks With Radial-Ring Configuration and Bidirectional Coupling. IEEE Trans Neural Netw Learn Syst 2024;35:10326-10337. PMID: 37022404. DOI: 10.1109/tnnls.2023.3240403.
Abstract
For decades, studying the dynamic performance of artificial neural networks (ANNs) has been widely considered a good way to gain deeper insight into actual neural networks. However, most ANN models focus on a finite number of neurons and a single topology, which is inconsistent with actual neural networks composed of thousands of neurons and sophisticated topologies, so a discrepancy between theory and practice remains. In this article, not only is a novel construction of a class of delayed neural networks with radial-ring configuration and bidirectional coupling proposed, but an effective analytical approach to the dynamic performance of large-scale neural networks with a cluster of topologies is also developed. First, Coates' flow diagram is applied to acquire the characteristic equation of the system, which contains multiple exponential terms. Second, by means of the holistic element idea, the sum of the neuron synapse transmission delays is regarded as the bifurcation argument to investigate the stability of the zero equilibrium point and the existence of Hopf bifurcation. Finally, multiple sets of numerical simulations are used to confirm the conclusions. The simulation results show that an increase in transmission delay can play a leading role in the generation of Hopf bifurcation. Meanwhile, the number of neurons and the self-feedback coefficient also play significant roles in the appearance of periodic oscillations.
4. Yang C, Ding J, Jin Y, Chai T. A Data Stream Ensemble Assisted Multifactorial Evolutionary Algorithm for Offline Data-Driven Dynamic Optimization. Evol Comput 2023;31:433-458. PMID: 37155647. DOI: 10.1162/evco_a_00332.
Abstract
Existing work on offline data-driven optimization mainly focuses on problems in static environments, and little attention has been paid to problems in dynamic environments. Offline data-driven optimization in dynamic environments is challenging because the distribution of the collected data varies over time, requiring surrogate models and optimal solutions to be tracked over time. This paper proposes a knowledge-transfer-based data-driven optimization algorithm to address these issues. First, an ensemble learning method is adopted to train surrogate models that leverage the knowledge of data from historical environments while adapting to new environments. Specifically, given data in a new environment, a model is constructed with the new data, and the preserved models of historical environments are further trained with the new data. These models are then treated as base learners and combined into an ensemble surrogate model. After that, all base learners and the ensemble surrogate model are simultaneously optimized in a multitask environment to find optimal solutions for the real fitness functions. In this way, the optimization tasks in previous environments can be used to accelerate the tracking of the optimum in the current environment. Since the ensemble model is the most accurate surrogate, we assign more individuals to the ensemble surrogate than to its base learners. Empirical results on six dynamic optimization benchmark problems demonstrate the effectiveness of the proposed algorithm compared with four state-of-the-art offline data-driven optimization algorithms. Code is available at https://github.com/Peacefulyang/DSE_MFS.git.
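The per-environment base learners and their combination into an ensemble surrogate can be sketched as follows; the 1-nearest-neighbour base surrogate and the drifting toy objective are illustrative assumptions, not the paper's learners or benchmarks.

```python
# Hedged sketch of the ensemble-surrogate idea: one cheap base surrogate per
# historical environment (1-NN regression standing in for the paper's base
# learners), combined by averaging into an ensemble model for the current
# environment.  The drifting objective f_t(x) = (x - c_t)^2 is a toy stand-in.

def make_knn_surrogate(samples):
    """samples: list of (x, f(x)); returns a 1-NN prediction function."""
    def predict(x):
        return min(samples, key=lambda s: abs(s[0] - x))[1]
    return predict

def ensemble_predict(models, x):
    return sum(m(x) for m in models) / len(models)

# Three environments whose optimum c_t drifts over time.
history = []
for c in (1.0, 1.2, 1.4):
    xs = [i * 0.25 for i in range(13)]         # offline data in [0, 3]
    history.append(make_knn_surrogate([(x, (x - c) ** 2) for x in xs]))

# Optimizing the ensemble tracks the drifting optimum across environments.
best_x = min((i * 0.01 for i in range(301)),
             key=lambda x: ensemble_predict(history, x))
```

The ensemble's minimizer falls between the historical optima, which is the averaging behaviour the base-learner combination is meant to provide.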
Affiliation(s)
- Cuie Yang: State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China
- Jinliang Ding: State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China
- Yaochu Jin: Bielefeld University, 33619 Bielefeld, Germany; State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China
- Tianyou Chai: State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China
5. Wu SH, Zhan ZH, Tan KC, Zhang J. Transferable Adaptive Differential Evolution for Many-Task Optimization. IEEE Trans Cybern 2023;53:7295-7308. PMID: 37022822. DOI: 10.1109/tcyb.2023.3234969.
Abstract
The evolutionary multitask optimization (EMTO) algorithm is a promising approach to solving many-task optimization problems (MaTOPs), in which similarity measurement and knowledge transfer (KT) are two key issues. Many existing EMTO algorithms estimate the similarity of population distributions to select a set of similar tasks and then perform KT by simply mixing individuals among the selected tasks. However, these methods may be less effective when the global optima of the tasks differ greatly from each other. Therefore, this article proposes to consider a new kind of similarity between tasks, namely shift invariance: two tasks are similar after a linear shift transformation of both the search space and the objective space. To identify and utilize the shift invariance between tasks, a two-stage transferable adaptive differential evolution (TRADE) algorithm is proposed. In the first evolution stage, a task representation strategy is proposed to represent each task by a vector that embeds the evolution information. Then, a task grouping strategy is proposed to place similar (i.e., shift-invariant) tasks in the same group and dissimilar tasks in different groups. In the second evolution stage, a novel successful-evolution-experience transfer method is proposed to adaptively utilize suitable parameters by transferring successful parameters among similar tasks within the same group. Comprehensive experiments are carried out on two representative MaTOP benchmarks with a total of 16 instances and a real-world application. The comparative results show that the proposed TRADE is superior to some state-of-the-art EMTO algorithms and single-task optimization algorithms.
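The shift-invariance notion above can be made concrete with two toy tasks: one is a linear shift of the other in both the search space and the objective space, f2(x) = f1(x - s) + c. The shift-estimation rule and probe-based check below are illustrative stand-ins for the paper's representation-and-grouping machinery.

```python
# Hedged sketch of "shift invariance": two tasks count as similar if one is a
# linear shift of the other in both spaces, f2(x) = f1(x - s) + c.
# The tasks and the estimation rule are toy stand-ins, not the paper's method.

f1 = lambda x: (x - 1.0) ** 2            # task 1, optimum at 1, value 0
f2 = lambda x: (x - 4.0) ** 2 + 5.0      # task 2: f2(x) = f1(x - 3) + 5

def estimate_shift(best1, best2, task1, task2):
    """Estimate (s, c) from each task's best-known solution."""
    s = best2 - best1
    c = task2(best2) - task1(best1)
    return s, c

def shift_invariant(task1, task2, s, c, probes, tol=1e-9):
    """Check f2(x) == f1(x - s) + c on a few probe points."""
    return all(abs(task2(x) - (task1(x - s) + c)) <= tol for x in probes)

s, c = estimate_shift(1.0, 4.0, f1, f2)
same_group = shift_invariant(f1, f2, s, c, probes=[0.0, 2.5, 7.0])
```

Tasks that pass this check would be grouped together, so that parameters successful on one can be transferred to the other.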
6. Han H, Liu H, Yang C, Qiao J. Transfer Learning Algorithm With Knowledge Division Level. IEEE Trans Neural Netw Learn Syst 2023;34:8602-8616. PMID: 35230958. DOI: 10.1109/tnnls.2022.3151646.
Abstract
One of the major challenges of transfer learning algorithms is the domain drifting problem, in which the knowledge of the source scene is inappropriate for the task of the target scene. To solve this problem, a transfer learning algorithm with knowledge division level (KDTL) is proposed to subdivide the knowledge of the source scene and leverage it according to different drifting degrees. The main properties of KDTL are threefold. First, a comparative evaluation mechanism is developed to detect and subdivide the knowledge into three kinds: the ineffective knowledge, the usable knowledge, and the efficient knowledge. The ineffective and usable knowledge can then be identified to avoid the negative transfer problem. Second, an integrated framework is designed to prune the ineffective knowledge in the elastic layer, reconstruct the usable knowledge in the refined layer, and learn the efficient knowledge in the leveraged layer. The efficient knowledge can then be acquired to improve the learning performance. Third, the proposed KDTL is theoretically analyzed in different phases, and its convergence property, error bound, and computational complexity are provided to support successful applications. Finally, the proposed KDTL is tested on several benchmark problems and some real-world problems. The experimental results demonstrate that KDTL achieves significant improvement over some state-of-the-art algorithms.
7. Bin W. A Novel Supply Chain Multi-level Inventory Model Based on Improved PSO Algorithm. 2023 8th International Conference on Communication and Electronics Systems (ICCES) 2023. DOI: 10.1109/icces57224.2023.10192672.
Affiliation(s)
- Wang Bin: Shandong Vocational College of Light Industry, Shandong, China
8. Li JY, Zhan ZH, Xu J, Kwong S, Zhang J. Surrogate-Assisted Hybrid-Model Estimation of Distribution Algorithm for Mixed-Variable Hyperparameters Optimization in Convolutional Neural Networks. IEEE Trans Neural Netw Learn Syst 2023;34:2338-2352. PMID: 34543206. DOI: 10.1109/tnnls.2021.3106399.
Abstract
The performance of a convolutional neural network (CNN) heavily depends on its hyperparameters. However, finding a suitable hyperparameter configuration is difficult and computationally expensive due to three issues: 1) the mixed-variable problem of different types of hyperparameters; 2) the large-scale search space for finding optimal hyperparameters; and 3) the expensive computational cost of evaluating candidate configurations. Therefore, this article focuses on these three issues and proposes a novel estimation of distribution algorithm (EDA) for efficient hyperparameter optimization, with three major contributions in the algorithm design. First, a hybrid-model EDA is proposed to efficiently deal with the mixed-variable difficulty. The proposed algorithm uses a mixed-variable encoding scheme to encode the hyperparameters and adopts an adaptive hybrid-model learning (AHL) strategy to efficiently optimize the mixed variables. Second, an orthogonal initialization (OI) strategy is proposed to efficiently deal with the challenge of the large-scale search space. Third, a surrogate-assisted multi-level evaluation (SME) method is proposed to reduce the expensive computational cost. Based on the above, the proposed algorithm is named surrogate-assisted hybrid-model EDA (SHEDA). For experimental studies, SHEDA is verified on widely used classification benchmark problems and compared with various state-of-the-art methods. Moreover, a case study on aortic dissection (AD) diagnosis is carried out to evaluate its performance. Experimental results show that SHEDA is very effective and efficient for hyperparameter optimization: it finds satisfactory hyperparameter configurations for CIFAR10, CIFAR100, and AD diagnosis with only 0.58, 0.97, and 1.18 GPU days, respectively.
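The mixed-variable probability model at the heart of an EDA can be sketched as follows: a Gaussian for a continuous hyperparameter and a frequency table for a categorical one, both refit on the elites each generation. The toy objective, the specific distributions, and all constants are assumptions of this sketch, not SHEDA's actual hybrid model.

```python
# Hedged sketch of a mixed-variable EDA step: continuous hyperparameters are
# modelled with a Gaussian, categorical ones with a frequency table, and both
# models are refit on the elite samples each generation.  The toy objective
# scores a (learning_rate, activation) pair and stands in for real CNN training.
import random
import statistics

random.seed(1)
ACTIVATIONS = ["relu", "tanh", "sigmoid"]

def objective(lr, act):
    """Toy score: best at lr = 0.1 with 'relu' (lower is better)."""
    return (lr - 0.1) ** 2 + {"relu": 0.0, "tanh": 0.3, "sigmoid": 0.6}[act]

mu, sigma = 0.5, 0.3                      # Gaussian model for the learning rate
probs = {a: 1 / 3 for a in ACTIVATIONS}   # categorical model for the activation

for _ in range(20):                       # EDA generations
    pop = [(random.gauss(mu, sigma),
            random.choices(ACTIVATIONS, [probs[a] for a in ACTIVATIONS])[0])
           for _ in range(30)]
    elites = sorted(pop, key=lambda ind: objective(*ind))[:10]
    mu = statistics.fmean(lr for lr, _ in elites)
    sigma = max(statistics.pstdev([lr for lr, _ in elites]), 1e-3)
    probs = {a: (sum(act == a for _, act in elites) + 1e-6) / 10
             for a in ACTIVATIONS}

best_lr, best_act = min(pop, key=lambda ind: objective(*ind))
```

Both models concentrate on the good region: the Gaussian mean drifts toward the optimal learning rate while the categorical table collapses onto the best activation.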
9. Li JY, Du KJ, Zhan ZH, Wang H, Zhang J. Distributed Differential Evolution With Adaptive Resource Allocation. IEEE Trans Cybern 2023;53:2791-2804. PMID: 35286273. DOI: 10.1109/tcyb.2022.3153964.
Abstract
Distributed differential evolution (DDE) is an efficient paradigm that adopts multiple populations to cooperatively solve complex optimization problems. However, how the fitness evaluation (FE) budget is allocated among the distributed populations can greatly influence the optimization ability of DDE. Therefore, this article proposes a novel three-layer DDE framework with adaptive resource allocation (DDE-ARA), comprising the algorithm layer for evolving various differential evolution (DE) populations, the dispatch layer for dispatching individuals in the DE populations to different distributed machines, and the machine layer for accommodating distributed computers. Within this framework, three novel methods are further proposed. First, a general performance indicator (GPI) method is proposed to measure the performance of different DEs. Second, based on the GPI, an FE allocation (FEA) method is proposed to adaptively reallocate the FE budget from poorly performing DEs to well-performing DEs for better search efficiency. In this way, the GPI and FEA methods achieve the ARA in the algorithm layer. Third, a load balance strategy is proposed in the dispatch layer to balance the FE burden across the computers in the machine layer, improving load balance and algorithm speedup. Moreover, theoretical analyses are provided to show why the proposed DDE-ARA framework can be effective and to discuss the lower bound of its optimization error. Extensive experiments are conducted on all 30 functions of the CEC 2014 competition at 10, 30, 50, and 100 dimensions, with some state-of-the-art DDE algorithms adopted for comparison. The results show the great effectiveness and efficiency of the proposed framework and the three novel methods.
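The budget-reallocation idea can be sketched with a proportional rule: each optimizer's recent improvement per FE plays the role of the performance indicator, and the next round's budget is split in proportion to it. The improvement metric and the budget floor are assumptions of this sketch, not the paper's GPI/FEA definitions.

```python
# Hedged sketch of adaptive FE allocation: split the next round's fitness-
# evaluation budget across optimizers in proportion to their recent
# improvement rates (a stand-in for the paper's general performance indicator).

def allocate_budget(improvements, total_fes, floor=0.05):
    """Split total_fes proportionally to per-optimizer improvement rates.

    Every optimizer keeps at least `floor` of the budget so a temporarily
    poor performer is not starved completely (an assumption of this sketch).
    """
    n = len(improvements)
    base = total_fes * floor
    remaining = total_fes - base * n
    total_imp = sum(improvements)
    if total_imp == 0:                      # no signal: split evenly
        return [total_fes / n] * n
    return [base + remaining * imp / total_imp for imp in improvements]

# Three DE populations; the second is improving fastest this round.
budget = allocate_budget([0.02, 0.10, 0.04], total_fes=1000)
```

The fastest-improving population receives the largest share, which is the reallocation behaviour the FEA method is designed to achieve.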
10. Huang PQ, Wang Y, Wang K, Zhang Q. Combining Lyapunov Optimization With Evolutionary Transfer Optimization for Long-Term Energy Minimization in IRS-Aided Communications. IEEE Trans Cybern 2023;53:2647-2657. PMID: 35533155. DOI: 10.1109/tcyb.2022.3168839.
Abstract
This article studies an intelligent reflecting surface (IRS)-aided communication system under time-varying channels and stochastic data arrivals. In this system, we jointly optimize the phase-shift coefficient and the transmit power in sequential time slots to minimize the long-term energy consumption of all mobile devices while ensuring queue stability. Due to the dynamic environment, ensuring queue stability is challenging, and real-time decisions must be made in each short time slot. To this end, we propose a method, called LETO, that combines Lyapunov optimization with evolutionary transfer optimization (ETO) to solve the above optimization problem. LETO first adopts Lyapunov optimization to decouple the long-term stochastic optimization problem into deterministic optimization problems in sequential time slots. As a result, it can ensure queue stability, since the deterministic optimization problem in each time slot does not involve future information. After that, LETO develops an evolutionary transfer method to solve the optimization problem in each time slot. Specifically, we first define a metric to identify the optimization problems in past time slots that are similar to the one in the current time slot, and then transfer their optimal solutions to construct a high-quality initial population in the current time slot. Since ETO effectively accelerates the search, real-time decisions can be made in each short time slot. Experimental studies verify the effectiveness of LETO by comparison with other algorithms.
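The Lyapunov decoupling above can be sketched with a single queue and a drift-plus-penalty rule: each slot, a deterministic problem trades transmit energy against queue backlog with no future information. The arrival process, service model, and the weight V are toy assumptions, not the paper's system model.

```python
# Hedged sketch of the Lyapunov decoupling: with data queue Q(t), each slot
# solves the deterministic drift-plus-penalty problem
#   minimize  V * power - Q(t) * service(power)
# which needs no future information.  Arrivals, the rate model, and V are
# invented for illustration.

def service(power):
    """Toy rate model: diminishing returns in transmit power."""
    return 2.0 * power / (1.0 + power)

def best_power(q, v, candidates):
    """Per-slot deterministic problem: trade energy against queue backlog."""
    return min(candidates, key=lambda p: v * p - q * service(p))

arrivals = [0.8, 1.2, 0.5, 1.0, 0.9, 1.1, 0.7, 1.0] * 25   # 200 slots
powers = [i * 0.1 for i in range(21)]                       # 0.0 .. 2.0
q, v = 0.0, 2.0
backlog = []
for a in arrivals:
    p = best_power(q, v, powers)
    q = max(q + a - service(p), 0.0)        # Lyapunov queue update
    backlog.append(q)
```

The backlog stays bounded: the controller spends no power while the queue is short and serves harder as it grows, which is exactly the stability guarantee the decoupling provides.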
11. Liu XF, Zhang J, Wang J. Cooperative Particle Swarm Optimization With a Bilevel Resource Allocation Mechanism for Large-Scale Dynamic Optimization. IEEE Trans Cybern 2023;53:1000-1011. PMID: 35976831. DOI: 10.1109/tcyb.2022.3193888.
Abstract
Although cooperative coevolutionary algorithms have been developed for large-scale dynamic optimization via subspace decomposition, they still face difficulties in reacting to environmental changes in the presence of multiple peaks in the fitness functions and uneven subproblems. The resource allocation mechanisms among subproblems in the existing algorithms rely mainly on the fitness improvements already made, not on potential ones. On the one hand, some hard subproblems lack sufficient computing resources to achieve their potential fitness improvements. On the other hand, the existing algorithms waste computing resources by trying to find most of the local optima of problems. In this article, we propose a cooperative particle swarm optimization algorithm that addresses these issues by introducing a bilevel balanceable resource allocation mechanism. A search strategy in the lower level selects promising solutions from an archive, based on solution diversity and quality, to identify new peaks in every subproblem. A resource allocation strategy in the upper level balances the coevolution of multiple subproblems by referring to their historical improvements, and more computing resources are allocated to subproblems that currently perform poorly but are expected to make great fitness improvements. Experimental results demonstrate that the proposed algorithm is competitive with state-of-the-art algorithms in terms of objective function values and response efficiency with respect to environmental changes.
12. Liu SC, Zhan ZH, Tan KC, Zhang J. A Multiobjective Framework for Many-Objective Optimization. IEEE Trans Cybern 2022;52:13654-13668. PMID: 34398770. DOI: 10.1109/tcyb.2021.3082200.
Abstract
Many-objective optimization problems (MaOPs) often face the difficulty of maintaining good diversity and convergence in the search process due to the high-dimensional objective space. To address this issue, this article proposes a novel multiobjective framework for many-objective optimization (Mo4Ma), which transforms the many-objective space into a multiobjective space. First, the many objectives are transformed into two indicative objectives of convergence and diversity. Second, a clustering-based sequential selection strategy is put forward in the transformed multiobjective space to guide the evolutionary search process. Specifically, the selection is circularly performed on the clustered subpopulations to maintain population diversity. In each round of selection, solutions with good performance in the transformed multiobjective space are chosen to improve the overall convergence. Mo4Ma is a generic framework into which any type of evolutionary computation algorithm can be incorporated. In this article, differential evolution (DE) is adopted as the optimizer in the Mo4Ma framework, resulting in the Mo4Ma-DE algorithm. Experimental results show that Mo4Ma-DE can obtain well-converged and widely distributed Pareto solutions along with the many-objective Pareto sets of the original MaOPs. Compared with seven state-of-the-art MaOP algorithms, the proposed Mo4Ma-DE shows strong competitiveness and generally better performance.
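The many-to-multi transformation can be sketched by collapsing each solution's many objective values into two indicators. The particular indicator choices here, distance to the ideal point for convergence and distance to the nearest other solution for diversity, are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch of the many-to-multi transformation: each solution of a
# 5-objective problem is mapped to two indicative objectives, a convergence
# indicator (distance to the ideal point) and a diversity indicator (distance
# to the nearest other solution in objective space).
import math

def convergence(obj, ideal):
    return math.dist(obj, ideal)

def diversity(obj, others):
    return min(math.dist(obj, o) for o in others)

# Four solutions of a 5-objective minimization problem (objective vectors).
pop = [(0.1, 0.2, 0.1, 0.3, 0.2),
       (0.9, 0.8, 0.7, 0.9, 0.8),
       (0.1, 0.2, 0.1, 0.3, 0.2),     # duplicate of the first solution
       (0.5, 0.4, 0.6, 0.5, 0.4)]
ideal = (0.0,) * 5

transformed = [(convergence(p, ideal),
                diversity(p, [q for j, q in enumerate(pop) if j != i]))
               for i, p in enumerate(pop)]
```

Note how the duplicate solution gets a diversity score of zero, so a selection operating in the transformed two-objective space would penalize it, which is the behaviour that maintains population diversity.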
13. Liu XF, Zhan ZH, Zhang J. Resource-Aware Distributed Differential Evolution for Training Expensive Neural-Network-Based Controller in Power Electronic Circuit. IEEE Trans Neural Netw Learn Syst 2022;33:6286-6296. PMID: 33961568. DOI: 10.1109/tnnls.2021.3075205.
Abstract
The neural-network (NN)-based control method is an emerging and promising technique for controller design in a power electronic circuit (PEC). However, the optimization of NN-based controllers (NNCs) faces significant challenges in two respects. First, the search space of the NNC optimization problem is so complex that the global optimization ability of existing algorithms still needs to be improved. Second, the training process of the NNC parameters is computationally expensive and requires a long execution time. Thus, in this article, we develop a powerful evolutionary computation-based algorithm to find high-quality solutions and reduce computational time. First, the differential evolution (DE) algorithm is adopted because it is a powerful global optimizer for complex optimization problems, which helps overcome premature convergence to local optima and train the NNC parameters well. Second, to reduce the computational time, DE is extended to distributed DE (DDE) by dispatching all the individuals to different distributed computing resources for parallel computing. Moreover, a resource-aware strategy (RAS) is designed to further utilize the resources efficiently by adaptively dispatching individuals to resources according to their real-time performance, which simultaneously accounts for the computing ability and load state of each resource. Experimental results show that, compared with some other typical evolutionary algorithms, the proposed algorithm obtains significantly better solutions within a shorter computational time.
14. Chen ZG, Zhan ZH, Kwong S, Zhang J. Evolutionary Computation for Intelligent Transportation in Smart Cities: A Survey. IEEE Comput Intell Mag 2022. DOI: 10.1109/mci.2022.3155330.
15.
16. Li JY, Zhan ZH, Zhang J. Evolutionary Computation for Expensive Optimization: A Survey. Machine Intelligence Research 2022. PMCID: PMC8777172. DOI: 10.1007/s11633-022-1317-4.
Abstract
Expensive optimization problems (EOPs) widely exist in various significant real-world applications. However, evaluating candidate solutions of an EOP incurs expensive or even unaffordable costs, which makes it hard for an algorithm to find a satisfactory solution affordably. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of Things, and the big-data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches. Among various optimization approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used to solve EOPs efficiently in the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review them in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper aims to provide a comprehensive survey showing why and how EC can solve EOPs efficiently. To this end, the paper first analyzes the total optimization cost of EC in solving EOPs. Then, based on this analysis, three promising research directions are pointed out: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. Note that, to the best of our knowledge, this paper is the first to outline possible directions for efficiently solving EOPs by analyzing the total expensive cost. On this basis, existing works are reviewed comprehensively via a taxonomy with four parts, comprising the above three research directions and a real-world application part. Moreover, some future research directions are also discussed. It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
17. Zhou Q, Zhao D, Shuai B, Li Y, Williams H, Xu H. Knowledge Implementation and Transfer With an Adaptive Learning Network for Real-Time Power Management of the Plug-in Hybrid Vehicle. IEEE Trans Neural Netw Learn Syst 2021;32:5298-5308. PMID: 34260359. DOI: 10.1109/tnnls.2021.3093429.
Abstract
Essential decision-making tasks such as power management in future vehicles will benefit from the development of artificial intelligence technology for safe and energy-efficient operations. To develop the technique of using neural networks and deep learning in energy management of the plug-in hybrid vehicle and evaluate its advantages, this article proposes a new adaptive learning network that incorporates a deep deterministic policy gradient (DDPG) network with an adaptive neuro-fuzzy inference system (ANFIS) network. First, the ANFIS network is built using a new global K-fold fuzzy learning (GKFL) method for real-time implementation of the offline dynamic programming result. Then, the DDPG network is developed to regulate the input of the ANFIS network with the real-world reinforcement signal. The ANFIS and DDPG networks are integrated to maximize the control utility (CU), which is a function of the vehicle's energy efficiency and the battery state-of-charge. Experimental studies are conducted to verify the performance and robustness of the DDPG-ANFIS network. The results show that the studied vehicle with the DDPG-ANFIS network achieves 8% higher CU than with the MATLAB ANFIS toolbox. In five simulated real-world driving conditions, the DDPG-ANFIS network increased the maximum mean CU value by 138% over the ANFIS-only network and by 5% over the DDPG-only network.
18.
Abstract
In the last few years, intensive research has been done to enhance artificial intelligence (AI) using optimization techniques. In this paper, we present an extensive review of artificial neural network (ANN)-based optimization algorithm techniques, covering some famous optimization techniques, e.g., the genetic algorithm (GA), particle swarm optimization (PSO), the artificial bee colony (ABC), and the backtracking search algorithm (BSA), as well as some modern techniques, e.g., the lightning search algorithm (LSA) and the whale optimization algorithm (WOA), among many others. The entire set of such techniques is classified as population-based algorithms, where the initial population is randomly created. Input parameters are initialized within a specified range, and the algorithms can provide optimal solutions. This paper emphasizes enhancing neural networks via optimization algorithms by manipulating their tuned or training parameters to obtain the network structure best suited to solving the problems at hand. The paper includes results on improving ANN performance with the PSO, GA, ABC, and BSA optimization techniques, respectively, to search for optimal parameters, e.g., the number of neurons in the hidden layers and the learning rate. The obtained neural net is used for solving energy management problems in a virtual power plant system.
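The hyperparameter-tuning idea above can be sketched with a plain global-best PSO. To keep the example self-contained, a toy "validation error" surface over (learning rate, hidden neurons) stands in for training a real network; the surface and all PSO constants are assumptions of this sketch.

```python
# Hedged sketch of PSO-based hyperparameter search: a global-best PSO
# minimizes a toy validation-error surface over (learning_rate, hidden
# neurons) instead of training a real ANN.
import random

random.seed(7)

def val_error(lr, hidden):
    """Toy error surface with a minimum near lr = 0.05, hidden = 32."""
    return (lr - 0.05) ** 2 * 100 + ((hidden - 32) / 32) ** 2

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.0, 0.5), random.uniform(1, 128)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: val_error(*p))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if val_error(*pos[i]) < val_error(*pbest[i]):
                pbest[i] = pos[i][:]
                if val_error(*pos[i]) < val_error(*gbest):
                    gbest = pos[i][:]
    return gbest

best_lr, best_hidden = pso()
```

The swarm converges near the surface's optimum, illustrating how a population-based optimizer can tune the number of hidden neurons and the learning rate without gradient information.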
|
19
|
Ge YF, Orlowska M, Cao J, Wang H, Zhang Y. Knowledge transfer-based distributed differential evolution for dynamic database fragmentation. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107325] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
20
|
Li JY, Zhan ZH, Wang H, Zhang J. Data-Driven Evolutionary Algorithm With Perturbation-Based Ensemble Surrogates. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:3925-3937. [PMID: 32776886 DOI: 10.1109/tcyb.2020.3008280] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Data-driven evolutionary algorithms (DDEAs) aim to utilize data and surrogates to drive optimization, which is useful and efficient when the objective function of the optimization problem is expensive or difficult to access. However, the performance of DDEAs relies on their surrogate quality and often deteriorates as the amount of available data decreases. To solve these problems, this article proposes a new DDEA framework with perturbation-based ensemble surrogates (DDEA-PES), which contains two efficient mechanisms. The first is a diverse surrogate generation method that generates diverse surrogates by performing data perturbations on the available data. The second is a selective ensemble method that selects some of the prebuilt surrogates to form a final ensemble surrogate model. By combining these two mechanisms, the proposed DDEA-PES framework has three advantages: larger data quantity, better data utilization, and higher surrogate accuracy. To validate the effectiveness of the proposed framework, this article provides both theoretical and experimental analyses. For the experimental comparisons, a specific DDEA-PES algorithm is developed as an instance by adopting a genetic algorithm as the optimizer and radial basis function neural networks as the base models. The experimental results on widely used benchmarks and a real-world aerodynamic airfoil design optimization problem show that the proposed DDEA-PES algorithm outperforms some state-of-the-art DDEAs. Moreover, compared with traditional non-data-driven methods, the proposed DDEA-PES algorithm requires only about 2% of the computational budget to produce competitive results.
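The perturbation-plus-selection idea can be illustrated with a hypothetical sketch (not the DDEA-PES code): bootstrap-resampling a small dataset yields diverse RBF base surrogates, and a selective ensemble keeps only the base models with the lowest training error.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_rbf(X, y, centers, width=1.0, ridge=1e-3):
    # Least-squares RBF network: solve for weights w with Phi @ w ≈ y
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    Phi = np.exp(-(d / width) ** 2)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ y)
    return centers, width, w

def predict_rbf(model, Xq):
    centers, width, w = model
    d = np.linalg.norm(Xq[:, None, :] - centers[None], axis=2)
    return np.exp(-(d / width) ** 2) @ w

# Scarce data from the true (expensive) function f(x) = sum(x^2)
X = rng.uniform(-2, 2, size=(30, 2))
y = (X ** 2).sum(axis=1)

# Data perturbation: bootstrap-resample to build diverse base surrogates
models = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))
    models.append(fit_rbf(X[idx], y[idx], centers=X[idx][:10]))

# Selective ensemble: keep the base models with lowest training error
errs = [np.mean((predict_rbf(m, X) - y) ** 2) for m in models]
chosen = [models[i] for i in np.argsort(errs)[:5]]
ensemble = lambda Xq: np.mean([predict_rbf(m, Xq) for m in chosen], axis=0)

print(ensemble(np.array([[1.0, 1.0]])))
```

The genetic-algorithm optimizer would then query `ensemble` instead of the expensive objective; only the handful of final candidates need true evaluations.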
|
21
|
Abstract
Complex continuous optimization problems are now widespread due to the fast development of the economy and society. Moreover, technologies such as the Internet of Things, cloud computing, and big data pose optimization problems with additional challenges, including Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges; they arise in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are promising global optimization tools that have not only been widely applied to traditional optimization problems, but have also spurred booming research into the above-mentioned complex continuous optimization problems in recent years. To show how promising and efficient EC algorithms are in dealing with the 5-M challenges, this paper presents a comprehensive survey organized by a novel taxonomy according to the function of the approaches: reducing problem difficulty, increasing algorithm diversity, accelerating convergence speed, reducing running time, and extending the application field. Moreover, some future research directions on using EC algorithms to solve complex continuous optimization problems are proposed and discussed. We believe that such a survey can draw attention, raise discussions, and inspire new ideas for EC research into complex continuous optimization problems and real-world applications.
|
22
|
Jiang M, Wang Z, Qiu L, Guo S, Gao X, Tan KC. A Fast Dynamic Evolutionary Multiobjective Algorithm via Manifold Transfer Learning. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:3417-3428. [PMID: 32452785 DOI: 10.1109/tcyb.2020.2989465] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Many real-world optimization problems involve multiple objectives, constraints, and parameters that may change over time. These problems are often called dynamic multiobjective optimization problems (DMOPs). The difficulty in solving DMOPs lies in tracking the changing Pareto-optimal front efficiently and accurately. Transfer learning (TL)-based methods have the advantage of reusing experience obtained from past computational processes to improve the quality of current solutions. However, existing TL-based methods are generally computationally intensive and thus time-consuming. This article proposes a new memory-driven manifold TL-based evolutionary algorithm for dynamic multiobjective optimization (MMTL-DMOEA). The method combines a memory mechanism, which preserves the best individuals from the past, with manifold TL, which predicts the optimal individuals at the new time instance during the evolution. The elites among these individuals, obtained from both past experience and future prediction, then constitute the initial population in the optimization process. This strategy significantly improves the quality of solutions at the initial stage and reduces the computational cost of existing methods. Different benchmark problems are used to validate the proposed algorithm, and the simulation results are compared with state-of-the-art dynamic multiobjective optimization algorithms (DMOAs). The results show that our approach improves computational speed by two orders of magnitude while achieving better solution quality than existing methods.
|
23
|
Tao B, Xiao M, Zheng WX, Cao J, Tang J. Dynamics Analysis and Design for a Bidirectional Super-Ring-Shaped Neural Network With n Neurons and Multiple Delays. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2978-2992. [PMID: 32726281 DOI: 10.1109/tnnls.2020.3009166] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Recently, the dynamics of delayed neural networks has attracted widespread attention from scholars. However, most studies are confined to simplified neural networks made up of only a small number of neurons, mainly because it is difficult in general to decompose and analyze high-dimensional characteristic matrices. In this article, for the first time, we solve the computational issues of the high-dimensional characteristic matrix by employing the Coates flow graph formula, and consider the dynamics of a bidirectional neural network with a super-ring structure and multiple delays. Under certain circumstances, the characteristic equation of the linearized network can be transformed into an equation with an integration element. By analyzing this equation, we find that the self-feedback coefficient and the delays have significant effects on the stability and Hopf bifurcation of the network. We then establish sufficient conditions for stability and Hopf bifurcation of the network. Furthermore, the obtained conclusions are applied to design a standardized high-dimensional network with a bidirectional ring structure, whose scale can be easily extended or reduced, and we propose designing schemes to expand and reduce the dimension of this standardized network. Finally, the theoretical results coincide with the experimental ones.
|
24
|
Zhan ZH, Zhang J, Lin Y, Li JY, Huang T, Guo XQ, Wei FF, Kwong S, Zhang XY, You R. Matrix-Based Evolutionary Computation. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2021. [DOI: 10.1109/tetci.2020.3047410] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
25
|
Zhang X, Du KJ, Zhan ZH, Kwong S, Gu TL, Zhang J. Cooperative Coevolutionary Bare-Bones Particle Swarm Optimization With Function Independent Decomposition for Large-Scale Supply Chain Network Design With Uncertainties. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:4454-4468. [PMID: 31545754 DOI: 10.1109/tcyb.2019.2937565] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Supply chain network design (SCND) is a complicated constrained optimization problem that plays a significant role in business management. This article extends the SCND model to a large-scale SCND with uncertainties (LUSCND), which is more practical but also more challenging, since it is difficult for traditional approaches to obtain feasible solutions in the large-scale search space within limited time. This article proposes a cooperative coevolutionary bare-bones particle swarm optimization (CCBBPSO) with function independent decomposition (FID), called CCBBPSO-FID, for a multiperiod three-echelon LUSCND problem. To handle the large scale, the binary encoding of the original model is converted to integer encoding for dimensionality reduction, and a novel FID is designed to decompose the problem efficiently. To obtain feasible solutions, two repair methods are designed for the infeasible solutions that appear frequently in the LUSCND problem: a step translation method deals with variables out of bounds, and a labeled reposition operator with adaptive probabilities repairs infeasible solutions that violate the constraints. Experiments are conducted on 405 instances with three different scales. The results show that CCBBPSO-FID has an evident superiority over competing algorithms.
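The cooperative-coevolution-with-decomposition pattern behind CCBBPSO-FID can be illustrated with a toy loop (a hypothetical sketch: random grouping stands in for FID, a simple greedy Gaussian search stands in for the bare-bones PSO subcomponent optimizer, and a sphere function stands in for the SCND objective).

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    # Stand-in objective; the real problem would be the LUSCND model
    return float((x ** 2).sum())

dim, n_groups, per = 20, 4, 5       # 20 variables split into 4 groups of 5
context = rng.uniform(-5, 5, dim)   # current best full solution ("context vector")

for cycle in range(50):
    sigma = 0.95 ** cycle  # shrink the subcomponent search radius over time
    # Problem-agnostic decomposition: a fresh random grouping each cycle
    for group in rng.permutation(dim).reshape(n_groups, per):
        # Optimize one subcomponent while the rest of the context stays fixed
        for cand in context[group] + rng.normal(0, sigma, (10, per)):
            trial = context.copy()
            trial[group] = cand
            if cost(trial) < cost(context):
                context = trial

print(cost(context))
```

The key design point is that each subcomponent is always evaluated inside the full context vector, so groups cooperate through the shared best solution rather than being optimized in isolation.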
|
26
|
Wang ZJ, Zhan ZH, Yu WJ, Lin Y, Zhang J, Gu TL, Zhang J. Dynamic Group Learning Distributed Particle Swarm Optimization for Large-Scale Optimization and Its Application in Cloud Workflow Scheduling. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:2715-2729. [PMID: 31545753 DOI: 10.1109/tcyb.2019.2933499] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Cloud workflow scheduling is a significant topic in both commercial and industrial applications. However, the growing scale of workflows has made such scheduling problems increasingly challenging. Many current algorithms deal with small- or medium-scale problems (e.g., fewer than 1000 tasks) and have difficulty providing satisfactory solutions for large-scale problems, due to the curse of dimensionality. To this end, this article proposes a dynamic group learning distributed particle swarm optimization (DGLDPSO) for large-scale optimization and extends it to large-scale cloud workflow scheduling. DGLDPSO is efficient for large-scale optimization for two reasons. First, the entire population is divided into many groups, and these groups are coevolved using a master-slave multigroup distributed model, forming a distributed PSO (DPSO) that enhances algorithm diversity. Second, a dynamic group learning (DGL) strategy is adopted for DPSO to balance diversity and convergence. When DGLDPSO is applied to large-scale cloud workflow scheduling, an adaptive renumber strategy (ARS) is further developed to relate solutions to the resource characteristics and to make the search behavior purposeful rather than aimless. Experiments are conducted on the large-scale benchmark function set and large-scale cloud workflow scheduling instances to investigate the performance of DGLDPSO. The comparison results show that DGLDPSO is better than, or at least comparable to, other state-of-the-art large-scale optimization algorithms and workflow scheduling algorithms.
|
27
|
Jian JR, Zhan ZH, Zhang J. Large-scale evolutionary optimization: a survey and experimental comparative study. INT J MACH LEARN CYB 2019. [DOI: 10.1007/s13042-019-01030-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
28
|
Zhan ZH, Wang ZJ, Jin H, Zhang J. Adaptive Distributed Differential Evolution. IEEE TRANSACTIONS ON CYBERNETICS 2019; 50:4633-4647. [PMID: 31634855 DOI: 10.1109/tcyb.2019.2944873] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Due to the increasing complexity of optimization problems, distributed differential evolution (DDE) has become a promising approach for global optimization. However, like centralized algorithms, DDE faces the difficulty of strategy selection and parameter setting. To deal with these problems effectively, this article proposes an adaptive DDE (ADDE) to relieve the sensitivity to strategies and parameters. In ADDE, three populations, called the exploration population, the exploitation population, and the balance population, are coevolved concurrently using a master-slave multipopulation distributed framework. Each population adaptively chooses its suitable mutation strategies based on evolutionary state estimation, making full use of the feedback information from both individuals and the whole corresponding population. In addition, the historical successful experience and best-solution improvements are collected and used to adaptively update the individual parameters (amplification factor F and crossover rate CR) and the population parameter (population size N), respectively. The performance of ADDE is evaluated on all 30 widely used benchmark functions from the CEC 2014 test suite and all 22 widely used real-world application problems from the CEC 2011 test suite. The experimental results show that ADDE has great superiority over other state-of-the-art DDE and adaptive differential evolution variants.
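The success-history style of F/CR adaptation described above can be illustrated with a compact differential-evolution loop (a generic sketch in the spirit of adaptive DE, not the ADDE algorithm itself; the distributed multipopulation framework and population-size adaptation are omitted).

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return float((x ** 2).sum())

dim, n = 10, 30
pop = rng.uniform(-5, 5, (n, dim))
fit = np.array([sphere(x) for x in pop])
mu_F, mu_CR = 0.5, 0.5  # means adapted from successful trials each generation

for gen in range(200):
    # Sample per-individual F and CR around the adapted means
    F = np.clip(rng.normal(mu_F, 0.1, n), 0.05, 1.0)
    CR = np.clip(rng.normal(mu_CR, 0.1, n), 0.0, 1.0)
    ok_F, ok_CR = [], []
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F[i] * (pop[b] - pop[c])   # DE/rand/1 mutation
        cross = rng.random(dim) < CR[i]
        cross[rng.integers(dim)] = True              # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial < fit[i]:                         # greedy selection
            pop[i], fit[i] = trial, f_trial
            ok_F.append(F[i]); ok_CR.append(CR[i])
    if ok_F:  # shift the means toward parameter values that produced improvements
        mu_F = 0.9 * mu_F + 0.1 * np.mean(ok_F)
        mu_CR = 0.9 * mu_CR + 0.1 * np.mean(ok_CR)

print(fit.min())
```

The feedback loop is the essential ingredient: parameter values that keep producing improving trials gradually dominate the sampling distribution, so no fixed F or CR has to be chosen in advance.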
|