A Multivariation Multifactorial Evolutionary Algorithm for Large-Scale Multiobjective Optimization

For solving large-scale multiobjective problems (LSMOPs), transformation-based methods have shown promising search efficiency: they reformulate the original problem as a new, simplified problem and perform the optimization in the simplified spaces instead of the original problem space. Owing to the useful information provided by the simplified search space, performance on LSMOPs has been improved to some extent. However, it is worth noting that the original problem is changed by the transformation, and there is thus no guarantee that the original global or near-global optimum is preserved in the newly generated space. In this article, we propose to solve LSMOPs via a multivariation multifactorial evolutionary algorithm. In contrast to existing transformation-based methods, the proposed approach conducts an evolutionary search concurrently on both the original space of the LSMOP and multiple simplified spaces constructed in a multivariation manner. In this way, useful traits found along the search can be seamlessly transferred from the simplified problem spaces to the original problem space toward efficient problem solving. Moreover, since the evolutionary search is also performed in the original problem space, preservation of the original global optimal solution is guaranteed. To evaluate the performance of the proposed framework, comprehensive empirical studies are carried out on a set of LSMOPs with two to three objectives and 500–5000 variables. The experimental results highlight the efficiency and effectiveness of the proposed method compared to state-of-the-art methods for large-scale multiobjective optimization.
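One common way to build such a simplified search space is a random linear embedding that maps a low-dimensional vector back to the original large-scale decision space. The sketch below is a minimal, hypothetical illustration of this idea (the dimensions, bounds, and mapping are illustrative assumptions, not the paper's actual variation operators); a multivariation scheme would construct several such embeddings and search them concurrently with the original space.

```python
import numpy as np

rng = np.random.default_rng(0)

D_original = 1000   # original number of decision variables (illustrative)
d_simplified = 10   # dimensionality of one simplified space (illustrative)

# One simplified space = one random linear mapping back to the original space;
# a multivariation approach would build several independent embeddings.
embedding = rng.standard_normal((D_original, d_simplified))

def to_original(z):
    """Map a point z in the simplified space to the original decision space."""
    x = embedding @ z
    return np.clip(x, -1.0, 1.0)  # respect assumed box constraints [-1, 1]

z = rng.standard_normal(d_simplified)   # candidate evolved in the small space
x = to_original(z)                      # evaluated in the original space
```

Searching over the 10-dimensional `z` is far cheaper than over the 1000-dimensional `x`, but, as the abstract notes, the image of the embedding need not contain the original global optimum, which is why the original space must also be searched.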

Adaptive Multifactorial Evolutionary Optimization for Multitask Reinforcement Learning

Evolutionary computation has largely exhibited its potential to complement conventional learning algorithms in a variety of machine learning tasks, especially those related to unsupervised (clustering) and supervised learning. Only recently has the computational efficiency of evolutionary solvers been put into perspective for training reinforcement learning models. However, most studies framed so far within this context have considered environments and tasks conceived in isolation, without any exchange of knowledge among related tasks. In this manuscript, we present A-MFEA-RL, an adaptive version of the well-known MFEA algorithm whose search and inheritance operators are tailored for multitask reinforcement learning environments. Specifically, our approach includes crossover and inheritance mechanisms for refining the exchange of genetic material, which rely on the multilayered structure of modern deep-learning-based reinforcement learning models. To assess the performance of the proposed approach, we design an extensive experimental setup comprising multiple reinforcement learning environments of varying levels of complexity, over which the performance of A-MFEA-RL is compared to that furnished by alternative nonevolutionary multitask reinforcement learning approaches. As concluded from the discussion of the obtained results, A-MFEA-RL not only achieves competitive success rates over the simultaneously addressed tasks but also fosters the exchange of knowledge among tasks that can be intuitively expected to maintain a degree of synergy.
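Crossover operators that respect the layered structure of a network can be sketched as exchanging whole layers between two parent policies. The toy code below is a hedged illustration of this general idea only (A-MFEA-RL's actual operators are adaptive and more elaborate); the network sizes and swap probability are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_net(sizes):
    """A policy as a list of weight matrices, one per layer (illustrative)."""
    return [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]

def layer_crossover(parent_a, parent_b, p_swap=0.5):
    """Build a child by inheriting each layer wholesale from one parent."""
    return [a if rng.random() < p_swap else b
            for a, b in zip(parent_a, parent_b)]

net_a = make_net([8, 16, 4])
net_b = make_net([8, 16, 4])
child = layer_crossover(net_a, net_b)

# Every child layer is identical to the corresponding layer of one parent,
# so genetic material crosses task boundaries at layer granularity.
for c, a, b in zip(child, net_a, net_b):
    assert np.array_equal(c, a) or np.array_equal(c, b)
```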

Evolutionary Competitive Multitasking Optimization

This article introduces a special multitasking optimization problem (MTOP) called the competitive MTOP (CMTOP). Its distinctive characteristics are that all tasks’ objectives are comparable, and its optimal solution is the best one among the optimal solutions of all the individual problems. This article proposes an evolutionary algorithm with an online resource allocation strategy and an adaptive information transfer mechanism to solve the CMTOP. The experimental results on benchmark and real-world problems show that our proposed algorithm is effective and efficient.

An Evolutionary Multitasking Optimization Framework for Constrained Multiobjective Optimization Problems

When addressing constrained multiobjective optimization problems (CMOPs) via evolutionary algorithms, various constraints and multiple objectives need to be satisfied and optimized simultaneously, which causes difficulties for the solver. In this article, an evolutionary multitasking (EMT)-based constrained multiobjective optimization (EMCMO) framework is developed to solve CMOPs. In EMCMO, the optimization of a CMOP is transformed into two related tasks: one task is the original CMOP, and the other task considers only the objectives, ignoring all constraints. The main purpose of the second task is to continuously provide useful knowledge about the objectives to the first task, thus facilitating solving the CMOP. Specifically, depending on the complementarity between the two tasks, the genes carried by parent or offspring individuals are dynamically regarded as useful knowledge. Moreover, the useful knowledge is identified by the designed tentative method and transferred to improve the performance of both tasks. To the best of our knowledge, this is the first attempt to use EMT to solve CMOPs. To verify the performance of EMCMO, an instance of EMCMO is obtained by employing a genetic algorithm as the optimizer. Comprehensive experiments are conducted on four benchmark test suites to verify the effectiveness of knowledge transfer. Furthermore, compared with other state-of-the-art constrained multiobjective optimization algorithms, EMCMO can produce better or at least comparable performance.
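The two-task view can be sketched in a few lines: task 1 evaluates the objectives together with a constraint-violation measure, while task 2 evaluates the objectives alone. The toy objectives and the single inequality constraint below are illustrative assumptions for the sketch, not the benchmark problems from the article.

```python
def objectives(x):
    """Two toy objectives of a decision vector x (illustrative)."""
    f1 = x[0]
    f2 = 1.0 - x[0] + sum(v * v for v in x[1:])
    return (f1, f2)

def constraint_violation(x):
    """Violation of one toy inequality constraint g(x) <= 0."""
    g = 0.5 - sum(x)
    return max(0.0, g)

def task1(x):
    """Original CMOP: objectives plus constraint violation."""
    return objectives(x), constraint_violation(x)

def task2(x):
    """Helper task: objectives only, constraints ignored."""
    return objectives(x)
```

Because `task2` sees a larger feasible region, its population can traverse infeasible areas freely and pass promising genetic material to the constrained task.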

IEEE Transactions on Evolutionary Computation Society Information

Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Multifactorial Evolutionary Algorithm Based on Improved Dynamical Decomposition for Many-Objective Optimization Problems

In multiobjective optimization, it is generally known that computational complexity and the size of the search space grow rapidly as the number of objectives rises, which decreases selection pressure and deteriorates the evolutionary process. As a result, the many-objective optimization problem (MaOP) has become one of the most challenging topics in the field of intelligent optimization. Recently, the multifactorial evolutionary algorithm (MFEA) and its variations, which have shown excellent performance in knowledge transfer across related problems, have offered a new and effective way of solving MaOPs. In this article, a novel MFEA based on improved dynamical decomposition (MFEA/IDD), which integrates the advantages of multitasking optimization and decomposition-based evolutionary algorithms, is proposed. Specifically, in the improved dynamical decomposition (IDD) strategy, a bi-pivot strategy is designed in place of the single-pivot strategy to better balance convergence and diversity. Furthermore, a novel MFEA-based approach embedding the IDD strategy is developed to reduce the total running time when solving multiple MaOPs simultaneously. Compared with seven state-of-the-art algorithms, the efficacy of the proposed method is validated experimentally on the WFG, DTLZ, and MAF benchmarks with three to ten objectives, along with a series of real-world cases. The results reveal that MFEA/IDD balances convergence and diversity well while reducing the total number of function evaluations required to solve MaOPs.

Evolutionary Many-Task Optimization Based on Multisource Knowledge Transfer

Multitask optimization aims to solve two or more optimization tasks simultaneously by leveraging intertask knowledge transfer. However, as the number of tasks increases to the extent of many-task optimization, the knowledge transfer between tasks encounters more uncertainty and challenges, thereby resulting in degradation of optimization performance. To fully exploit the many-task optimization framework and minimize potential negative transfer, this article proposes an evolutionary many-task optimization algorithm based on a multisource knowledge transfer mechanism, namely, EMaTO-MKT. In particular, in each iteration, EMaTO-MKT adaptively determines the probability of using knowledge transfer according to the evolution experience, balancing self-evolution within each task against knowledge transfer among tasks. To perform knowledge transfer, EMaTO-MKT selects multiple highly similar tasks, in terms of maximum mean discrepancy, as the learning sources for each task. Afterward, a knowledge transfer strategy based on local distribution estimation is applied to enable learning from multiple sources. Compared with other state-of-the-art evolutionary many-task algorithms on benchmark test suites, EMaTO-MKT shows competitiveness in solving many-task optimization problems.
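Maximum mean discrepancy (MMD) compares two sample sets through kernel mean embeddings: a small MMD suggests the populations of two tasks are distributed similarly, making them plausible learning sources for each other. The sketch below computes a biased squared MMD with a Gaussian kernel; the kernel width and the toy populations are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased squared MMD between sample sets X and Y with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(2)
near = rng.normal(0.0, 1.0, (50, 3))   # population of task A
same = rng.normal(0.0, 1.0, (50, 3))   # task with a similar distribution
far = rng.normal(5.0, 1.0, (50, 3))    # task with a dissimilar distribution

# Tasks whose populations share a distribution yield a smaller discrepancy,
# so they would be preferred as knowledge-transfer sources.
assert mmd2(near, same) < mmd2(near, far)
```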

Real-Time Federated Evolutionary Neural Architecture Search

Federated learning is a distributed machine learning approach to privacy preservation, but two major technical challenges prevent its wider application. One is that federated learning raises high demands on communication resources, since a large number of model parameters must be transmitted between the server and clients. The other challenge is that training large machine learning models such as deep neural networks in federated learning requires a large amount of computational resources, which may be unrealistic for edge devices such as mobile phones. The problem becomes worse when deep neural architecture search (NAS) is to be carried out in federated learning. To address the above challenges, we propose an evolutionary approach to real-time federated NAS that not only optimizes the model performance but also reduces the local payload. During the search, a double-sampling technique is introduced, in which, for each individual, only a randomly sampled submodel is transmitted to a number of randomly sampled clients for training. This way, we effectively reduce the computational and communication costs required for evolutionary optimization, making the proposed framework well suited for real-time federated NAS.
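The double-sampling step can be sketched as two independent random draws per individual: a submodel (here crudely represented as a subset of layer indices) and a subset of clients to train it. All names and sizes below are illustrative assumptions, not the paper's actual sampling scheme.

```python
import random

random.seed(3)

clients = list(range(100))                                  # assumed client pool
population = [{"id": i, "layers": list(range(12))}          # assumed NAS individuals
              for i in range(8)]

def double_sample(individual, n_clients=5, n_layers=4):
    """For one individual: draw a random submodel and random clients."""
    submodel = sorted(random.sample(individual["layers"], n_layers))
    chosen = random.sample(clients, n_clients)
    return submodel, chosen

sub, chosen = double_sample(population[0])
# Only `sub` (a fraction of the model) travels to `chosen` (a fraction of
# the clients), cutting both payload and local training cost per round.
```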

Hypervolume-Optimal μ-Distributions on Line/Plane-Based Pareto Fronts in Three Dimensions

Hypervolume is widely used in the evolutionary multiobjective optimization (EMO) field to evaluate the quality of a solution set. For a solution set with $\mu$ solutions on a Pareto front, a larger hypervolume means a better solution set. Investigating the distribution of the solution set with the largest hypervolume is an important topic in EMO, known as the hypervolume-optimal $\mu$-distribution. Theoretical results have shown that the $\mu$ solutions are uniformly distributed on a linear Pareto front in two dimensions. However, the $\mu$ solutions are not always uniformly distributed on a single-line Pareto front in three dimensions. They are only uniform when the single-line Pareto front has one constant objective. In this article, we further investigate the hypervolume-optimal $\mu$-distribution in three dimensions. We consider line-based and plane-based Pareto fronts. For the line-based Pareto fronts, we extend the single-line Pareto front to two-line and three-line Pareto fronts, where each line has one constant objective. For the plane-based Pareto fronts, the linear triangular and inverted triangular Pareto fronts are considered. First, we show that the $\mu$ solutions are not always uniformly distributed on the line-based Pareto fronts. The uniformity depends on how the lines are combined. Then, we show that a uniform solution set on the plane-based Pareto front is not always optimal for hypervolume maximization. It is locally optimal with respect to a $(\mu+1)$-selection scheme. Our results can help researchers in the community to better understand and utilize the hypervolume indicator.
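For intuition on the indicator itself, the two-dimensional case admits a simple exact computation: sort the nondominated points by the first objective and sum the rectangular slices they dominate up to a reference point. The sketch below assumes minimization; the front and reference point are illustrative, not taken from the article.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a 2-D nondominated set `points` (minimization),
    bounded above by the reference point `ref`."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):            # ascending in f1, so descending in f2
        hv += (ref[0] - f1) * (prev_f2 - f2) # rectangular slice for this point
        prev_f2 = f2
    return hv

# Three uniformly spaced points on a linear front, reference point (2, 2).
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
hv = hypervolume_2d(front, (2.0, 2.0))  # → 3.25
```

In three dimensions the dominated region is a union of axis-aligned boxes and no such one-pass formula applies, which is part of why the optimal μ-distributions studied in the article become nontrivial.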