A Many-Objective Evolutionary Algorithm With Pareto-Adaptive Reference Points

We propose a new many-objective evolutionary algorithm with Pareto-adaptive reference points. In this algorithm, the shape of the Pareto-optimal front (PF) is estimated based on a ratio of Euclidean distances. If the estimated shape is likely to be convex, the nadir point is used as the reference point to calculate the convergence and diversity indicators for individuals; otherwise, the ideal point is used. In addition, the nadir point is estimated differently from the approach widely used in the literature. The nadir point, together with the ideal point, provides a practical way to deal with dominance-resistant solutions, which are difficult to detect and eliminate in Pareto-based algorithms. The proposed algorithm is compared with state-of-the-art many-objective optimization algorithms on a number of unconstrained and constrained test problems with up to 15 objectives. The experimental results show that it outperforms the other algorithms on most of the test instances. Moreover, the new algorithm performs well on problems whose PFs are irregular (discontinuous, degenerate, bent, or mixed). Its high observed performance and desirable inherent properties (such as being free of weight vectors and control parameters) make the proposed algorithm a promising tool for similar problems.
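
A minimal sketch of the shape-dependent reference-point rule described above, assuming the front shape is judged from the ratio of the mean distance to the ideal point over the mean distance to the nadir point in the normalized nondominated set; the concrete ratio, nadir estimate, and threshold used in the paper are not reproduced here.

import numpy as np

def choose_reference_point(F):
    # F: (n, m) objective matrix of the current nondominated set (minimization).
    # NOTE: the nadir estimate, the ratio test, and the threshold of 1.0 are
    # illustrative assumptions, not the paper's exact formulation.
    ideal = F.min(axis=0)
    nadir = F.max(axis=0)                               # simple nadir estimate from the set
    G = (F - ideal) / np.maximum(nadir - ideal, 1e-12)  # normalize to the unit box

    d_ideal = np.linalg.norm(G, axis=1).mean()          # mean distance to the ideal point
    d_nadir = np.linalg.norm(G - 1.0, axis=1).mean()    # mean distance to the nadir point
    ratio = d_ideal / d_nadir

    # A convex front bulges toward the ideal point, so points sit relatively
    # closer to it; in that case the nadir point serves as the reference point.
    return ("nadir", nadir) if ratio < 1.0 else ("ideal", ideal)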

A Review of Evolutionary Multimodal Multiobjective Optimization

Multimodal multiobjective optimization aims to find all Pareto optimal solutions, including overlapping solutions in the objective space. Multimodal multiobjective optimization has been investigated in the evolutionary computation community since 2005. However, it is difficult to survey existing studies in this field because they have been independently conducted and do not explicitly use the term “multimodal multiobjective optimization.” To address this issue, this letter reviews the existing studies of evolutionary multimodal multiobjective optimization, including studies published under names that are different from multimodal multiobjective optimization. Our review also clarifies open issues in this research area.

Toward a Matrix-Free Covariance Matrix Adaptation Evolution Strategy

In this paper, we discuss a method for generating new individuals such that their mean vector and covariance matrix are defined by formulas analogous to those of the covariance matrix adaptation evolution strategy (CMA-ES). In contrast to CMA-ES, which generates new individuals using a multivariate Gaussian distribution with an explicitly defined covariance matrix, the introduced method uses combinations of difference vectors between archived individuals and univariate Gaussian random vectors along directions of past shifts of the population midpoints. We use this method to formulate the differential evolution strategy (DES), an algorithm that is a crossover between differential evolution (DE) and CMA-ES. The numerical results presented in this paper indicate that DES is competitive with CMA-ES in both local and global optimization.
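
A hedged sketch of the sampling idea described above: offspring are built from difference vectors between archived individuals and a univariate Gaussian perturbation along a past shift of the population midpoint, without ever forming a covariance matrix. The scaling factors, the single-population archive, and the extra jitter term are illustrative assumptions rather than the exact DES update.

import numpy as np

def des_like_sample(population, midpoint_shifts, rng, f=1.0 / np.sqrt(2.0), eps=1e-6):
    # A new point is the population midpoint plus a scaled difference of two
    # archived individuals plus univariate Gaussian noise along a past shift of
    # the population midpoint. Constants and bookkeeping are illustrative.
    midpoint = population.mean(axis=0)

    i, j = rng.choice(len(population), size=2, replace=False)
    diff = f * (population[i] - population[j])           # DE-like difference vector

    shift = midpoint_shifts[rng.integers(len(midpoint_shifts))]
    along_shift = rng.normal() * shift                   # univariate Gaussian along the shift

    jitter = eps * rng.normal(size=midpoint.shape)       # tiny isotropic term for full rank
    return midpoint + diff + along_shift + jitter

# Example usage (hypothetical data):
# rng = np.random.default_rng(0)
# pop = rng.normal(size=(10, 5)); shifts = [rng.normal(size=5) * 0.1]
# child = des_like_sample(pop, shifts, rng)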

Multiple Reference Points-Based Decomposition for Multiobjective Feature Selection in Classification: Static and Dynamic Mechanisms

Feature selection is an important task in machine learning that has two main objectives: 1) reducing dimensionality and 2) improving learning performance. Feature selection can be considered a multiobjective problem. However, it has problematic characteristics, such as a highly discontinuous Pareto front, imbalanced preferences, and partially conflicting objectives, which are not easily handled by existing evolutionary multiobjective optimization (EMO) algorithms. To address these difficulties, we propose a new decomposition approach with two mechanisms (static and dynamic) based on multiple reference points under the multiobjective evolutionary algorithm based on decomposition (MOEA/D) framework. The static mechanism alleviates the dependence of the decomposition on the Pareto front shape and the effect of the discontinuity. The dynamic mechanism detects the regions in which the objectives are mostly conflicting and allocates more computational resources to them. In comparison with other EMO algorithms on 12 different classification datasets, the proposed decomposition approach finds more diverse feature subsets with better performance in terms of hypervolume and inverted generational distance. The dynamic mechanism successfully identifies conflicting regions and further improves the approximation quality of the Pareto fronts.
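
A hedged sketch of the decomposition idea, assuming solutions are simply assigned to their nearest reference point after objective normalization; the paper's static and dynamic mechanisms for placing and adapting the reference points are not reproduced here.

import numpy as np

def assign_to_reference_points(F, ref_points):
    # Each solution is associated with its nearest reference point in the
    # normalized objective space. The Euclidean assignment rule and the placement
    # of the reference points are illustrative assumptions.
    ideal, nadir = F.min(axis=0), F.max(axis=0)
    G = (F - ideal) / np.maximum(nadir - ideal, 1e-12)
    d = np.linalg.norm(G[:, None, :] - ref_points[None, :, :], axis=2)
    return d.argmin(axis=1)                               # index of the owning reference point

# For feature selection one might, e.g., spread reference points along the
# normalized "selected-feature ratio" objective (a hypothetical placement):
# refs = np.stack([np.linspace(0.0, 1.0, 10), np.zeros(10)], axis=1)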

Multifactorial Evolutionary Algorithm With Online Transfer Parameter Estimation: MFEA-II

Humans rarely tackle every problem from scratch. Given this observation, the motivation for this paper is to improve optimization performance through adaptive knowledge transfer across related problems. The scope for spontaneous transfers under the simultaneous occurrence of multiple problems unveils the benefits of multitasking. Multitask optimization has recently demonstrated competence in solving multiple (related) optimization tasks concurrently. Notably, in the presence of underlying relationships between problems, the transfer of high-quality solutions across them has been shown to facilitate superior performance. However, in the absence of any prior knowledge about the intertask synergies (as is often the case in general black-box optimization), the threat of predominantly negative transfer prevails, and susceptibility to negative intertask interactions can impede overall convergence. To allay such fears, in this paper we propose a novel evolutionary computation framework that enables online learning and exploitation of the similarities (and discrepancies) between distinct tasks in multitask settings for an enhanced optimization process. Our proposal is based on principled theoretical arguments that seek to minimize the tendency of harmful interactions between tasks through a purely data-driven learning of the relationships among them. The efficacy of the proposed method is validated experimentally on a series of synthetic benchmarks, as well as a practical study that provides insights into its behavior when several tasks occur at once.
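
A hedged sketch of where the learned intertask relationships enter an MFEA-style algorithm: a transfer (random mating probability) matrix gates crossover between parents from different tasks. The matrix rmp, the helper functions, and the skill-factor assignment below are illustrative; MFEA-II's online, data-driven estimation of that matrix is the paper's contribution and is not shown.

import numpy as np

def assortative_mating(pa, pb, task_a, task_b, rmp, rng, crossover, mutate):
    # Parents assigned to different tasks recombine only with a probability taken
    # from the intertask transfer matrix `rmp`; otherwise each parent is varied
    # within its own task, so harmful cross-task transfer is throttled.
    if task_a == task_b or rng.random() < rmp[task_a, task_b]:
        c1, c2 = crossover(pa, pb)                        # cross-task knowledge transfer
        s1, s2 = rng.choice([task_a, task_b]), rng.choice([task_a, task_b])
    else:
        c1, c2 = mutate(pa), mutate(pb)                   # intra-task variation only
        s1, s2 = task_a, task_b
    return (c1, s1), (c2, s2)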

A Similarity-Based Cooperative Co-Evolutionary Algorithm for Dynamic Interval Multiobjective Optimization Problems

Dynamic interval multiobjective optimization problems (DI-MOPs) are very common in real-world applications. However, to date there are few evolutionary algorithms (EAs) suitable for tackling DI-MOPs. This paper presents a framework for dynamic interval multiobjective cooperative co-evolutionary optimization based on interval similarity to handle DI-MOPs. In the framework, a strategy for decomposing decision variables is first proposed, through which all the decision variables are divided into two groups according to the interval similarity between each decision variable and the interval parameters. Two subpopulations are then utilized to cooperatively optimize the decision variables in the two groups. Furthermore, two response strategies, namely a strategy based on the change intensity and a random mutation strategy, are employed to rapidly track the changing Pareto front of the optimization problem. The proposed algorithm is applied to eight benchmark optimization instances as well as a multiperiod portfolio selection problem and compared with five state-of-the-art EAs. The experimental results reveal that the proposed algorithm is very competitive on most optimization instances.
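
A hedged sketch of the variable-grouping step, assuming each decision variable has already been given a similarity score with respect to the interval parameters; the similarity measure itself and the median split used below are illustrative assumptions.

import numpy as np

def group_by_interval_similarity(similarity_scores):
    # Split the decision variables into two groups, each optimized by its own
    # subpopulation. How the similarity scores are computed is the paper's
    # contribution and is only assumed here.
    scores = np.asarray(similarity_scores)
    threshold = np.median(scores)                         # illustrative split rule
    strongly_coupled = np.where(scores >= threshold)[0]   # handled by subpopulation 1
    weakly_coupled = np.where(scores < threshold)[0]      # handled by subpopulation 2
    return strongly_coupled, weakly_coupled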

A Theoretical Guideline for Designing an Effective Adaptive Particle Swarm

In this paper, the underlying assumptions that have been used for designing adaptive particle swarm optimization (PSO) algorithms in past years are theoretically investigated. I relate these assumptions to the movement patterns of particles controlled by coefficient values (inertia weight and acceleration coefficients) and introduce three factors, namely the autocorrelation of the particle positions, the average movement distance of the particle in each iteration, and the focus of the search, that describe these movement patterns. I show how these factors represent the movement patterns of a particle within a swarm and how they are affected by the particle coefficients. I derive equations that provide exact coefficient values that guarantee achieving a desired movement pattern, defined by these three factors, within a swarm. I then relate these movements to the searching capability of particles and provide a guideline for designing potentially successful adaptive methods for controlling the coefficients in particle swarm optimization. Finally, I propose a new, simple time-adaptive particle swarm and compare its results with those of previous adaptive particle swarm approaches. Experiments show that the theoretical findings indeed provide a beneficial guideline for the successful adaptation of the coefficients in the PSO algorithm.
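
For reference, the coefficients in question appear in the standard PSO update sketched below; the exact equations that map the three movement factors to coefficient values are derived in the paper and are not reproduced here.

import numpy as np

def pso_step(x, v, pbest, gbest, w, c1, c2, rng):
    # Standard PSO update; w (inertia weight) and c1, c2 (acceleration
    # coefficients) are the coefficients whose settings the paper relates to the
    # three movement factors (autocorrelation of positions, average movement
    # distance per iteration, and focus of the search).
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Example usage (hypothetical values):
# rng = np.random.default_rng(1)
# x = rng.normal(size=(30, 10)); v = np.zeros_like(x)
# x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0], w=0.7, c1=1.5, c2=1.5, rng=rng)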

Automatic Niching Differential Evolution With Contour Prediction Approach for Multimodal Optimization Problems

Niching techniques have been widely incorporated into evolutionary algorithms (EAs) for solving multimodal optimization problems (MMOPs). However, most of the existing niching techniques are either sensitive to the niching parameters or require extra fitness evaluations (FEs) to maintain the niche detection accuracy. In this paper, we propose a new automatic niching technique based on affinity propagation clustering (APC) and design a novel niching differential evolution (DE) algorithm, termed automatic niching DE (ANDE), for solving MMOPs. In the proposed ANDE algorithm, APC acts as a parameter-free automatic niching method that does not need to predefine the number of clusters or the cluster size. Also, it can facilitate locating multiple peaks without extra FEs. Furthermore, the ANDE algorithm is enhanced by a contour prediction approach (CPA) and a two-level local search (TLLS) strategy. First, the CPA is a predictive search strategy: it exploits the distribution of individuals in each niche to estimate the contour of the landscape and then predicts the rough position of the potential peak, which helps accelerate convergence. Second, the TLLS is a solution refinement strategy that further increases solution accuracy after the CPA has roughly located the peaks. Experimental results on 20 widely used benchmark functions illustrate the superiority of the proposed ANDE algorithm over other state-of-the-art DE and non-DE multimodal algorithms, including the winner of the competition on multimodal optimization.
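
A hedged sketch of the parameter-free niche detection via affinity propagation clustering, here through scikit-learn's AffinityPropagation; the settings shown are illustrative, and the CPA and TLLS components are omitted.

import numpy as np
from sklearn.cluster import AffinityPropagation

def detect_niches(population):
    # Affinity propagation needs neither the number of niches nor a niche radius
    # in advance, which is the parameter-free property exploited by ANDE.
    # The damping value and the use of scikit-learn are illustrative choices.
    X = np.asarray(population)                            # decision-space positions
    apc = AffinityPropagation(damping=0.9, random_state=0).fit(X)
    n_niches = len(apc.cluster_centers_indices_)
    return [np.where(apc.labels_ == k)[0] for k in range(n_niches)]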

Tackling Large-Scale and Combinatorial Bi-Level Problems With a Genetic Programming Hyper-Heuristic

Combinatorial bi-level optimization remains a challenging topic, especially when the lower level is an $\mathcal{NP}$-hard problem. In this paper, we tackle large-scale and combinatorial bi-level problems using genetic programming (GP) hyper-heuristics, i.e., an approach that makes it possible to train heuristics like a machine learning model. Our contribution targets the intensive and complex lower-level optimizations that occur when solving a large-scale combinatorial bi-level problem. For this purpose, we consider hyper-heuristics through heuristic generation. Using a GP hyper-heuristic approach, we train greedy heuristics to make them more reliable on unseen lower-level instances that may be generated during bi-level optimization. To validate our approach, referred to as GA+AGH, we tackle instances of the bi-level cloud pricing optimization problem (BCPOP), which models the trading interactions between a cloud service provider and cloud service customers. Numerical results demonstrate the ability of the trained heuristics to cope with the inherent nested structure that makes bi-level optimization problems so hard. Furthermore, training heuristics for lower-level optimization is shown to outperform human-designed heuristics and metaheuristics, which is an excellent outcome for bi-level optimization.
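
A hedged sketch of the nested evaluation that makes these problems expensive: every upper-level candidate triggers a lower-level optimization, which GA+AGH replaces with a GP-trained greedy heuristic. All names and signatures below are hypothetical.

def evaluate_upper_level(upper_decision, lower_instance, lower_heuristic, upper_objective):
    # Each upper-level candidate induces a lower-level problem, solved here by a
    # (trained) greedy heuristic rather than by an exact solver for the NP-hard
    # lower level; the leader is then scored against the follower's reaction.
    lower_response = lower_heuristic(lower_instance, upper_decision)   # follower's reaction
    return upper_objective(upper_decision, lower_response)             # leader's payoff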
