Many-Objective Evolutionary Algorithms Based on Coordinated Selection Strategy

Selection strategy, comprising mating selection and environmental selection, is a key ingredient in the design of evolutionary multiobjective optimization algorithms. Existing approaches, which have shown competitive performance on low-dimensional multiobjective optimization problems with two or three objectives, often encounter considerable challenges in many-objective optimization, where the number of objectives exceeds three. This paper first provides a comprehensive analysis of the selection strategies in current evolutionary many-objective optimization algorithms. We then propose a coordinated selection strategy to improve the performance of evolutionary algorithms in many-objective optimization. This strategy rests on three crucial factors: 1) the new mating selection criterion considers both the quality of each selected parent and the effectiveness of the combination of selected parents; 2) the new environmental selection criterion focuses directly on the performance of the whole population rather than on individuals in isolation; and 3) the two selection steps complement each other, and their coordination during the evolutionary process achieves better performance than either step used alone. Furthermore, to handle the curse of dimensionality in many-objective optimization problems, a new distance-based convergence measure and a new angle-based diversity measure are developed for both selection steps. Experimental results on the DTLZ and WFG benchmark functions demonstrate the superiority of the proposed algorithm over six state-of-the-art designs in terms of both solution quality and computational efficiency.
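
The abstract does not give the exact formulas for the two measures it names. Purely as an illustration, a minimal sketch in which convergence is measured by Euclidean distance to the ideal point and diversity by the minimum angle to any other population member might look as follows; the function names and details are hypothetical and are not the authors' implementation:

```python
import numpy as np

def convergence_by_distance(obj, ideal):
    """Illustrative convergence measure: Euclidean distance from an
    objective vector to the ideal point (smaller is better)."""
    return np.linalg.norm(obj - ideal)

def diversity_by_angle(obj, population_objs):
    """Illustrative diversity measure: minimum angle (radians) between
    this objective vector and every other population member (larger is better)."""
    angles = []
    for other in population_objs:
        if np.array_equal(other, obj):
            continue
        cos = np.dot(obj, other) / (np.linalg.norm(obj) * np.linalg.norm(other) + 1e-12)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return min(angles) if angles else np.pi

# Toy usage with a 5-objective population of 10 individuals
rng = np.random.default_rng(0)
pop = rng.random((10, 5))
ideal = pop.min(axis=0)
scores = [(convergence_by_distance(p, ideal), diversity_by_angle(p, pop)) for p in pop]
```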

A Novel Image Representation Framework Based on Gaussian Model and Evolutionary Optimization

We propose a novel image representation framework based on a Gaussian model and evolutionary optimization (EO). In this framework, image patches are categorized as smooth or nonsmooth, and the two categories are treated differently. A smooth patch is formulated as the sum of a direct component and a variation component (VC). We observe that the values of all VCs in an image are well fitted by a Gaussian distribution, based on which we present an efficient reconstruction approach that maximizes the logarithm of the a posteriori probability. For a nonsmooth patch, we introduce the mechanism of EO to solve a combinatorial optimization problem over a principal component analysis dictionary. In addition, we develop two approaches for estimating the coefficients of the atoms. Experimental results demonstrate that the proposed framework achieves state-of-the-art results on several image inverse problems.
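
The abstract does not state the authors' exact formulation. Under the common assumption of Gaussian observation noise and a zero-mean Gaussian prior on the variation component, maximizing the log posterior reduces to a scalar shrinkage of the residual toward the direct component; the sketch below illustrates only that generic case, with all names and the model itself assumed rather than taken from the paper:

```python
import numpy as np

def map_smooth_patch(y, sigma_vc, sigma_n):
    """Illustrative MAP reconstruction of a smooth patch.

    Assumed model (not from the paper):
        y = dc + vc + n,  vc ~ N(0, sigma_vc^2 I),  n ~ N(0, sigma_n^2 I)
    Maximizing the log posterior over vc gives Wiener-style shrinkage
    of the residual toward the direct component dc.
    """
    dc = y.mean()                              # direct component: patch mean
    shrink = sigma_vc**2 / (sigma_vc**2 + sigma_n**2)
    vc_hat = shrink * (y - dc)                 # MAP estimate of the variation component
    return dc + vc_hat

# Toy usage on an 8x8 patch
rng = np.random.default_rng(1)
clean = np.full((8, 8), 0.5) + 0.05 * rng.standard_normal((8, 8))
noisy = clean + 0.1 * rng.standard_normal((8, 8))
restored = map_smooth_patch(noisy, sigma_vc=0.05, sigma_n=0.1)
```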

Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

Particle swarm optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating from interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of flocking in search of food, and successfully translates these natural phenomena into the optimization of complex functions. Many real-life applications of PSO, however, involve stochastic problems. To solve a stochastic problem with PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of the fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation into PSO to improve its computational efficiency on stochastic optimization problems. We derive an asymptotically optimal allocation rule that intelligently determines the number of samples for each particle, so that the PSO algorithm can efficiently select the personal bests and the global best in the presence of stochastic estimation noise in the fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that the new approach obtains much better results with the same amount of computational effort.
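
The PSO-specific allocation rule derived in the paper is not reproduced in the abstract. The sketch below shows the classic OCBA allocation rule only to illustrate the kind of budget-allocation logic being integrated into PSO; it should be read as an assumed stand-in, not the authors' rule:

```python
import numpy as np

def ocba_allocation(means, variances, total_budget):
    """Classic OCBA allocation (minimization): give more samples to
    candidates that are close to the current best and/or noisy."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    best = int(np.argmin(means))
    nonbest = [i for i in range(len(means)) if i != best]
    delta = means - means[best]                     # optimality gaps
    ratios = np.ones(len(means))
    ref = nonbest[0]
    for i in nonbest:                               # allocation ratio N_i / N_ref
        ratios[i] = (variances[i] / delta[i] ** 2) / (variances[ref] / delta[ref] ** 2)
    ratios[best] = np.sqrt(variances[best] * np.sum(ratios[nonbest] ** 2 / variances[nonbest]))
    alloc = total_budget * ratios / ratios.sum()
    return np.maximum(1, np.round(alloc)).astype(int)

# Toy usage: five particles with noisy fitness estimates and a budget of 100 samples
print(ocba_allocation(means=[1.0, 1.2, 1.5, 2.0, 2.4],
                      variances=[0.4, 0.3, 0.5, 0.6, 0.2],
                      total_budget=100))
```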

Adaptive Multimodal Continuous Ant Colony Optimization

Seeking multiple optima simultaneously, the aim of multimodal optimization, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization (ACO) algorithms to preserve high diversity, this paper extends ACO to multimodal optimization. First, an adaptive multimodal continuous ACO algorithm that incorporates current niching methods is introduced. In this algorithm, an adaptive parameter adjustment scheme is developed that takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Third, to enhance exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially on complex problems with large numbers of local optima.
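
As a rough illustration of the Gaussian local search mentioned in the abstract, the sketch below samples candidates from a normal distribution centred on a niche seed and keeps the best one. The self-adaptive control of the step size and the niching machinery are not reproduced; all names and parameters are hypothetical:

```python
import numpy as np

def gaussian_local_search(seed, step_sigma, bounds, fitness, n_trials=10, rng=None):
    """Sample n_trials points from N(seed, step_sigma^2 I), clip to the
    box bounds, and return the best of the seed and its samples."""
    rng = rng or np.random.default_rng()
    low, high = bounds
    best_x, best_f = seed, fitness(seed)
    for _ in range(n_trials):
        cand = np.clip(seed + rng.normal(0.0, step_sigma, size=seed.shape), low, high)
        f = fitness(cand)
        if f < best_f:                             # assume minimization
            best_x, best_f = cand, f
    return best_x, best_f

# Toy usage on a 2-D Rastrigin-like multimodal function
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
seed = np.array([0.9, -1.1])                       # seed of one niche
x, fx = gaussian_local_search(seed, step_sigma=0.1, bounds=(-5.12, 5.12), fitness=f)
```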

Factored Evolutionary Algorithms

Factored evolutionary algorithms (FEAs) are a new class of evolutionary search-based optimization algorithms that have been applied successfully to various problems, such as training neural networks and performing abductive inference in graphical models. An FEA is unique in that it factors the objective function by creating overlapping subpopulations, each of which optimizes over a subset of the function's variables. In this paper, we give a formal definition of FEAs and present empirical results related to their performance. One consideration in using an FEA is determining the appropriate factor architecture, which specifies the set of variables each factor optimizes. For this reason, we present the results of experiments comparing the performance of different factor architectures on several standard applications of evolutionary algorithms. Additionally, we show that FEA's performance is not restricted by the underlying optimization algorithm: we create FEA versions of hill climbing, particle swarm optimization, the genetic algorithm, and differential evolution and compare their performance with that of their single-population and cooperative coevolutionary counterparts.
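
To make the idea of a factor architecture concrete, the sketch below builds overlapping sliding-window factors and optimizes each against a shared global solution. The inner optimizer is a random-perturbation stand-in for whatever base algorithm an FEA would use; none of this is taken from the paper, and all names are hypothetical:

```python
import numpy as np

def make_overlapping_factors(n_vars, width=3, overlap=1):
    """Sliding-window factors, e.g. n_vars=6 -> [0,1,2], [2,3,4], [4,5]."""
    step = width - overlap
    return [list(range(s, min(s + width, n_vars))) for s in range(0, n_vars - overlap, step)]

def optimize_factor(global_x, factor, fitness, rng, trials=50, sigma=0.1):
    """Stand-in for the factor's inner optimizer (PSO, GA, DE, ...):
    perturb only this factor's variables, keeping the rest from the context."""
    best = global_x.copy()
    for _ in range(trials):
        cand = best.copy()
        cand[factor] += rng.normal(0, sigma, size=len(factor))
        if fitness(cand) < fitness(best):
            best = cand
    return best[factor]

fitness = lambda x: float(np.sum(x**2))            # toy objective (sphere)
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=6)                     # shared global solution
for factor in make_overlapping_factors(len(x)):
    x[factor] = optimize_factor(x, factor, fitness, rng)
```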

Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes

Recently, a number of high-performance many-objective evolutionary algorithms with systematically generated weight vectors have been proposed in the literature. These algorithms often show surprisingly good performance on the widely used DTLZ and WFG test problems, and their performance continues to be improved. The aim of this paper is to raise the concern that such a performance-improvement race may lead to the overspecialization of the developed algorithms to these frequently used many-objective test problems. We first explain the DTLZ and WFG test problems. Next, we describe many-objective evolutionary algorithms characterized by the use of systematically generated weight vectors. We then discuss the relation between the features of the test problems and the search mechanisms of weight-vector-based algorithms such as the multiobjective evolutionary algorithm based on decomposition (MOEA/D), the nondominated sorting genetic algorithm III (NSGA-III), the MOEA based on dominance and decomposition (MOEA/DD), and the θ-dominance-based evolutionary algorithm (θ-DEA). Through computational experiments, we demonstrate that a slight change in the formulation of the DTLZ and WFG problems deteriorates the performance of these algorithms. After explaining the reason for this performance deterioration, we discuss the need for more general test problems and more flexible algorithms.
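
The "systematically generated weight vectors" these algorithms share are typically a simplex-lattice (Das and Dennis) design. The sketch below generates such a set via a stars-and-bars enumeration; it is a generic illustration of the idea, not code from any of the papers discussed:

```python
from itertools import combinations
import numpy as np

def simplex_lattice_weights(n_obj, divisions):
    """Uniformly structured weight vectors on the unit simplex, of the kind
    used by MOEA/D, NSGA-III, MOEA/DD, and θ-DEA."""
    weights = []
    # Place (n_obj - 1) "bars" among divisions + n_obj - 1 slots; the gaps
    # between bars count the lattice steps assigned to each objective.
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        prev, counts = -1, []
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(divisions + n_obj - 1 - prev - 1)
        weights.append(np.array(counts) / divisions)
    return np.array(weights)

# Toy usage: 3 objectives, 12 divisions -> C(14, 2) = 91 weight vectors
W = simplex_lattice_weights(n_obj=3, divisions=12)
assert W.shape == (91, 3) and np.allclose(W.sum(axis=1), 1.0)
```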

Heterogeneous Cooperative Co-Evolution Memetic Differential Evolution Algorithm for Big Data Optimization Problems

Evolutionary algorithms (EAs) have recently been suggested as candidates for solving big data optimization problems, which involve a very large number of variables and need to be analyzed in a short period of time. However, EAs face a scalability issue when dealing with such problems. Moreover, the performance of EAs critically hinges on the parameter values and operator types used, so it is impossible to design a single EA that outperforms all others on every problem instance. To address these challenges, we propose a heterogeneous framework that integrates a cooperative co-evolution method with various types of memetic algorithms. The cooperative co-evolution method splits the big problem into subproblems in order to increase the efficiency of the solving process; the subproblems are then solved by various heterogeneous memetic algorithms. The proposed framework adaptively assigns, for each solution, different operators, parameter values, and a local search algorithm to efficiently explore and exploit the search space of the given problem instance. The performance of the proposed algorithm is assessed on the Big Data 2015 competition benchmark problems, which contain data with and without noise. Experimental results demonstrate that the proposed algorithm performs better with the cooperative co-evolution method than without it. Furthermore, it obtained results on all tested instances that were very competitive with, if not better than, those of other algorithms, while requiring less computational time.
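
As a rough illustration of the cooperative co-evolution decomposition described in the abstract, the sketch below splits the decision variables into disjoint groups and evaluates each group's candidates within a shared context vector. The grouping scheme, the stand-in subsolver, and all names are assumptions, not the authors' framework:

```python
import numpy as np

def random_grouping(n_vars, n_groups, rng):
    """Shuffle the variable indices and split them into disjoint subproblems."""
    return np.array_split(rng.permutation(n_vars), n_groups)

def evaluate_group(group, candidate_part, context, fitness):
    """Evaluate a partial solution by plugging it into the shared context vector."""
    full = context.copy()
    full[group] = candidate_part
    return fitness(full)

fitness = lambda x: float(np.sum(x**2))            # toy stand-in for a big data objective
rng = np.random.default_rng(3)
n_vars = 1000
context = rng.uniform(-1, 1, n_vars)               # shared best-so-far solution
for group in random_grouping(n_vars, n_groups=10, rng=rng):
    # Stand-in for one heterogeneous memetic subsolver: a small perturbation
    trial = context[group] + 0.1 * rng.standard_normal(len(group))
    if evaluate_group(group, trial, context, fitness) < fitness(context):
        context[group] = trial
```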

Table of contents

Presents the table of contents for this issue of the publication.
