New Sampling Strategies When Searching for Robust Solutions

Many real-world optimization problems involve uncertainties, and in such situations it is often desirable to identify robust solutions that perform well over the possible future scenarios. In this paper, we focus on input uncertainty, such as in manufacturing, where the actual manufactured product may differ from the specified design but should still function well. Estimating a solution’s expected fitness in such a case is challenging, especially if the fitness function is expensive to evaluate and its analytic form is unknown. One option is to average over a number of scenarios, but this is computationally expensive. The archive sample approximation method reduces the required number of fitness evaluations by reusing previous evaluations stored in an archive. The main challenge in applying this method lies in determining the locations of additional samples drawn in each generation to enrich the information in the archive and reduce the estimation error. In this paper, we use the Wasserstein distance metric to approximate the potential benefit of a candidate sample location in reducing the estimation error, and propose new sampling strategies based on this metric. Contrary to previous studies, we consider a sample’s contribution to the entire population, rather than inspecting each individual separately. This also allows us to dynamically adjust the number of samples collected in each generation. An empirical comparison with several previously proposed archive-based sample approximation methods demonstrates the superiority of our approaches.

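The Wasserstein-based scoring idea can be sketched in a few lines. The snippet below is illustrative only — the one-dimensional distance, the greedy one-step candidate selection, and all function names are assumptions, not the paper's algorithm. It scores each candidate sample location by how close the enlarged archive gets, in Wasserstein distance, to an empirical draw from the disturbance distribution:

```python
import numpy as np

def wasserstein_1d(samples_a, samples_b):
    # 1-D Wasserstein (earth mover's) distance between two equal-size
    # empirical samples: mean absolute difference of the sorted values.
    a = np.sort(np.asarray(samples_a, dtype=float))
    b = np.sort(np.asarray(samples_b, dtype=float))
    return float(np.mean(np.abs(a - b)))

def best_new_sample(archive, target, candidates):
    # Greedy one-step choice: pick the candidate whose addition moves the
    # empirical archive distribution closest to the target scenario set.
    # (len(target) must equal len(archive) + 1 for this toy distance.)
    scores = {c: wasserstein_1d(list(archive) + [c], target) for c in candidates}
    return min(scores, key=scores.get)
```

For example, with an archive of evaluations at 0.0 and 1.0 and target scenarios {0.0, 0.5, 1.0}, the candidate 0.5 fills the gap in the archive and scores best.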
Evolutionary Bilevel Optimization Based on Covariance Matrix Adaptation

Bilevel optimization refers to a challenging optimization problem which contains two levels of optimization problems. The task of bilevel optimization is to find the optimum of the upper-level problem, subject to the optimality of the corresponding lower-level problem. This nested nature introduces many difficulties such as nonconvexity and disconnectedness, and poses great challenges to traditional optimization methods. Using evolutionary algorithms in bilevel optimization has been demonstrated to be very promising in recent years. However, these algorithms suffer from low efficiency since they usually require a huge number of function evaluations. This paper proposes a bilevel covariance matrix adaptation evolution strategy to handle bilevel optimization problems. A search distribution sharing mechanism is designed so that we can extract a priori knowledge of the lower-level problem from the upper-level optimizer, which significantly reduces the number of function evaluations. We also propose a refinement-based elite preservation mechanism to trace the elite and avoid inaccurate solutions. Comparisons with five state-of-the-art algorithms on 22 benchmark problems and two real-world applications are carried out to test the performance of the proposed approach. The experimental results have shown the effectiveness of the proposed approach in keeping a good tradeoff between solution quality and computational efficiency.
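The nested structure that drives up the evaluation count can be seen in a minimal sketch. Everything below is a toy illustration under assumed quadratic objectives — it is plain nested random search, not the proposed bilevel CMA-ES — showing that each upper-level evaluation triggers a full lower-level optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

def lower_level(x, iters=200):
    # Toy lower-level problem: g(x, y) = (y - x)^2, with optimum y*(x) = x.
    best_y, best_g = 0.0, float("inf")
    for _ in range(iters):
        y = rng.normal(best_y, 0.5)
        g = (y - x) ** 2
        if g < best_g:
            best_y, best_g = y, g
    return best_y

def upper_level(iters=200):
    # Upper-level objective f(x, y) = x^2 + (y - 1)^2, evaluated at y = y*(x);
    # every upper-level trial pays for a full lower-level search.
    best_x, best_f = 0.0, float("inf")
    for _ in range(iters):
        x = rng.normal(best_x, 0.5)
        y = lower_level(x)
        f = x ** 2 + (y - 1) ** 2
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

With y*(x) = x, the upper-level objective reduces to x² + (x − 1)², whose minimum lies at x = 0.5 — but the sketch spends 200 × 200 lower-level evaluations to find it. That multiplicative cost is exactly what the search distribution sharing mechanism aims to cut down.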

Large Scale Black-Box Optimization by Limited-Memory Matrix Adaptation

The covariance matrix adaptation evolution strategy (CMA-ES) is a popular method to deal with nonconvex and/or stochastic optimization problems when gradient information is not available. Being based on the CMA-ES, the recently proposed matrix adaptation evolution strategy (MA-ES) establishes the rather surprising result that the covariance matrix and all associated operations (e.g., the potentially unstable eigendecomposition) can be replaced by an iteratively updated transformation matrix without any loss of performance. In order to further simplify MA-ES and reduce its $\mathcal{O}(n^{2})$ time and storage complexity to $\mathcal{O}(mn)$ with $m \ll n$, such as $m \in \mathcal{O}(1)$ or $m \in \mathcal{O}(\log(n))$, we present the limited-memory MA-ES for efficient zeroth-order large-scale optimization. The algorithm demonstrates state-of-the-art performance on a set of established large-scale benchmarks.
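The complexity reduction comes from never materializing an $n \times n$ matrix. The sketch below illustrates only that representational trick, under assumed names and a simplified sequence of rank-one updates (the actual LM-MA-ES update rules for the stored direction vectors are omitted): sampling an offspring applies $m$ stored direction vectors on the fly, costing $\mathcal{O}(mn)$ instead of $\mathcal{O}(n^{2})$:

```python
import numpy as np

def sample_offspring(mean, sigma, dirs, weights, rng):
    # Sample x = mean + sigma * M z, where the transformation matrix M is
    # represented implicitly by m stored direction vectors v_i:
    # M z is approximated as a chain of rank-one updates
    # (I + w_i v_i v_i^T) applied to z, at O(m n) cost and storage.
    n = mean.size
    z = rng.standard_normal(n)
    d = z.copy()
    for v, w in zip(dirs, weights):
        d = d + w * v * (v @ d)   # rank-one update applied on the fly
    return mean + sigma * d
```

With all weights zero the sampler falls back to an isotropic Gaussian, mean + sigma * z; positive weights stretch the search distribution along the stored directions.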

Adaptive Sorting-Based Evolutionary Algorithm for Many-Objective Optimization

Evolutionary algorithms have shown their promise in coping with many-objective optimization problems. However, the strategies of balancing convergence and diversity and the effectiveness of handling problems with irregular Pareto fronts (PFs) are still far from perfect. To address these issues, this paper proposes an adaptive sorting-based evolutionary algorithm based on the idea of decomposition. First, we propose an adaptive sorting-based environmental selection strategy. Solutions in each subpopulation (partitioned by reference vectors) are sorted based on their convergence; those with better convergence are further sorted based on their diversity and are then selected according to their sorting levels. Second, we provide an adaptive promising-subpopulation sorting-based environmental selection strategy for problems that may have irregular PFs. This strategy applies additional sorting-based selection effort to promising subpopulations after the general environmental selection process. Third, we extend the algorithm to handle constraints. Finally, we conduct an extensive experimental study, comparing the proposed algorithm with state-of-the-art algorithms. Results demonstrate the superiority of the proposed algorithm.
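A minimal sketch of the first ingredient, assuming minimized objectives and using the objective sum as a stand-in convergence score (the paper's actual sorting criteria, diversity measure, and adaptation are not reproduced here): solutions are bucketed by their closest reference vector, sorted within each bucket by convergence, then picked level by level across buckets.

```python
import numpy as np

def environmental_selection(objs, ref_vecs, n_select):
    # Assign each solution to the reference vector with the smallest angle
    # (largest cosine) to its objective vector, sort each subpopulation by
    # a convergence score (here: sum of objectives, smaller is better),
    # then select one sorting level at a time across all subpopulations.
    # Assumes n_select <= len(objs) and nonnegative, minimized objectives.
    objs = np.asarray(objs, dtype=float)
    ref_vecs = np.asarray(ref_vecs, dtype=float)
    norms = np.linalg.norm(objs, axis=1, keepdims=True) + 1e-12
    cos = (objs / norms) @ ref_vecs.T / (np.linalg.norm(ref_vecs, axis=1) + 1e-12)
    assign = np.argmax(cos, axis=1)
    buckets = {}
    for i, a in enumerate(assign):
        buckets.setdefault(a, []).append(i)
    for b in buckets.values():
        b.sort(key=lambda i: objs[i].sum())
    selected, level = [], 0
    while len(selected) < n_select:
        for b in buckets.values():
            if level < len(b) and len(selected) < n_select:
                selected.append(b[level])
        level += 1
    return selected
```

Picking level by level is what balances convergence (within a bucket) against diversity (across buckets): no subpopulation can monopolize the next generation.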

A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization

Both convergence and diversity are crucial to evolutionary many-objective optimization, whereas most existing dominance relations show poor performance in balancing them, thus easily leading to a set of solutions concentrating on a small region of the Pareto fronts. In this paper, a novel dominance relation is proposed to better balance convergence and diversity for evolutionary many-objective optimization. In the proposed dominance relation, an adaptive niching technique is developed based on the angles between the candidate solutions, where only the best converged candidate solution is identified to be nondominated in each niche. Experimental results demonstrate that the proposed dominance relation outperforms existing dominance relations in balancing convergence and diversity. A modified NSGA-II is suggested based on the proposed dominance relation, which shows competitiveness against the state-of-the-art algorithms in solving many-objective optimization problems. The effectiveness of the proposed dominance relation is also verified on several other existing multi- and many-objective evolutionary algorithms.
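The niche-then-keep-best idea behind the proposed dominance relation can be sketched directly. The snippet below is a simplified illustration with a fixed niche angle and the objective sum as the convergence measure (the paper's niching is adaptive and its convergence criterion may differ): a solution is dominated if a better-converged solution lies within the same angular niche.

```python
import numpy as np

def nondominated_by_niching(objs, theta):
    # Two solutions share a niche if the angle between their (minimized)
    # objective vectors is below theta; within a niche, only the solution
    # with the best convergence score (here: objective sum) survives.
    objs = np.asarray(objs, dtype=float)
    conv = objs.sum(axis=1)
    unit = objs / (np.linalg.norm(objs, axis=1, keepdims=True) + 1e-12)
    dominated = np.zeros(len(objs), dtype=bool)
    for i in range(len(objs)):
        for j in range(len(objs)):
            if i == j:
                continue
            angle = np.arccos(np.clip(unit[i] @ unit[j], -1.0, 1.0))
            if angle < theta and conv[j] < conv[i]:
                dominated[i] = True
    return [i for i in range(len(objs)) if not dominated[i]]
```

A larger theta prunes more aggressively (stronger convergence pressure); a smaller theta preserves more niches (more diversity), which is why making the niche size adaptive matters.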

Low-Dimensional Euclidean Embedding for Visualization of Search Spaces in Combinatorial Optimization

This paper proposes a method for visualizing combinatorial search spaces named low-dimensional Euclidean embedding (LDEE). The proposed method transforms the original search space, such as a set of permutations or binary vectors, to $\mathbb{R}^{k}$ (with $k = 2$ or $3$ in practice) while aiming to preserve spatial relationships existing in the original space. The LDEE method uses the t-distributed stochastic neighbor embedding (t-SNE) to transform solutions from the original search space to the Euclidean one. In this paper, it is mathematically shown that the assumptions underlying the t-SNE method are valid in the case of permutation spaces with the Mallows distribution. The same is true for other metric spaces provided that the distribution of points is assumed to be normal with respect to the adopted metric. The embedding obtained using t-SNE is further refined to ensure visual separation of individual solutions. The visualization obtained using the LDEE method can be used for analyzing the behavior of the population in a population-based metaheuristic, the working of the genetic operators, etc. Examples of visualizations obtained using this method for the four peaks problem, the firefighter problem, the knapsack problem, the quadratic assignment problem, and the traveling salesman problem are presented in this paper.
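For permutation spaces, the metric underlying the Mallows distribution is the Kendall tau distance, and a pairwise distance matrix computed with it can then be handed to a t-SNE implementation that accepts precomputed distances. The helper below is a straightforward quadratic-time version (the name and formulation are just one common way to write it, not taken from the paper):

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    # Number of pairwise disagreements between two permutations of the
    # same elements -- the metric under which the Mallows distribution
    # over permutation space is defined.
    pos_q = {v: i for i, v in enumerate(q)}
    d = 0
    for i, j in combinations(range(len(p)), 2):
        # (p[i], p[j]) appears in this order in p; count the pair if q
        # reverses it.
        if pos_q[p[i]] > pos_q[p[j]]:
            d += 1
    return d
```

The distance ranges from 0 (identical permutations) to n(n−1)/2 (reversed order), so it is easy to normalize before building the matrix fed into the embedding step.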