IEEE Transactions on Evolutionary Computation information for authors

Automated Map Generation for the Physical Traveling Salesman Problem

This paper presents a method for generating complex problems that allow multiple nonobvious solutions for the physical traveling salesman problem (PTSP). PTSP is a single-player game adaptation of the classical traveling salesman problem that makes use of a simple physics model: the player has to visit a number of waypoints as quickly as possible by navigating a ship in real time across an obstacle-filled 2-D map. The difficulty of this game depends on the distribution of waypoints and obstacles across the 2-D plane. Due to the physics of the game, the shortest route is not necessarily the fastest, as the ship’s momentum makes it difficult to turn sharply at high speed. This paper proposes an evolutionary approach to obtaining maps where the optimal solution is not immediately obvious. In particular, any optimal route for these maps should differ distinctively from: 1) the optimal distance-based TSP route and 2) the route that corresponds to always approaching the nearest waypoint first. To achieve this, the covariance matrix adaptation evolution strategy (CMA-ES) is employed: maps, indirectly represented as vectors of real numbers, are evolved to maximize the difference in performance of a game-playing agent following two or more different routes. The results presented in this paper show that CMA-ES is able to generate maps that fulfil the desired conditions.
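
As a rough illustration of the approach described above, the sketch below evolves waypoint layouts with CMA-ES (via the Python cma package) so that both baseline routes, nearest-waypoint-first and shortest-distance, are clearly slower than the best route found. The PTSP physics is replaced by a crude turn-penalty proxy, and the map encoding, fixed start position, and fitness definition are illustrative assumptions rather than the paper's actual setup.

```python
# Hedged sketch: evolving PTSP-style maps with CMA-ES (pip package "cma").
# The PTSP simulator is replaced by a proxy cost: route length plus a
# penalty per radian turned, which only illustrates the general idea.
import itertools
import math
import numpy as np
import cma

N_WAYPOINTS = 6          # small enough to enumerate all routes exactly

def decode(x):
    """Map a real vector to waypoint coordinates inside a 100x100 map."""
    return np.clip(np.asarray(x).reshape(N_WAYPOINTS, 2), 0.0, 1.0) * 100.0

def route_cost(pts, order, turn_penalty=20.0):
    """Proxy for 'time': Euclidean length plus a penalty for sharp turns."""
    cost, prev_heading, pos = 0.0, None, np.array([0.0, 0.0])  # assumed start
    for i in order:
        seg = pts[i] - pos
        cost += np.linalg.norm(seg)
        heading = math.atan2(seg[1], seg[0])
        if prev_heading is not None:
            dh = abs((heading - prev_heading + math.pi) % (2 * math.pi) - math.pi)
            cost += turn_penalty * dh
        prev_heading, pos = heading, pts[i]
    return cost

def nearest_first_order(pts):
    """Greedy 'always approach the nearest remaining waypoint' route."""
    order, pos, left = [], np.array([0.0, 0.0]), list(range(len(pts)))
    while left:
        nxt = min(left, key=lambda i: np.linalg.norm(pts[i] - pos))
        order.append(nxt); left.remove(nxt); pos = pts[nxt]
    return order

def shortest_distance_order(pts):
    """Distance-optimal route by brute force (fine for six waypoints)."""
    return min(itertools.permutations(range(len(pts))),
               key=lambda o: route_cost(pts, o, turn_penalty=0.0))

def fitness(x):
    """CMA-ES minimizes, so return minus the gap between the better of the
    two baseline routes and the best turn-aware route over all permutations."""
    pts = decode(x)
    best = min(route_cost(pts, o) for o in itertools.permutations(range(len(pts))))
    gap = min(route_cost(pts, nearest_first_order(pts)),
              route_cost(pts, shortest_distance_order(pts))) - best
    return -gap

es = cma.CMAEvolutionStrategy(np.random.rand(2 * N_WAYPOINTS), 0.3, {'maxiter': 60})
while not es.stop():
    xs = es.ask()
    es.tell(xs, [fitness(x) for x in xs])
print(decode(es.result.xbest))
```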

Open Access

Differential Evolution With Dynamic Parameters Selection for Optimization Problems

Over the last few decades, a number of differential evolution (DE) algorithms have been proposed with excellent performance on mathematical benchmarks. However, like any other optimization algorithm, the success of DE is highly dependent on its search operators and control parameters, which are often decided a priori. The selection of the parameter values is itself a combinatorial optimization problem. Although a considerable number of investigations have been conducted with regard to parameter selection, it is known to be a tedious task.
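
For readers unfamiliar with DE, the minimal sketch below shows the control parameters in question, the scale factor F and the crossover rate CR, in a standard DE/rand/1/bin loop. Drawing (F, CR) from a small candidate pool each generation is only a generic stand-in for dynamic parameter selection, not the scheme proposed in the paper.

```python
# Minimal DE/rand/1/bin on the sphere function, illustrating F and CR.
import numpy as np

rng = np.random.default_rng(0)
DIM, NP, GENS = 10, 40, 300
pool = [(0.5, 0.9), (0.8, 0.9), (0.5, 0.3), (1.0, 0.1)]   # candidate (F, CR) pairs

def sphere(x):
    return float(np.sum(x * x))

pop = rng.uniform(-5.0, 5.0, size=(NP, DIM))
fit = np.array([sphere(x) for x in pop])

for g in range(GENS):
    F, CR = pool[rng.integers(len(pool))]          # "dynamic" parameter choice (stand-in)
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])    # DE/rand/1 mutation
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True            # guarantee one gene from the mutant
        trial = np.where(cross, mutant, pop[i])    # binomial crossover
        f_trial = sphere(trial)
        if f_trial <= fit[i]:                      # greedy selection
            pop[i], fit[i] = trial, f_trial

print("best fitness:", fit.min())
```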

Genetic Algorithms for Evolving Computer Chess Programs

This paper demonstrates the use of genetic algorithms for evolving: 1) a grandmaster-level evaluation function and 2) a search mechanism for a chess program, the parameter values of which are initialized randomly. The evaluation function of the program is evolved by learning from databases of (human) grandmaster games. At first, the organisms are evolved to mimic the behavior of human grandmasters, and then these organisms are further improved upon by means of coevolution. The search mechanism is evolved by learning from tactical test suites. Our results show that the evolved program outperforms a two-time world computer chess champion and is on par with the other leading computer chess programs.
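
The sketch below illustrates the first, mimicry stage in a heavily simplified form: a generic GA evolves the weights of a linear evaluation function so that it ranks the move actually played highest among the candidate moves. Real chess positions and the paper's specific GA configuration are replaced by synthetic feature vectors drawn from a hidden target weighting.

```python
# Hedged sketch of the "mimic the grandmaster" stage with synthetic data.
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES, N_POSITIONS, N_MOVES = 8, 200, 10
POP, GENS = 60, 150

target = rng.normal(size=N_FEATURES)                     # hidden "grandmaster taste"
candidates = rng.normal(size=(N_POSITIONS, N_MOVES, N_FEATURES))
played = np.argmax(candidates @ target, axis=1)          # move the "GM" chose

def fitness(w):
    """Fraction of positions where the evaluation picks the GM move."""
    return float(np.mean(np.argmax(candidates @ w, axis=1) == played))

pop = rng.normal(size=(POP, N_FEATURES))
for g in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # truncation selection
    children = []
    while len(children) < POP:
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_FEATURES) < 0.5              # uniform crossover
        child = np.where(mask, p1, p2) + rng.normal(scale=0.1, size=N_FEATURES)
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("agreement with GM moves:", fitness(best))
```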

Reevaluating Immune-Inspired Hypermutations Using the Fixed Budget Perspective

Different studies have theoretically analyzed the performance of artificial immune systems in the context of optimization. It has been noted that, in comparison with evolutionary algorithms and local search, hypermutations tend to be inferior on typical example functions. These studies have used the expected optimization time as the performance criterion and cannot explain why artificial immune systems are popular in spite of these proven drawbacks. Recently, a different perspective for theoretical analysis has been introduced, concentrating on the expected performance within a fixed time frame instead of the expected time needed for optimization. Using this perspective, we reevaluate the performance of somatic contiguous hypermutations and inverse fitness-proportional hypermutations in comparison with random local search on one well-known example function on which random local search is known to be efficient and, with respect to the expected optimization time, much more efficient than these hypermutations. We prove that, depending on the choice of the initial search point, hypermutations can outperform random local search by far within a given time frame. This insight helps to explain the success of seemingly inefficient mutation operators in practice. Moreover, we demonstrate how one can benefit from these theoretical insights by designing more efficient hybrid search heuristics.
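
The fixed-budget perspective can be illustrated with the toy comparison below: random local search (a single bit flip) versus a somatic contiguous hypermutation (flipping a random contiguous block) on OneMax, recording the best fitness reached within a fixed number of evaluations. The benchmark function, acceptance rule, and operator details are illustrative choices, not the exact setting analyzed in the paper.

```python
# Fixed-budget comparison of RLS and a contiguous hypermutation on OneMax.
import numpy as np

rng = np.random.default_rng(2)
N, BUDGET = 100, 2000

def onemax(x):
    return int(x.sum())

def rls_step(x):
    y = x.copy()
    y[rng.integers(N)] ^= 1                     # flip exactly one bit
    return y

def contiguous_hypermutation(x):
    y = x.copy()
    start = rng.integers(N)
    length = rng.integers(1, N + 1)             # random block length
    idx = (start + np.arange(length)) % N       # wrap around the bit string
    y[idx] ^= 1
    return y

def run(operator, x0):
    """Track the best-so-far fitness over a fixed evaluation budget."""
    x, best, trace = x0.copy(), onemax(x0), []
    for t in range(BUDGET):
        y = operator(x)
        if onemax(y) >= onemax(x):              # accept if not worse
            x = y
        best = max(best, onemax(x))
        trace.append(best)
    return trace

x0 = rng.integers(0, 2, size=N)                 # the initial search point matters
print("RLS   best within budget:", run(rls_step, x0)[-1])
print("Hyper best within budget:", run(contiguous_hypermutation, x0)[-1])
```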

A Knowledge-Based Evolutionary Multiobjective Approach for Stochastic Extended Resource Investment Project Scheduling Problems

Planning problems, such as mission capability planning in defense, can traditionally be modeled as a resource investment project scheduling problem (RIPSP) with unconstrained resources and cost. This formulation is too abstract for some real-world applications, in which the durations of tasks depend on the allocated resources. In this paper, we first propose a new version of RIPSPs, namely extended RIPSPs (ERIPSPs), in which the durations of tasks are a function of the allocated resources. Moreover, we introduce a resource proportion coefficient to reflect the degree to which the various resources contribute to activities. Since, in practice, the circumstances under which a plan is executed are stochastic, we present a stochastic version of ERIPSPs, namely stochastic extended RIPSPs (SERIPSPs). To solve SERIPSPs, we first use scenarios to capture the space of possibilities (i.e., the stochastic elements of the problem). We focus on three sources of uncertainty: duration perturbation, resource breakdown, and precedence alteration. We propose a robustness measure for the solutions of SERIPSPs when these uncertainties interact. We then formulate an SERIPSP as a multiobjective optimization model with three objectives: makespan, cost, and robustness. A knowledge-based multiobjective evolutionary algorithm (K-MOEA) is proposed to solve the problem. The mechanism of K-MOEA is simple and time efficient, and the algorithm has two main characteristics. First, useful information (knowledge) contained in the approximated nondominated solutions obtained so far is extracted during the evolutionary process. Second, the extracted knowledge is utilized to periodically update the population and guide the subsequent search. The approach is illustrated using a synthetic case study, and randomly generated benchmark instances are used to analyze the performance of the proposed K-MOEA. The experimental results illustrate the effectiveness of the proposed algorithm and its potential for solving SERIPSPs.
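
The scenario-based evaluation described above can be sketched as follows: a candidate resource allocation is scored on makespan, cost, and a robustness proxy by sampling duration-perturbation scenarios. The project model (a serial chain of activities with a single resource type) and the robustness measure (negative standard deviation of scenario makespans) are illustrative simplifications, and the K-MOEA itself is not shown.

```python
# Hedged sketch of a scenario-based three-objective evaluation.
import numpy as np

rng = np.random.default_rng(3)
N_ACT, N_SCENARIOS = 8, 30
base_duration = rng.uniform(2.0, 10.0, size=N_ACT)
resource_cost = 5.0                               # assumed cost per resource unit

def evaluate(resource_alloc):
    """resource_alloc[i] >= 1 shortens activity i; returns three objectives."""
    makespans = []
    for _ in range(N_SCENARIOS):
        noise = rng.normal(1.0, 0.15, size=N_ACT)           # duration perturbation
        durations = base_duration * noise / resource_alloc  # more resources -> faster
        makespans.append(durations.sum())                   # serial project chain
    makespans = np.array(makespans)
    makespan = makespans.mean()
    cost = resource_cost * resource_alloc.sum()
    robustness = -makespans.std()                 # less spread across scenarios is better
    return makespan, cost, robustness

alloc = rng.integers(1, 4, size=N_ACT).astype(float)
print(evaluate(alloc))
```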

Asymptotic Properties of a Generalized Cross-Entropy Optimization Algorithm

The discrete cross-entropy optimization algorithm iteratively samples solutions according to a probability density on the solution space. The density is adapted to the good solutions observed in the present sample before producing the next sample. The adaptation is controlled by a so-called smoothing parameter. We generalize this model by introducing a flexible concept of feasibility and desirability into the sampling process. In this way, our model covers several other optimization procedures, in particular the ant-based algorithms. The focus of this paper is on some theoretical properties of these algorithms. We examine the first hitting time $\tau$ of an optimal solution and give conditions on the smoothing parameter for $\tau$ to be finite with probability one. For a simple test case we show that runtime can be polynomially bounded in the problem size with a probability converging to 1. We then investigate the convergence of the underlying density and of the sampling process. We show, in particular, that a constant smoothing parameter, as it is often used, makes the sample process converge in finite time, freezing the optimization at a single solution that need not be optimal. Moreover, we define a smoothing sequence that makes the density converge without freezing the sample process and that still guarantees the reachability of optimal solutions in finite time. This settles an open question from the literature.
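
A minimal sketch of the discrete cross-entropy method on a bit-string problem is given below, showing the role of the smoothing parameter in the density update. With a constant smoothing parameter the per-bit probabilities can lock in at 0 or 1, which corresponds to the freezing behavior discussed above; the objective, sample sizes, and the decreasing alternative mentioned in the comment are illustrative choices only.

```python
# Discrete cross-entropy method on a bit string, with smoothing parameter alpha.
import numpy as np

rng = np.random.default_rng(4)
N, SAMPLE, ELITE, ITERS = 30, 100, 10, 200
alpha = 0.7                                  # constant smoothing parameter

p = np.full(N, 0.5)                          # sampling density: per-bit probabilities
for t in range(ITERS):
    X = (rng.random((SAMPLE, N)) < p).astype(int)   # sample solutions from the density
    scores = X.sum(axis=1)                          # OneMax as a stand-in objective
    elite = X[np.argsort(scores)[-ELITE:]]          # best solutions of this sample
    p = (1 - alpha) * p + alpha * elite.mean(axis=0)   # smoothed density update
    # an alternative such as alpha_t = 1.0 / (t + 2) slows down freezing

print("final density extremes:", p.min(), p.max())
```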

Table of contents

Convergence of Hypervolume-Based Archiving Algorithms

Multiobjective evolutionary algorithms typically maintain a set of solutions. A crucial part of these algorithms is the archiving, which decides what solutions to keep. A $(\mu+\lambda)$-archiving algorithm defines how to choose, in each generation, $\mu$ children from the $\mu$ parents and $\lambda$ offspring together. We study mathematically the convergence behavior of hypervolume-based archiving algorithms. We distinguish two cases for the offspring generation. A best-case view leads to a study of the effectiveness of archiving algorithms. It was known that all $(\mu+1)$-archiving algorithms are ineffective, which means that a set with maximum hypervolume is not necessarily reached. We prove that for $\lambda < \mu$, all archiving algorithms are ineffective. We also present upper and lower bounds for the achievable hypervolume for different classes of archiving algorithms. On the other hand, a worst-case view on the offspring generation leads to a study of the competitive ratio of archiving algorithms. This measures how much smaller hypervolumes are achieved due to not knowing the future offspring in advance. We present upper and lower bounds on the competitive ratio of different archiving algorithms and present the first known computationally efficient archiving algorithm with a constant competitive ratio.
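
One common way to realize such an archiving rule, sketched below under the assumption of a biobjective minimization problem with a fixed reference point, is to greedily discard the point with the smallest hypervolume contribution until $\mu$ points remain. This greedy rule is just one representative archiving algorithm, not necessarily one of those analyzed in the paper.

```python
# One (mu+lambda) hypervolume-based archiving step for biobjective minimization.
import numpy as np

REF = np.array([11.0, 11.0])                       # reference point (assumed)

def hypervolume_2d(points):
    """Area dominated by the front with respect to REF (minimization)."""
    pts = sorted(points, key=lambda p: p[0])       # sweep by increasing first objective
    hv, best_y = 0.0, REF[1]
    for x, y in pts:
        if y < best_y:                             # skip dominated points
            hv += (REF[0] - x) * (best_y - y)
            best_y = y
    return hv

def archive(parents, offspring, mu):
    """Keep mu points, greedily dropping the smallest hypervolume contributor."""
    pool = [tuple(p) for p in list(parents) + list(offspring)]
    while len(pool) > mu:
        contributions = [hypervolume_2d(pool) - hypervolume_2d(pool[:i] + pool[i+1:])
                         for i in range(len(pool))]
        pool.pop(int(np.argmin(contributions)))
    return pool

rng = np.random.default_rng(5)
parents = rng.uniform(0, 10, size=(4, 2))          # mu = 4 parents
offspring = rng.uniform(0, 10, size=(2, 2))        # lambda = 2 offspring
print(archive(parents, offspring, mu=4))
```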
