Factored Evolutionary Algorithms

Factored evolutionary algorithms (FEAs) are a new class of evolutionary search-based optimization algorithms that have been applied successfully to a variety of problems, such as training neural networks and performing abductive inference in graphical models. An FEA is unique in that it factors the objective function by creating overlapping subpopulations, each of which optimizes over a subset of the function's variables. In this paper, we give a formal definition of FEA algorithms and present empirical results related to their performance. One consideration in using an FEA is determining the appropriate factor architecture, which determines the set of variables each factor will optimize. For this reason, we present the results of experiments comparing the performance of different factor architectures on several standard applications for evolutionary algorithms. Additionally, we show that FEA's performance is not restricted by the underlying optimization algorithm: we create FEA versions of hill climbing, particle swarm optimization, genetic algorithms, and differential evolution and compare their performance to that of their single-population and cooperative coevolutionary counterparts.
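The factoring idea described in the abstract can be sketched in a few lines. The code below is a minimal illustration, not the paper's algorithm: it assumes a sphere objective, a simple accept-if-better local search as the base optimizer, and an overlapping "neighboring pairs" factor architecture; the compete step that reassembles the global solution variable by variable follows the structure the abstract describes.

```python
import random

# Minimal FEA-style sketch (illustrative, not the paper's algorithm).
# Assumptions: sphere objective, accept-if-better local search as the
# base optimizer, and overlapping factors {0,1}, {1,2}, {2,3}, ...

def sphere(x):
    return sum(xi * xi for xi in x)

def optimize_factor(f, x, idx, iters=50, step=0.1):
    """Optimize only the variables in `idx`, holding the rest of x fixed."""
    best = list(x)
    for _ in range(iters):
        cand = list(best)
        for i in idx:
            cand[i] += random.uniform(-step, step)
        if f(cand) < f(best):
            best = cand
    return [best[i] for i in idx]

def fea(f, dim=4, outer_iters=20, seed=0):
    random.seed(seed)
    factors = [(i, i + 1) for i in range(dim - 1)]  # overlapping subsets
    global_best = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(outer_iters):
        # Solve step: each factor optimizes its variables in the context
        # of the current global solution.
        local = {idx: optimize_factor(f, global_best, idx) for idx in factors}
        # Compete step: for each variable, keep whichever factor's value
        # most improves the full objective when swapped in.
        for i in range(dim):
            for idx, vals in local.items():
                if i in idx:
                    cand = list(global_best)
                    cand[i] = vals[idx.index(i)]
                    if f(cand) < f(global_best):
                        global_best = cand
    return global_best
```

Because the factors overlap, each shared variable receives candidate values from two subpopulations, and the compete step arbitrates between them using the full objective.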

Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes

Recently, a number of high-performance many-objective evolutionary algorithms with systematically generated weight vectors have been proposed in the literature. Those algorithms often show surprisingly good performance on the widely used DTLZ and WFG test problems. The performance of those algorithms has continued to be improved. The aim of this paper is to show our concern that such a performance improvement race may lead to the overspecialization of developed algorithms for the frequently used many-objective test problems. In this paper, we first explain the DTLZ and WFG test problems. Next, we explain many-objective evolutionary algorithms characterized by the use of systematically generated weight vectors. Then we discuss the relation between the features of the test problems and the search mechanisms of weight vector-based algorithms such as the multiobjective evolutionary algorithm based on decomposition (MOEA/D), nondominated sorting genetic algorithm III (NSGA-III), MOEA/dominance and decomposition (MOEA/DD), and the θ-dominance based evolutionary algorithm (θ-DEA). Through computational experiments, we demonstrate that a slight change in the problem formulations of DTLZ and WFG deteriorates the performance of those algorithms. After explaining the reason for the performance deterioration, we discuss the necessity of more general test problems and more flexible algorithms.
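For readers unfamiliar with the systematically generated weight vectors these algorithms share, the sketch below shows the standard simplex-lattice design together with the Tchebycheff scalarizing function used by MOEA/D. Function names and parameters are illustrative assumptions, not taken from any cited implementation.

```python
from itertools import combinations

# Illustrative sketch: simplex-lattice weight vectors (the systematic
# design used by MOEA/D, NSGA-III, etc.) and Tchebycheff scalarization.

def simplex_lattice(m, h):
    """All m-objective weight vectors with components i/h summing to 1."""
    vectors = []
    # Stars and bars: place m - 1 "bars" among h + m - 1 slots; the gap
    # sizes between bars are the integer weight components.
    for bars in combinations(range(h + m - 1), m - 1):
        counts, prev = [], -1
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(h + m - 2 - prev)
        vectors.append(tuple(c / h for c in counts))
    return vectors

def tchebycheff(fvals, weights, ideal):
    """Tchebycheff scalarization: weighted distance to the ideal point."""
    return max(w * abs(fv - z) for fv, w, z in zip(fvals, weights, ideal))
```

With m = 3 objectives and h = 4 divisions this yields C(6, 2) = 15 uniformly spread vectors on the simplex — a regular layout that matches the triangular DTLZ/WFG Pareto fronts, which is central to the overspecialization concern the paper raises.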

Heterogeneous Cooperative Co-Evolution Memetic Differential Evolution Algorithm for Big Data Optimization Problems

Evolutionary algorithms (EAs) have recently been suggested as candidates for solving big data optimization problems that involve a very large number of variables and need to be analyzed in a short period of time. However, EAs face a scalability issue when dealing with big data problems. Moreover, the performance of EAs critically hinges on the utilized parameter values and operator types; thus, it is impossible to design a single EA that outperforms all others on every problem instance. To address these challenges, we propose a heterogeneous framework that integrates a cooperative co-evolution method with various types of memetic algorithms. We use the cooperative co-evolution method to split the big problem into subproblems in order to increase the efficiency of the solving process. The subproblems are then solved using various heterogeneous memetic algorithms. The proposed heterogeneous framework adaptively assigns, for each solution, different operators, parameter values, and a local search algorithm to efficiently explore and exploit the search space of the given problem instance. The performance of the proposed algorithm is assessed using the Big Data 2015 competition benchmark problems, which contain data with and without noise. Experimental results demonstrate that the proposed algorithm performs better with the cooperative co-evolution method than without it. Furthermore, it obtained results that are competitive with, if not better than, those of other algorithms on all tested instances, while requiring less computational time.
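The decomposition step the abstract describes can be sketched as below. This is a minimal illustration under stated assumptions: random grouping into equal-sized subcomponents and a simple single-variable accept-if-better search as a stand-in for the paper's heterogeneous memetic algorithms; the objective is a generic ill-conditioned benchmark, not a competition problem.

```python
import random

# Sketch of cooperative co-evolution: split the variables into
# subcomponents, then optimize each in round-robin fashion within a
# shared context vector that supplies values for all frozen variables.

def elliptic(x):
    # Ill-conditioned separable benchmark, a common large-scale stand-in.
    n = len(x)
    return sum((10 ** (6 * i / (n - 1))) * xi * xi for i, xi in enumerate(x))

def cc_optimize(f, dim=20, groups=4, cycles=30, seed=1):
    random.seed(seed)
    # Random grouping: shuffle indices, then split into equal subcomponents.
    idx = list(range(dim))
    random.shuffle(idx)
    size = dim // groups
    subcomponents = [idx[g * size:(g + 1) * size] for g in range(groups)]
    context = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(cycles):
        for comp in subcomponents:  # round-robin over subproblems
            for _ in range(40):
                # Perturb one variable of this subcomponent; accept the
                # move only if the fully assembled solution improves.
                i = random.choice(comp)
                cand = list(context)
                cand[i] = context[i] + random.gauss(0, 0.3)
                if f(cand) < f(context):
                    context = cand
    return context
```

Each subcomponent only ever evaluates full solutions assembled from the shared context, which is what allows the subproblems to be handed to different (heterogeneous) optimizers in the framework the abstract describes.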

Table of contents

Presents the table of contents for this issue of the publication.

EvoStar panel on open-access publishing

Of possible interest to GPEM-affiliated folks who will be attending EvoStar:

Jacqueline Heinerman has organized a lunchtime panel discussion session on Wednesday, on the topic of open access publishing, which will be sponsored by NWO, Netherlands Organisation for Scientific Research. Panel speakers will include Ronan Nugent from Springer, Emma Hart (editor of ECJ), and a representative of the VU university library.

h/t Jennifer Willies and James McDermott for the notice.

GPEM 18(1) is available

The first issue of Volume 18 of Genetic Programming and Evolvable Machines is now available for download.

This is a special issue on Genetic Improvement, edited by Justyna Petke, and it also contains three book reviews.

The complete contents are:

“Editorial introduction”
by Lee Spector

“Preface to the Special Issue on Genetic Improvement”
by Justyna Petke

“Genetic improvement of GPU software”
by William B. Langdon, Brian Yee Hong Lam, Marc Modat, Justyna Petke, and Mark Harman

“Trading between quality and non-functional properties of median filter in embedded systems”
by Zdenek Vasicek and Vojtech Mrazek

“Online Genetic Improvement on the java virtual machine with ECSELR”
by Kwaku Yeboah-Antwi and Benoit Baudry

BOOK REVIEW
“Krzysztof Krawiec: Behavioral program synthesis with genetic programming”
by Raja Muhammad Atif Azad

BOOK REVIEW
“Paul Rendell: Turing machine universality of the Game of Life”
by Moshe Sipper

BOOK REVIEW
“James Keller, Derong Liu, and David Fogel: Fundamentals of computational intelligence: neural networks, fuzzy systems, and evolutionary computation”
by Steven Michael Corns

“Acknowledgment to Reviewers”
by L. Spector