GPEM 18(1) is available

The first issue of Volume 18 of Genetic Programming and Evolvable Machines is now available for download.

This is a special issue on Genetic Improvement, edited by Justyna Petke, and it also contains three book reviews.

The complete contents are:

“Editorial introduction”
by Lee Spector

“Preface to the Special Issue on Genetic Improvement”
by Justyna Petke

“Genetic improvement of GPU software”
by William B. Langdon, Brian Yee Hong Lam, Marc Modat, Justyna Petke, and Mark Harman

“Trading between quality and non-functional properties of median filter in embedded systems”
by Zdenek Vasicek and Vojtech Mrazek

“Online Genetic Improvement on the Java virtual machine with ECSELR”
by Kwaku Yeboah-Antwi and Benoit Baudry

BOOK REVIEW
“Krzysztof Krawiec: Behavioral program synthesis with genetic programming”
by Raja Muhammad Atif Azad

BOOK REVIEW
“Paul Rendell: Turing machine universality of the Game of Life”
by Moshe Sipper

BOOK REVIEW
“James Keller, Derong Liu, and David Fogel: Fundamentals of computational intelligence: neural networks, fuzzy systems, and evolutionary computation”
by Steven Michael Corns

“Acknowledgment to Reviewers”
by L. Spector

Automatically Evolving Rotation-Invariant Texture Image Descriptors by Genetic Programming

In computer vision, training a model that classifies effectively depends heavily on the extracted features and on the number of training instances. Conventionally, feature detection and extraction are performed by a domain expert who, in many cases, is expensive to employ and hard to find. Image descriptors have therefore emerged to automate these tasks; however, designing an image descriptor still requires domain-expert intervention. Moreover, the majority of machine learning algorithms require a large number of training examples to perform well, yet labeled data is not always available or easy to acquire, and dealing with a large dataset can dramatically slow down the training process. In this paper, we propose a novel genetic programming-based method that automatically synthesises a descriptor using only two training instances per class. The proposed method combines arithmetic operators to evolve a model that takes an image and generates a feature vector. The performance of the proposed method is assessed using six datasets for texture classification with different degrees of rotation, and is compared with seven domain-expert designed descriptors. The results show that the proposed method is robust to rotation and significantly outperforms, or performs comparably to, the baseline methods.
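
The abstract stops short of showing what such an evolved program looks like. Below is a minimal sketch of the general idea (arithmetic operators combining simple window statistics into a fixed-length feature vector); the particular statistics, the expression, and the histogram binning are illustrative assumptions, not the authors' descriptor:

```python
import numpy as np

def window_stats(img, size=3):
    """Min, max, mean, and std over non-overlapping windows: the kind of
    raw terminals an evolved arithmetic expression could consume."""
    h, w = img.shape
    feats = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            win = img[i:i + size, j:j + size]
            feats.append([win.min(), win.max(), win.mean(), win.std()])
    return np.array(feats)

def evolved_descriptor(img):
    """One hypothetical evolved individual: arithmetic operators combine the
    window statistics into a per-window code, and a fixed-size histogram
    maps images of any size to feature vectors of the same length."""
    s = window_stats(img)
    code = (s[:, 1] - s[:, 0]) / (s[:, 3] + 1e-8)  # (max - min) / std per window
    hist, _ = np.histogram(code, bins=16, range=(0.0, 10.0))
    return hist / max(1, hist.sum())

# Hypothetical usage on a random 32x32 grayscale image:
img = np.random.default_rng(0).random((32, 32))
feature_vector = evolved_descriptor(img)   # length-16 descriptor
```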

A Steady-State and Generational Evolutionary Algorithm for Dynamic Multiobjective Optimization

This paper presents a new algorithm, called the steady-state and generational evolutionary algorithm (SGEA), which combines the fast, steady tracking ability of steady-state algorithms with the good diversity preservation of generational algorithms for handling dynamic multiobjective optimization. Unlike most existing approaches for dynamic multiobjective optimization, the proposed algorithm detects environmental changes and responds to them in a steady-state manner. If a change is detected, it reuses a portion of outdated solutions with good distribution and relocates a number of solutions close to the new Pareto front, based on information collected from previous environments and the new environment. This way, the algorithm can quickly adapt to changing environments and is thus expected to provide good tracking ability. The proposed algorithm is tested on a number of two- and three-objective benchmark problems with different dynamic characteristics and difficulties. Experimental results show that the proposed algorithm is very competitive for dynamic multiobjective optimization in comparison with state-of-the-art methods.
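
To make the steady-state change handling concrete, here is a minimal sketch of the detect-and-respond idea as the abstract describes it. The sentinel-based detector, the keep_frac split, and the best-solution shift used for relocation are all assumptions, not the authors' exact SGEA rules:

```python
import numpy as np

def change_detected(sentinel_x, stored_f, objective):
    """Re-evaluate a stored sentinel solution each steady-state iteration;
    a changed objective value signals a new environment."""
    return not np.allclose(objective(sentinel_x), stored_f)

def respond_to_change(pop, prev_best, new_best, keep_frac=0.5, rng=None):
    """Keep a well-distributed fraction of the outdated population as-is
    and relocate the rest by the shift between the best solutions of the
    old and new environments, plus small noise for diversity. The rows of
    `pop` are assumed ordered so the first n_keep are the well-spread ones;
    the real SGEA uses a more refined, history-based relocation rule."""
    rng = rng or np.random.default_rng()
    n_keep = int(keep_frac * len(pop))
    shift = new_best - prev_best                    # estimated optimum movement
    moved = pop[n_keep:] + shift
    moved += rng.normal(scale=0.01, size=moved.shape)
    return np.vstack([pop[:n_keep], moved])
```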

The Effect of Information Utilization: Introducing a Novel Guiding Spark in the Fireworks Algorithm

The fireworks algorithm (FWA) is a competitive swarm intelligence algorithm that has proven useful in many applications. In this paper, a novel guiding spark (GS) is introduced to further improve FWA performance by enhancing its information utilization. The idea is to use the objective-function information acquired by explosion sparks to construct a guiding vector (GV) with a promising direction and adaptive length, and to generate an elite solution, the GS, by adding the GV to the firework's position. The FWA with GS is called the guided FWA (GFWA). Experimental results show that the GS contributes greatly to both exploration and exploitation in the GFWA. The GFWA outperforms previous versions of the FWA and other swarm and evolutionary algorithms on a large variety of test functions, and it is also a useful method for large-scale optimization. The principle of the GS is simple but effective and can easily be transplanted to other population-based algorithms.
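
The guiding-vector construction is concrete enough to sketch. The version below builds the GV from the difference between the centroids of the best and worst explosion sparks, matching the abstract's description; the sigma fraction and the minimization convention are assumptions:

```python
import numpy as np

def guiding_spark(firework, sparks, fitnesses, sigma=0.2):
    """Construct a guiding spark (GS) from a firework's explosion sparks.

    The guiding vector (GV) points from the centroid of the worst sparks
    toward the centroid of the best ones, so its direction is promising and
    its length adapts to how spread out the sparks are. The fraction `sigma`
    is an assumed parameter; see the paper for the exact construction.
    """
    order = np.argsort(fitnesses)            # ascending: best first (minimization)
    n = max(1, int(sigma * len(sparks)))
    best_centroid = sparks[order[:n]].mean(axis=0)
    worst_centroid = sparks[order[-n:]].mean(axis=0)
    gv = best_centroid - worst_centroid      # promising direction, adaptive length
    return firework + gv                     # the guiding spark

# Hypothetical usage on a 10-dimensional sphere function:
rng = np.random.default_rng(1)
fw = rng.normal(size=10)
sparks = fw + rng.normal(scale=0.5, size=(50, 10))
fitnesses = (sparks ** 2).sum(axis=1)
gs = guiding_spark(fw, sparks, fitnesses)
```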

Efficient Use of Partially Converged Simulations in Evolutionary Optimization

For many real-world optimization problems, evaluating a solution involves running a computationally expensive simulation model. This makes it challenging to use evolutionary algorithms, which usually have to evaluate thousands of solutions before converging. On the other hand, in many cases even a prematurely stopped run of the simulation may serve as a cheaper, albeit less accurate (low-fidelity), estimate of the true fitness value. For evolutionary optimization, this opens up the opportunity to decide on the simulation run length for each individual. In this paper, we propose a mechanism that is capable of learning the appropriate simulation run length for each solution. To test our approach, we propose two new benchmark problems: a simple artificial benchmark function and a benchmark based on a computational fluid dynamics (CFD) simulation scenario for designing a toy submarine. As we demonstrate, our proposed algorithm finds good solutions much more quickly than always using the full CFD simulation, and it provides much better solution quality than a strategy of progressively increasing the fidelity level over the course of the optimization.
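
A minimal sketch of the per-individual run-length decision follows. The stand-in partial_fitness function, the doubling schedule, and the 1.5x cutoff are all illustrative assumptions; the paper's actual mechanism learns the appropriate run length rather than applying a fixed rule:

```python
import numpy as np

def partial_fitness(x, steps, full_steps=200, rng=None):
    """Stand-in for a partially converged simulation: a longer run gives a
    less noisy estimate of the true value. Purely illustrative; a real CFD
    solver would sit behind this interface."""
    rng = rng or np.random.default_rng()
    true_value = float((x ** 2).sum())
    noise_scale = 1.0 - steps / full_steps
    if noise_scale > 0:
        return true_value + rng.normal(scale=noise_scale)
    return true_value

def evaluate_adaptively(x, incumbent, start_steps=25, full_steps=200):
    """Start with a short, cheap run and keep buying a longer one only while
    the low-fidelity estimate suggests the solution might still beat the
    incumbent; clearly hopeless solutions are cut off early (minimizing)."""
    steps = start_steps
    while steps < full_steps:
        estimate = partial_fitness(x, steps, full_steps)
        if estimate > 1.5 * incumbent:      # clearly worse: stop early
            return estimate, steps
        steps = min(2 * steps, full_steps)  # promising: extend the simulation
    return partial_fitness(x, full_steps, full_steps), full_steps
```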

Artificial Immune System With Local Feature Selection for Short-Term Load Forecasting

In this paper, a new forecasting model based on an artificial immune system (AIS) is proposed. The model is used for short-term electrical load forecasting as an example of forecasting time series with multiple seasonal cycles. The AIS learns to recognize antigens (AGs) representing two fragments of the time series: 1) the fragment preceding the forecast (the input vector) and 2) the forecasted fragment (the output vector). Antibodies (ABs), as recognition units, recognize AGs by selected features of their input vectors and learn their output vectors. In the test procedure, a new AG with only an input vector is recognized by some ABs, and its output vector is reconstructed from the activated ABs. A unique feature of the proposed AIS is the embedded property of local feature selection: in the clonal selection process, each AB learns its optimal subset of features (a paratope) to improve its recognition and prediction abilities. In simulation studies, the proposed model was tested on real power system data and compared with other AIS-based forecasting models as well as neural networks, autoregressive integrated moving average, and exponential smoothing. The obtained results confirm the good performance of the proposed model.
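
Here is a minimal sketch of the test procedure as the abstract describes it: a new antigen carries only an input vector, antibodies match it on their own selected features, and the forecast is reconstructed from the activated antibodies. The binary feature mask, the distance threshold, and the output averaging are simplifying assumptions, not the paper's exact rules:

```python
import numpy as np

def is_activated(antigen_in, ab_in, ab_mask, threshold=0.5):
    """An antibody recognizes an antigen using only its own selected
    features (its paratope), modeled here as a binary mask."""
    sel = ab_mask.astype(bool)
    if not sel.any():
        return False
    # Distance on the selected features only, normalized by their count.
    dist = np.linalg.norm(antigen_in[sel] - ab_in[sel]) / np.sqrt(sel.sum())
    return dist < threshold

def forecast(antigen_in, abs_in, abs_out, abs_masks):
    """Reconstruct the output vector (the forecast) for a new antigen that
    carries only an input vector, by averaging the learned output vectors
    of all activated antibodies."""
    activated = [out for a_in, out, mask in zip(abs_in, abs_out, abs_masks)
                 if is_activated(antigen_in, a_in, mask)]
    if not activated:
        return np.mean(abs_out, axis=0)    # fall back to the population mean
    return np.mean(activated, axis=0)
```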