GPEM 23(1) is now available

The first issue of Volume 23 of Genetic Programming and Evolvable Machines is now available for download.

It contains:

Editorial introduction
by Lee Spector

Acknowledgment to reviewers (2021)
by Lee Spector

Inference of time series components by online co-evolution
by Danil Koryakin, Sebastian Otte, Martin V. Butz

Constant optimization and feature standardization in multiobjective genetic programming
by Peter Rockett

Genetic programming convergence
by W. B. Langdon

Automatic generation of regular expressions for the Regex Golf challenge using a local search algorithm
by André Almeida Farzat, Márcio Oliveira Barros

Generating networks of genetic processors
by Marcelino Campos, José M. Sempere

BOOK REVIEW
Robert Elliott Smith: Rage Inside the Machine—the prejudice of algorithms, and how to stop the internet making bigots of us all
by Walid Magdy

BOOK REVIEW
Leanne Luce: Artificial Intelligence for Fashion: How AI Is Revolutionizing the Fashion Industry, Apress 2019, ISBN 978-1-4842-3930-8
by Grace Buttler

Table of Contents

Presents the table of contents for this issue of the publication.

Solving Multitask Optimization Problems With Adaptive Knowledge Transfer via Anomaly Detection

Evolutionary multitask optimization (EMTO) has recently attracted widespread attention in the evolutionary computation community. It solves two or more tasks simultaneously to improve the convergence characteristics of the tasks relative to optimizing each in isolation. Effective knowledge is transferred between tasks by taking advantage of the parallelism of population-based search. Without any prior knowledge about the tasks, adaptively transferring effective knowledge between tasks while reducing the impact of negative transfer is a challenging problem in EMTO, and these two issues are rarely studied together in the existing literature. Moreover, in complex many-task environments, the potential relationships among individuals from the highly diverse populations associated with the tasks directly determine the effectiveness of cross-task knowledge transfer. With these considerations in mind, we propose a multitask evolutionary algorithm based on anomaly detection (MTEA-AD). Specifically, each task is assigned a population and an anomaly detection model. Each anomaly detection model learns online the relationships among individuals between the current task and the other tasks. Individuals that may carry negative knowledge are identified as outliers, while candidate transferred individuals identified by the anomaly detection model, which may carry knowledge common to the current task and other tasks, are selected to assist the current task. Furthermore, to adaptively control the degree of knowledge transfer, the successfully transferred individuals that survive to the next generation through elitism are used to update the anomaly detection parameter. The fair competition between offspring and candidate transferred individuals effectively reduces the risk of negative transfer. Finally, empirical studies on a series of synthetic benchmarks and a practical case study verify the effectiveness of MTEA-AD. The experimental results demonstrate that our proposal can adaptively adjust the degree of knowledge transfer through the anomaly detection model to achieve highly competitive performance compared to several state-of-the-art EMTO methods.
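As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below filters candidate transferred individuals with a simple distance-based outlier test and adapts the detection threshold from the survival rate of past transfers. All names (`filter_transfers`, `adapt_theta`) and the centroid-distance detector itself are hypothetical stand-ins for the paper's anomaly detection model.

```python
import statistics

def mean_vector(pop):
    """Component-wise mean (centroid) of a population of equal-length vectors."""
    dim = len(pop[0])
    return [statistics.fmean(x[i] for x in pop) for i in range(dim)]

def distance(a, b):
    """Euclidean distance between two vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def filter_transfers(current_pop, other_pop, theta):
    """Keep individuals from other_pop that are NOT anomalous with respect to
    current_pop: their distance to the current population's centroid must be
    within theta times the population's own mean spread."""
    c = mean_vector(current_pop)
    spread = statistics.fmean(distance(x, c) for x in current_pop)
    return [x for x in other_pop if distance(x, c) <= theta * spread]

def adapt_theta(theta, n_survived, n_transferred, lo=0.5, hi=3.0):
    """Loosen the detector when transferred individuals survive elitism,
    tighten it otherwise (a stand-in for the paper's parameter update)."""
    rate = n_survived / n_transferred if n_transferred else 0.0
    step = 0.1 if rate > 0.5 else -0.1
    return min(hi, max(lo, theta + step))
```

A distant candidate such as `[10, 10]` is flagged as a likely carrier of negative knowledge and dropped, while candidates near the current population pass through and compete fairly with the offspring.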

TechRxiv: Share Your Preprint Research with the World!

Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

Evolutionary Multitask Optimization With Adaptive Knowledge Transfer

Evolutionary multitask optimization (EMTO) studies how to simultaneously solve multiple optimization tasks via evolutionary algorithms (EAs) while using the useful knowledge acquired from solving one task to assist in solving the others, aiming to improve the overall performance on each individual task. Recent years have seen a large body of EMTO work based on different kinds of EAs, studying one or more aspects of how to represent, extract, transfer, and reuse knowledge. A key challenge in EMTO is the occurrence of negative knowledge transfer between tasks, which becomes more severe as the total number of tasks increases. To address this issue, we propose an adaptive EMTO (AEMTO) framework. This framework can adapt knowledge transfer frequency, knowledge source selection, and knowledge transfer intensity in a synergistic way to make the best use of knowledge transfer, especially when facing many tasks. We implement the proposed AEMTO framework and evaluate our implementation on three suites of MTO problems with 2, 10, and 50 tasks and one real-world MTO problem with 2000 tasks, comparing against several state-of-the-art EMTO methods with certain adaptation strategies for knowledge transfer, as well as the single-task optimization counterpart of the proposed method. Experimental results demonstrate the effectiveness of the adaptive knowledge transfer strategies used in AEMTO and its overall performance superiority.
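The three adaptive components the abstract names (transfer frequency, source selection, transfer intensity) could be sketched as follows. This is a minimal illustration under my own assumptions, not the AEMTO algorithm itself: `choose_source` picks a helper task by roulette wheel over accumulated rewards, and `update` nudges a per-task transfer probability toward the observed success signal.

```python
import random

def choose_source(rewards):
    """Roulette-wheel selection of a helper task from accumulated rewards.
    `rewards` maps task ids to nonnegative credit scores."""
    total = sum(rewards.values())
    if total == 0:
        return random.choice(list(rewards))
    r = random.uniform(0, total)
    acc = 0.0
    for task, w in rewards.items():
        acc += w
        if r <= acc:
            return task
    return task  # numerical fallback

def update(p_transfer, rewards, source, success, lr=0.1):
    """Move the transfer probability toward 1 on success and toward 0 on
    failure, and credit (with decay) the helper task that supplied the
    knowledge. Returns the updated probability and reward table."""
    target = 1.0 if success else 0.0
    p_transfer += lr * (target - p_transfer)
    rewards[source] = (1 - lr) * rewards[source] + (1.0 if success else 0.0)
    return p_transfer, rewards
```

Transfer intensity could be adapted analogously, e.g. by scaling the number of transferred individuals with the same success statistic; the point is only that all three knobs can be driven by one feedback signal.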

A Multifidelity Approach for Bilevel Optimization With Limited Computing Budget

Bilevel optimization refers to a specialized class of problems where the optimum of an upper level (UL) problem is sought subject to the optimality of a nested lower level (LL) problem as a constraint. This nested structure necessitates a large number of function evaluations for solution methods, especially population-based metaheuristics such as evolutionary algorithms (EAs). Reducing this effort remains critical for practical uptake of bilevel EAs, particularly for computationally expensive problems where each solution evaluation may involve a significant cost. This letter contributes to this field with a novel and previously unexplored proposition: that bilevel optimization problems can be posed as multifidelity optimization problems. The underpinning idea is that an informed judgment of how accurate the LL optimum estimate needs to be to confidently determine a solution's ranking can significantly cut down on redundant evaluations during the search. Toward this end, we propose an algorithm that learns, from the data seen so far, the appropriate fidelity at which to evaluate a solution during the search, instead of resorting to an exhaustive LL optimization. Numerical experiments are conducted on a range of standard as well as more complex variants of the SMD test problems to demonstrate the advantages of the proposed approach compared to state-of-the-art surrogate-assisted algorithms.
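The multifidelity idea can be made concrete with a toy sketch (my own simplification, not the letter's algorithm): treat the LL search budget as the fidelity knob, and stop refining a UL solution as soon as even a cheap LL estimate shows it is clearly dominated by the incumbent. The names `lower_level_opt` and `evaluate_adaptive`, and the fixed budget ladder, are all illustrative assumptions.

```python
def lower_level_opt(ul_x, budget, ll_objective, sampler):
    """Crude lower-level search: best of `budget` sampled LL solutions.
    Stands in for running a full LL optimizer at a chosen fidelity."""
    return min((sampler() for _ in range(budget)),
               key=lambda y: ll_objective(ul_x, y))

def evaluate_adaptive(ul_x, ul_objective, ll_objective, sampler,
                      budgets=(10, 100, 1000),
                      incumbent=float("inf"), margin=0.1):
    """Evaluate a UL solution at increasing LL fidelity, returning early
    once its UL value is worse than the incumbent by more than `margin`.
    Returns the UL value and the LL budget actually spent per stage."""
    for b in budgets:
        y = lower_level_opt(ul_x, b, ll_objective, sampler)
        f = ul_objective(ul_x, y)
        if f > incumbent + margin:  # confidently dominated: stop refining
            return f, b
    return f, b
```

Clearly poor UL solutions thus receive only the cheapest LL optimization, while promising ones earn the full budget, which is the source of the evaluation savings the letter targets.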

Guest Editorial Special Issue on Multitask Evolutionary Computation

It is our pleasure to introduce this special issue on multitask evolutionary computation (MTEC), focusing on novel methodologies and applications of evolutionary algorithms (EAs) crafted to perform multiple search and optimization tasks jointly. EAs are population-based methods inspired by principles of natural evolution that have provided a gradient-free path to solving complex learning and optimization problems. However, unlike the natural world where evolution has engendered diverse species and produced differently skilled subpopulations, in silico EAs are typically designed to evolve a set of solutions specialized for just a single target task. This convention of problem solving in isolation tends to curtail the power of implicit parallelism of a population. Skills evolved for a given problem instance do not naturally transfer to populations tasked to solve another. Hence, convergence rates remain restrained, even in settings where related tasks with overlapping search spaces, similar optimal solutions, or with other forms of reusable information, are routinely recurring.

Multitask Shape Optimization Using a 3-D Point Cloud Autoencoder as Unified Representation

The choice of design representations, like that of search operators, is central to the performance of evolutionary optimization algorithms, in particular for multitask problems. The multitask approach pushes the parallelization aspect of these algorithms further by solving multiple optimization tasks simultaneously with a single population. During the search, the operators implicitly transfer knowledge between solutions to their offspring, taking advantage of potential synergies between problems to drive the solutions toward optimality. Nevertheless, in order to operate on the individuals, the design space of each task has to be mapped to a common search space, which is challenging in engineering cases without a clear semantic overlap between parameters. Here, we apply a 3-D point cloud autoencoder to map the representations from Cartesian space to a unified design representation: the latent space of the autoencoder. The transfer of latent space features between design representations allows the reconstruction of shapes with interpolated characteristics and the maintenance of common parts, which potentially improves the performance of the designs in one or more tasks during the optimization. Compared to traditional representations for shape optimization, such as free-form deformation, the latent representation enables more representative design modifications while keeping the baseline characteristics of the learned classes of objects. We demonstrate the efficiency of our approach in an optimization scenario where we minimize the aerodynamic drag of two different car shapes with common underbodies for cost-efficient vehicle platform design.
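The unified-representation mechanism reduces, at its core, to operating in latent space rather than on raw coordinates. Below is a minimal sketch of latent-space blending under the assumption that a trained autoencoder is available as a pair of `encode`/`decode` callables; the function name and the simple linear interpolation are illustrative, not the paper's operator.

```python
def latent_crossover(parent_a, parent_b, encode, decode, alpha=0.5):
    """Blend two designs in a learned latent space: encode both shapes,
    linearly interpolate their latent codes with weight `alpha`, and
    decode the result back into a design. `encode` and `decode` stand in
    for a trained 3-D point cloud autoencoder."""
    za = encode(parent_a)
    zb = encode(parent_b)
    child = [alpha * a + (1 - alpha) * b for a, b in zip(za, zb)]
    return decode(child)
```

Because both parents pass through the same learned latent space, the operator can recombine designs from different tasks even when their native parameterizations have no semantic overlap, which is exactly the mapping problem the abstract highlights.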

Learning and Sharing: A Multitask Genetic Programming Approach to Image Feature Learning

Using evolutionary computation algorithms to solve multiple tasks with knowledge sharing is a promising approach. Image feature learning can be considered a multitask learning problem because different tasks may have a similar feature space. Genetic programming (GP) has been successfully applied to image feature learning for classification. However, most existing GP methods solve one task independently, using sufficient training data, and no multitask GP method has been developed for image feature learning. Therefore, this article develops a multitask GP approach to image feature learning for classification with limited training data. Owing to the flexible representation of GP, a new knowledge sharing mechanism based on a new individual representation is developed to allow GP to automatically learn what to share across two tasks and to improve its learning performance. The shared knowledge is encoded as a common tree, which can represent the common/general features of two tasks. With the new individual representation, each task is solved using the features extracted from a common tree and a task-specific tree representing task-specific features. To find the best common and task-specific trees, a new evolutionary search process and fitness functions are developed. The performance of the new approach is examined on six multitask learning problems built from 12 image classification datasets with limited training data and compared with 17 competitive methods. The experimental results show that the new approach outperforms these comparison methods in almost all the comparisons. Further analysis reveals that the new approach learns simple yet effective common trees with high effectiveness and transferability.
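The common-tree/task-specific-tree representation can be sketched with ordinary expression trees. This is an illustrative toy, assuming tuple-encoded trees and a simple additive composition of shared and task-specific outputs; the class name and the way the two trees are combined are my assumptions, not the article's design.

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def eval_tree(tree, x):
    """Evaluate an expression tree: a leaf is the input 'x' or a numeric
    constant; an internal node is a tuple (op, left, right)."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](eval_tree(left, x), eval_tree(right, x))

class MultitaskIndividual:
    """One GP individual serving two tasks: a shared common tree encodes
    general features, and one task-specific tree per task encodes features
    unique to that task. Here the two outputs are simply summed."""
    def __init__(self, common, specifics):
        self.common = common        # shared across all tasks
        self.specifics = specifics  # one tree per task

    def feature(self, task, x):
        shared = eval_tree(self.common, x)
        return shared + eval_tree(self.specifics[task], x)
```

Crossover and mutation can then act on the common tree (affecting both tasks at once) or on a task-specific tree (affecting one task), which is what lets the search discover automatically how much to share.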