GPEM 22(4) is now available

The fourth issue of Volume 22 of Genetic Programming and Evolvable Machines is now available for download. This is a Special Issue on Highlights of Genetic Programming 2020 Events, edited by Miguel Nicolau, Ting Hu, Mengjie Zhang, and Nuno Lourenço.

It contains:

Highlights of genetic programming 2020 events
by Miguel Nicolau

Evolving continuous optimisers from scratch
by Michael A. Lones

Evolutionary algorithms for designing reversible cellular automata
by Luca Mariot, Stjepan Picek, Domagoj Jakobovic & Alberto Leporati 

A semantic genetic programming framework based on dynamic targets
by Stefano Ruberto, Valerio Terragni & Jason H. Moore 

Relationships between parent selection methods, looping constructs, and success rate in genetic programming
by Anil Kumar Saini & Lee Spector

EvoStencils: a grammar-based genetic programming approach for constructing efficient geometric multigrid methods
by Jonas Schmitt, Sebastian Kuckuk & Harald Köstler 

Semantically-oriented mutation operator in cartesian genetic programming for evolutionary circuit design
by David Hodan, Vojtech Mrazek & Zdenek Vasicek 

Evolving hierarchical memory-prediction machines in multi-task reinforcement learning
by Stephen Kelly, Tatiana Voegerl, Wolfgang Banzhaf & Cedric Gondro 

Graph representations in genetic programming
by Léo Françoso Dal Piccol Sotto, Paul Kaufmann, Timothy Atkinson, Roman Kalkreuth & Márcio Porto Basgalupp 

Table of contents

Presents the table of contents for this issue of the publication.

Adaptive Genetic Algorithm-Aided Neural Network With Channel State Information Tensor Decomposition for Indoor Localization

Channel state information (CSI) provides the phase and amplitude of multiple subcarriers, which better describe signal propagation characteristics; CSI has therefore become one of the most commonly used features in indoor Wi-Fi localization. Moreover, compared with CSI geometric localization, CSI fingerprint localization is easier to implement and more accurate. However, as the scale of the fingerprint database grows, the training cost and processing complexity of CSI fingerprints also increase greatly. This article therefore proposes combining a backpropagation neural network (BPNN) and an adaptive genetic algorithm (AGA) with CSI tensor decomposition for indoor Wi-Fi fingerprint localization. Specifically, a tensor decomposition algorithm based on the parallel factor (PARAFAC) analysis model is combined with an alternating least squares (ALS) iterative algorithm to reduce environmental interference. Then, a tensor wavelet decomposition algorithm is used for feature extraction to obtain the CSI fingerprint. Finally, to find the optimal weights and thresholds and thus obtain the estimated location coordinates, an AGA is introduced to optimize the BPNN. The experimental results show that the proposed algorithm achieves high localization accuracy while improving data-processing capacity and fitting the nonlinear relationship between CSI location fingerprints and location coordinates.
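The AGA-plus-BPNN idea can be illustrated with a minimal sketch: a genetic algorithm whose mutation strength adapts to each individual's fitness rank evolves the weights of a small feed-forward network on synthetic fingerprint data. Everything here (network sizes, population settings, the random data) is an illustrative assumption, not the paper's actual PARAFAC/ALS pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CSI fingerprints: 40 samples of 8-D features mapped
# to 2-D location coordinates (the paper derives features via PARAFAC/ALS
# tensor decomposition; here the data are just random).
X = rng.normal(size=(40, 8))
Y = np.tanh(X @ rng.normal(size=(8, 2)))

HIDDEN = 6
DIM = 8 * HIDDEN + HIDDEN * 2   # chromosome = all network weights, flattened

def predict(w, X):
    """Single-hidden-layer BPNN-style network with GA-supplied weights."""
    W1 = w[:8 * HIDDEN].reshape(8, HIDDEN)
    W2 = w[8 * HIDDEN:].reshape(HIDDEN, 2)
    return np.tanh(X @ W1) @ W2

def fitness(w):
    return -np.mean((predict(w, X) - Y) ** 2)   # negative MSE, higher is better

pop = rng.normal(size=(30, DIM))
mse0 = -max(fitness(ind) for ind in pop)        # best error before evolution

for _ in range(200):
    fit = np.array([fitness(ind) for ind in pop])
    order = np.argsort(fit)[::-1]
    pop, fit = pop[order], fit[order]
    # Adaptive mutation: individuals close to the current best mutate little
    # (exploitation), weak ones mutate strongly (exploration).
    scale = 0.01 + 0.2 * (fit[0] - fit) / (fit[0] - fit[-1] + 1e-12)
    children = []
    for i in range(15):                         # replace the bottom half
        a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
        mask = rng.random(DIM) < 0.5            # uniform crossover of elites
        children.append(np.where(mask, a, b) + rng.normal(size=DIM) * scale[i])
    pop = np.vstack([pop[:15], children])       # elitist survivor selection

mse_best = -max(fitness(ind) for ind in pop)
```

Because the survivor selection is elitist, the best error can only decrease; the rank-dependent mutation scale is one simple way to realize the "adaptive" part of an AGA.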

A Survey on Evolutionary Construction of Deep Neural Networks

Automated construction of deep neural networks (DNNs) has become a research hot spot because a DNN's performance is heavily influenced by its architecture and parameters, which are highly task-dependent, yet it is notoriously difficult to find the DNN architecture and parameters best suited to a given task. In this work, we provide insight into the automated DNN construction process by formulating it as a constrained, multilevel, multiobjective, large-scale optimization problem, whose nonconvex, nondifferentiable, and black-box nature makes evolutionary algorithms (EAs) stand out as a promising solver. We then give a systematic review of existing evolutionary DNN construction techniques from the different aspects of this optimization problem and analyze the pros and cons of using EA-based methods in each aspect. This work aims to help DNN researchers better understand why, where, and how to utilize EAs for automated DNN construction and, meanwhile, to help EA researchers better understand the task of automated DNN construction so that they may focus on EA-favored optimization scenarios and devise more effective techniques.
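The multiobjective framing in the survey can be made concrete with a small sketch: when candidate networks are scored on, say, validation error and parameter count, an EA-based constructor keeps the non-dominated set rather than a single winner. The objective pairs below are made-up illustrative values, not results from any surveyed method.

```python
def dominates(a, b):
    """a and b are objective tuples to minimize, e.g. (val_error, n_params).
    a dominates b if it is no worse in every objective and strictly better
    in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Non-dominated subset: the trade-off set an EA carries forward."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q != p)]

# Hypothetical (error, parameter-count) scores for four architectures:
scores = [(0.10, 5_000_000), (0.20, 1_000_000),
          (0.30, 2_000_000), (0.15, 4_000_000)]
front = pareto_front(scores)   # (0.30, 2M) is dominated by (0.20, 1M)
```

Selection schemes such as non-dominated sorting build directly on this dominance test.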

Empirical Comparison of Search Heuristics for Genetic Improvement of Software

Genetic improvement (GI) uses automated search to improve existing software. It has been successfully used to optimize various program properties, such as runtime or energy consumption, as well as for bug fixing. GI typically navigates a space of thousands of patches in search of the program mutation that best improves the desired software property. While genetic programming (GP) has predominantly been used as the search strategy, other strategies, such as local search, have more recently been tried. It is, however, still unclear which strategy is the most effective and efficient. In this article, we conduct an in-depth empirical comparison of a total of 18 search processes using a set of eight improvement scenarios. Additionally, we provide new GI benchmarks and report on new software patches found. Our results show that, overall, local search approaches achieve better effectiveness and efficiency than GP approaches. Moreover, improvements were found in all scenarios (between 15% and 68%). A replication package can be found online: https://github.com/bloa/tevc_2020_artefact.
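As a concrete illustration of patch-based search, a first-improvement local search over single-statement deletions (one of the simpler strategies in the family the article compares) can be sketched on a toy "program". The statements, tests, and cost model below are invented for illustration and are not taken from the article's benchmarks.

```python
# Toy program: a list of (function, label) statements applied in order.
program = [
    (lambda x: x + 3, "x += 3"),
    (lambda x: x * 1, "x *= 1"),   # semantically redundant
    (lambda x: x + 0, "x += 0"),   # semantically redundant
    (lambda x: x * 2, "x *= 2"),
]

tests = [0, 1, -5, 10]             # test inputs; "tests pass" = outputs unchanged

def run(prog, x):
    for f, _ in prog:
        x = f(x)
    return x

expected = [run(program, t) for t in tests]

def passes(prog):
    return all(run(prog, t) == e for t, e in zip(tests, expected))

# First-improvement local search over single-statement deletion patches:
# repeatedly try deleting each statement and keep the patch if the tests
# still pass (cost here is program length, a stand-in for runtime/energy).
prog = list(program)
improved = True
while improved:
    improved = False
    for i in range(len(prog)):
        candidate = prog[:i] + prog[i + 1:]
        if passes(candidate):
            prog = candidate
            improved = True
            break
```

A GP-style search would instead evolve a population of multi-edit patches; the test-suite-as-oracle idea is the same in both cases.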

Guest Editorial Evolutionary Computation Meets Deep Learning

Deep learning is a timely research direction in machine learning, where breakthrough progress has been made in both academia and industry, bringing promising results in speech recognition, computer vision, industrial control and automation, etc. The motivation of deep learning is primarily to establish a model that simulates the neural connection structure of the human brain. When dealing with complex tasks, deep learning adopts a number of transformation stages to deliver an in-depth description and interpretation of the data. Deep learning achieves exceptional power and flexibility by learning to represent a task through a nested hierarchy of layers, with more abstract representations formed successively from less abstract ones. A key issue with existing deep learning approaches is that meaningful representations can be learned only when the hyperparameter settings are properly specified beforehand, while general parameters are learned during the training process. Until now, not much research has been dedicated to setting the hyperparameters automatically and finding the globally optimal general parameters accurately. However, these problems can be formulated as optimization problems, including discrete, constrained, large-scale global, and multiobjective optimization, by engaging mechanisms of evolutionary computation.

Evolving Deep Convolutional Variational Autoencoders for Image Classification

Variational autoencoders (VAEs) have demonstrated their superiority in unsupervised learning for image processing in recent years. The performance of VAEs highly depends on their architectures, which are often handcrafted with human expertise in deep neural networks (DNNs). However, such expertise is not necessarily available to every interested end user. In this article, we propose a novel method, evolving deep convolutional VAE (EvoVAE), to automatically design optimal VAE architectures for image classification based on a genetic algorithm (GA). In the proposed EvoVAE algorithm, traditional VAEs are first generalized to a more generic, asymmetrical form with four different blocks, and a variable-length gene-encoding mechanism of the GA is then presented to search for the optimal network depth. Furthermore, an effective genetic operator is designed to suit the proposed variable-length gene-encoding strategy. To verify the performance of the proposed algorithm, nine variants of AEs and VAEs are chosen as peer competitors in comparisons on the MNIST, Street View House Numbers, and CIFAR-10 benchmark datasets. The experiments reveal the superiority of the proposed EvoVAE algorithm, which wins 21 of the 24 comparisons and outperforms the best competitors by 1.39%, 14.21%, and 13.03% on the three benchmark datasets, respectively.
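The variable-length gene-encoding idea can be sketched in a few lines: each genome is a list of per-block filter counts, crossover uses independent cut points so offspring depth may differ from both parents, and mutation can insert or delete whole blocks. The fitness proxy below (preferring a depth of 4 and non-decreasing widths) is a made-up stand-in for EvoVAE's actual validation-based evaluation, and the filter choices are illustrative.

```python
import random

random.seed(1)
FILTERS = [16, 32, 64, 128]

def random_genome():
    """Variable-length genome: each gene is the filter count of one conv block."""
    return [random.choice(FILTERS) for _ in range(random.randint(2, 6))]

def crossover(a, b):
    """One-point crossover with independent cut points, so offspring length
    can differ from both parents (the point of a variable-length encoding)."""
    i, j = random.randint(1, len(a) - 1), random.randint(1, len(b) - 1)
    return a[:i] + b[j:]

def mutate(g):
    g = list(g)
    op = random.choice(["add", "remove", "tweak"])
    if op == "add":                              # insert a whole block
        g.insert(random.randrange(len(g) + 1), random.choice(FILTERS))
    elif op == "remove" and len(g) > 2:          # delete a block, keep depth >= 2
        g.pop(random.randrange(len(g)))
    else:                                        # change one block's width
        g[random.randrange(len(g))] = random.choice(FILTERS)
    return g

def fitness(g):
    # Proxy objective (stand-in for validation accuracy): prefer depth 4 and
    # non-decreasing filter counts, since encoder widths typically grow.
    monotone = sum(1 for x, y in zip(g, g[1:]) if x <= y)
    return monotone - abs(len(g) - 4)

pop = [random_genome() for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]
best = max(pop, key=fitness)
```

The insert/delete mutations are what lets the search discover network depth rather than fixing it in advance, which is the role the article's variable-length operator plays.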