The second issue of Volume 18 of Genetic Programming and Evolvable Machines is now available for download.
An iterative genetic programming approach to prototype generation
by José María Valencia-Ramírez, Mario Graff, Hugo Jair Escalante & Jaime Cerda-Jacobo
Recursion in tree-based genetic programming
by Alexandros Agapitos, Michael O’Neill, Ahmed Kattan & Simon M. Lucas
Recurrent Cartesian Genetic Programming of Artificial Neural Networks
by Andrew James Turner & Julian Francis Miller
An analysis of the genetic marker diversity algorithm for genetic programming
by Armand R. Burks & William F. Punch
Solving metameric variable-length optimization problems using genetic algorithms
by Matthew L. Ryerkerk, Ronald C. Averill, Kalyanmoy Deb & Erik D. Goodman
Software review: CGP-Library
by Emerson Carlos Pedrino, Paulo Cesar Donizeti Paris, Denis Pereira de Lima & Valentin Obac Roda
Interactions between decision variables typically make an optimization problem challenging for an evolutionary algorithm (EA) to solve. Exploratory landscape analysis (ELA) techniques can be used to quantify the level of variable interactions in an optimization problem. However, many studies using ELA techniques to investigate interactions have been limited to combinatorial problems, with very few focused on continuous variables. In this paper, we propose a novel ELA measure to quantify the level of variable interactions in continuous optimization problems. We evaluated the efficacy of this measure on a suite of benchmark problems consisting of 24 multidimensional continuous optimization functions with differing levels of variable interaction. The results show that our measure is robust and accurately identifies variable interactions. We show that the solution quality found by an EA is correlated with the level of variable interaction in a given problem. Finally, we present simulation experiments illustrating that when our measure is embedded into an algorithm design framework, the enhanced algorithm achieves equal or better results on the benchmark functions.
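As an illustration of the kind of quantity such a measure captures, the sketch below scores the interaction between two variables by how far a function deviates from additivity under paired perturbations. This is not the measure proposed in the paper; `interaction_strength`, the step size `h`, and the test functions are all assumptions made for the example.

```python
import numpy as np

def interaction_strength(f, x, i, j, h=1e-2):
    """Score the interaction between variables i and j at x by the
    deviation from additivity under paired perturbations."""
    ei = np.zeros_like(x)
    ei[i] = h
    ej = np.zeros_like(x)
    ej[j] = h
    # For a separable f this mixed difference is exactly zero.
    return abs(f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x))

# Separable sphere vs. a function with a coupled cross term.
sphere = lambda v: float(np.sum(v ** 2))
coupled = lambda v: float(np.sum(v ** 2) + 10.0 * v[0] * v[1])

x0 = np.ones(3)
print(interaction_strength(sphere, x0, 0, 1))   # ~0: no interaction
print(interaction_strength(coupled, x0, 0, 1))  # positive: interaction detected
```

A full ELA measure would aggregate such pairwise scores over many sample points rather than a single one.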
Sparse unmixing of hyperspectral data is an important technique for estimating the fractional abundances of the endmembers. Traditional sparse unmixing faces the l0-norm problem, which is NP-hard. Sparse unmixing is inherently a multiobjective optimization problem, yet most recent works combine the cost functions into a single aggregate objective, which involves weighted parameters that are sensitive to the data set and difficult to tune. In this paper, a novel multiobjective cooperative coevolutionary algorithm is proposed to optimize the reconstruction term, the sparsity term, and the total variation regularization term simultaneously. Because sparse unmixing is a large-scale optimization problem, a problem-dependent cooperative coevolutionary strategy is designed. The proposed approach optimizes the nonconvex l0-norm problem directly and automatically finds a good compromise between two or more competing cost terms. Experimental results on simulated and real hyperspectral data sets demonstrate the effectiveness of the proposed method.
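A generic cooperative coevolutionary loop can be sketched as follows. This is only a minimal illustration of the decomposition idea, not the paper's problem-dependent strategy: `cc_optimize`, the random-perturbation search, and the fixed group split are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def cc_optimize(f, dim, groups, iters=200, step=0.1):
    """Each variable group is improved in turn while the rest of the shared
    context vector stays fixed; groups cooperate through that vector."""
    context = rng.standard_normal(dim)   # shared best-so-far solution
    best = f(context)
    for _ in range(iters):
        for g in groups:                 # one subcomponent per group
            trial = context.copy()
            trial[g] += step * rng.standard_normal(len(g))
            val = f(trial)
            if val < best:               # keep only cooperative improvements
                context, best = trial, val
    return context, best

# Usage: split a 10-D sphere into two 5-variable subcomponents.
x, fx = cc_optimize(lambda v: float(np.sum(v ** 2)), 10,
                    [list(range(5)), list(range(5, 10))])
```

In the unmixing setting each group would hold a block of abundance variables, and the scalar objective would be replaced by multiobjective selection over the three cost terms.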
Adaptive operator selection (AOS) is a high-level controller for an optimization algorithm that monitors the performance of a set of operators with a credit assignment strategy and adaptively applies the high-performing operators with an operator selection strategy. AOS can improve the overall performance of an optimization algorithm across a wide range of problems, and it has shown promise on single-objective problems, where defining an appropriate credit assignment that assesses an operator’s impact is relatively straightforward. However, there is currently a lack of AOS approaches for multiobjective problems (MOPs), because defining an appropriate credit assignment is nontrivial for MOPs. To identify and examine the main factors in effective credit assignment strategies, this paper proposes a classification that groups credit assignment strategies by the sets of solutions used to assess an operator’s impact and by the fitness function used to compare those sets of solutions. Nine credit assignment strategies, including five newly proposed ones, are compared experimentally on standard benchmarking problems. Results show that eight of the nine strategies are effective, improving the generality of a multiobjective evolutionary algorithm and outperforming a random operator selector.
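The controller described above can be sketched minimally. The class below pairs probability-matching selection with a decayed-average credit, which is one common combination, not one of the nine strategies compared in the paper; all names and parameter values are hypothetical.

```python
import random

class AdaptiveOperatorSelector:
    """Probability-matching operator selection with decayed-average credit."""

    def __init__(self, operators, p_min=0.1, decay=0.8):
        self.operators = list(operators)
        self.credit = {op: 1.0 for op in self.operators}
        self.p_min = p_min   # floor: every operator stays selectable
        self.decay = decay   # exponential forgetting of old impacts

    def probabilities(self):
        total = sum(self.credit.values())
        k = len(self.operators)
        # probability matching with a minimum selection probability
        return {op: self.p_min + (1.0 - k * self.p_min) * c / total
                for op, c in self.credit.items()}

    def select(self):
        r, acc = random.random(), 0.0
        for op, p in self.probabilities().items():
            acc += p
            if r <= acc:
                return op
        return self.operators[-1]

    def reward(self, op, impact):
        # credit assignment: decayed average of the operator's observed impact
        self.credit[op] = self.decay * self.credit[op] + (1.0 - self.decay) * impact

sel = AdaptiveOperatorSelector(["sbx_crossover", "de_mutation", "poly_mutation"])
op = sel.select()           # pick an operator by probability matching
sel.reward(op, impact=0.7)  # impact: e.g. offspring improvement over parents
```

For MOPs the hard part, and the subject of the paper's classification, is defining `impact`: which solution sets to compare and with which fitness function.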
Selection strategy, including mating selection and environmental selection, is a key ingredient in the design of evolutionary multiobjective optimization algorithms. Existing approaches, which have shown competitive performance on low-dimensional multiobjective optimization problems with two or three objectives, often encounter considerable challenges in many-objective optimization, where the number of objectives exceeds three. This paper first provides a comprehensive analysis of the selection strategies in current evolutionary many-objective optimization algorithms. We then propose a coordinated selection strategy to improve the performance of evolutionary algorithms in many-objective optimization. This strategy considers three crucial factors: 1) the new mating selection criterion accounts for both the quality of each selected parent and the effectiveness of the combination of selected parents; 2) the new environmental selection criterion focuses directly on the performance of the whole population rather than on single individuals; and 3) the two selection steps complement each other, and their coordination during the evolutionary process achieves better performance than either used alone. Furthermore, to handle the curse of dimensionality in many-objective optimization problems, a new distance-based convergence measure and a new angle-based diversity measure are incorporated into both selection steps. Experimental results on the DTLZ and WFG benchmark functions demonstrate the superiority of the proposed algorithm over six state-of-the-art designs in terms of both solution quality and computational efficiency.
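Illustrative forms of a distance-based convergence measure and an angle-based diversity measure are shown below. These are generic textbook definitions assumed for the sketch, not the exact measures proposed in the paper.

```python
import numpy as np

def convergence(obj, ideal):
    """Distance-based convergence: distance of an objective vector to an
    estimated ideal point; smaller means better converged."""
    return float(np.linalg.norm(np.asarray(obj, float) - np.asarray(ideal, float)))

def diversity_angle(a, b):
    """Angle-based diversity: angle between two objective vectors; larger
    angles mean the two solutions cover more different regions."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The appeal of angles in high-dimensional objective spaces is that they stay informative where crowding distances degenerate, which is one motivation for angle-based diversity in many-objective selection.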
We propose a novel image representation framework based on a Gaussian model and evolutionary optimization (EO). In this framework, image patches are categorized as smooth or nonsmooth, and the two categories are treated distinctly. A smooth patch is formulated as the sum of a direct component and a variation component (VC). We observe that the values of all VCs in an image are well fitted by a Gaussian distribution, based on which we present an efficient reconstruction approach that maximizes the log a posteriori probability. For a nonsmooth patch, we introduce an EO mechanism to solve a combinatorial optimization over a principal component analysis dictionary. In addition, we develop two approaches for estimating the coefficients of the atoms. Experimental results demonstrate that the proposed framework achieves state-of-the-art results on several image inverse problems.
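The log-posterior maximization idea for Gaussian-distributed components can be illustrated in the simplest scalar case. The closed form below for a Gaussian prior with Gaussian observation noise is standard, but its use here is only a sketch of the principle, not the paper's patch-level reconstruction method.

```python
def gaussian_map(y, mu, var_prior, var_noise):
    """MAP estimate for y = x + noise, with prior x ~ N(mu, var_prior) and
    noise ~ N(0, var_noise): maximizing the log posterior yields a
    variance-weighted blend of the observation and the prior mean."""
    return (var_prior * y + var_noise * mu) / (var_prior + var_noise)

# Equal variances: the estimate sits halfway between observation and prior mean.
print(gaussian_map(2.0, 0.0, 1.0, 1.0))  # 1.0
```

A tighter prior (smaller `var_prior`) pulls the estimate toward `mu`; noisier observations (larger `var_noise`) do the same, which matches the intuition behind fitting the VCs with a Gaussian before reconstruction.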
Particle swarm optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating from interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating these natural phenomena into the optimization of complex functions. Many real-life applications of PSO must cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of the fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve its computational efficiency on stochastic optimization problems. We derive an asymptotically optimal allocation rule that intelligently determines the number of samples for each particle so that the PSO algorithm can efficiently select the personal best and global best in the presence of stochastic estimation noise in the fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that the new approach obtains much better results with the same amount of computational effort.
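The sampling-allocation idea can be sketched as follows. The heuristic below is a toy stand-in for the paper's asymptotically optimal OCBA rule (the function names and the inverse-gap weighting are assumptions): it gives more fitness samples to particles whose estimated fitness is close to the current best, where low estimation variance matters most for picking the right personal and global bests.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x):
    # stochastic fitness: true value plus Gaussian estimation noise
    return float(np.sum(x ** 2)) + rng.normal(0.0, 0.1)

def estimate(f, x, n):
    # fitness estimate: mean of n noisy samples
    return sum(f(x) for _ in range(n)) / n

def allocate(est_fitness, budget, n_min=2):
    """Give particles whose estimate is close to the best a larger share of
    the sampling budget: near-ties need low variance to be ranked reliably."""
    est_fitness = np.asarray(est_fitness, float)
    gaps = est_fitness - est_fitness.min() + 1e-9
    w = 1.0 / gaps
    extra = budget - n_min * len(w)
    return n_min + np.floor(extra * w / w.sum()).astype(int)

# Three particles: two near-ties and one clearly worse.
print(allocate([1.0, 1.05, 3.0], budget=60))
```

Inside a PSO loop, `allocate` would be called each iteration on the current estimates before re-sampling, so the budget concentrates on resolving the contenders for personal and global best.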