The third issue of Volume 17 of Genetic Programming and Evolvable Machines is now available for download.
“Learning to rank: new approach with the layered multi-population genetic programming on click-through features”
by Amir Hosein Keyhanipour, Behzad Moshiri, Farhad Oroumchian, Maseud Rahgozar & Kambiz Badie
“Prediction of the natural gas consumption in chemical processing facilities with genetic programming”
by Miha Kovačič & Franjo Dolenc
“Unveiling the properties of structured grammatical evolution”
by Nuno Lourenço, Francisco B. Pereira & Ernesto Costa
“An automatic solver for very large jigsaw puzzles using genetic algorithms”
by Dror Sholomon, Omid E. David & Nathan S. Netanyahu
Mike Preuss: Multimodal optimization by means of evolutionary algorithms
by Nailah Al-Madi
Wolfgang Banzhaf and Lidia Yamamoto: Artificial Chemistries, MIT Press, 2015
by Kyle I. S. Harrington
This paper proposes a new multiobjective evolutionary algorithm for the problem of distribution network restoration after failures. The problem is formulated as a bi-objective optimization problem considering the total load restored and the time required for restoration. A new encoding scheme is proposed in which the variables that encode the switch operations are separated into six groups, according to their roles in the faulty system configuration. Following the idea of defining subspaces of a combinatorial space, these groups are used to define subpermutations within which the crossover and mutation operations are performed. In this way, the dimensionality of the search space is reduced, allowing a much more efficient search. The proposed encoding scheme also allows a single individual to encode several different solutions, leading to a further reduction of the search-space dimensionality. Owing to this feature of the encoding scheme, it becomes convenient to use an adaptation of the Strength Pareto Evolutionary Algorithm 2 in which the raw fitness is modified to allow assigning fitness to individuals that simultaneously encode several different solutions. The proposed algorithm was implemented such that good solutions are delivered within short processing times, on the order of a few minutes for large real systems.
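The idea of restricting variation operators to subpermutations can be sketched as follows. This is my own illustration of group-restricted order crossover; the group definitions and the exact operator are hypothetical, not the paper's:

```python
import random

def subpermutation_crossover(parent_a, parent_b, groups):
    """Order-based crossover applied independently within each group of
    switch positions, so offspring stay valid subpermutations.
    Each element of `groups` is a list of chromosome positions that
    belong to one role-based group (a hypothetical encoding)."""
    child = [None] * len(parent_a)
    for group in groups:
        # Each parent's ordering of this group's switch values.
        seq_a = [parent_a[i] for i in group]
        seq_b = [parent_b[i] for i in group]
        # Copy a random slice from parent A; fill the rest in B's order.
        lo, hi = sorted(random.sample(range(len(group)), 2))
        slice_a = seq_a[lo:hi + 1]
        rest = [v for v in seq_b if v not in slice_a]
        merged = rest[:lo] + slice_a + rest[lo:]
        for pos, val in zip(group, merged):
            child[pos] = val
    return child
```

Because genes never leave their group, the offspring remains valid with respect to the role-based partition of switch variables, which is what shrinks the effective search space.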
In-vitro methods for protein structure determination are time-consuming, cost-intensive, and failure-prone. Because of these expenses, alternative computer-based predictive methods have emerged. Predicting a protein’s 3-D structure from only its amino acid sequence—also known as ab initio protein structure prediction (PSP)—is computationally demanding because the search space is astronomically large and energy models are extremely complex. Some successes have been achieved with predictive methods, but these are limited to small proteins (around 100 amino acids); thus, developing efficient algorithms, reducing the search space, and designing effective search-guidance heuristics are necessary to study large proteins. An on-lattice model provides a convenient testbed for rapidly developing and measuring the performance of a new algorithm, and hence we adopt this model for larger proteins (>150 amino acids) to enhance the genetic algorithm (GA) framework. In this paper, we formulate PSP as a combinatorial optimization problem that uses 3-D face-centered-cubic lattice coordinates to reduce the search space and the hydrophobic-polar energy model to guide the search. The whole optimization process is controlled by an enhanced GA framework with four enhanced features: 1) an exhaustive generation approach to diversify the search; 2) a novel hydrophobic-core-directed macro-mutation operator to intensify the search; 3) a per-generation duplication-elimination strategy to prevent early convergence; and 4) a random-walk technique to recover from stagnation. On a set of standard benchmark proteins, our algorithm significantly outperforms state-of-the-art algorithms. We also show experimentally that our algorithm is robust enough to produce very similar results across different parameter settings.
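A minimal sketch of the hydrophobic-polar energy on the face-centered-cubic lattice may help fix ideas. This is my own illustration (names and representation are assumptions, and chain-validity checks are omitted): the energy rewards each pair of non-consecutive hydrophobic residues that occupy neighboring lattice sites.

```python
def hp_energy(coords, sequence):
    """Hydrophobic-polar energy on the FCC lattice: -1 for each pair of
    non-consecutive 'H' residues on neighbouring sites. `coords` are
    integer 3-D lattice points, `sequence` a string of 'H'/'P'."""
    # The 12 nearest-neighbour offsets of the FCC lattice: all vectors
    # with exactly two non-zero components, each equal to +/-1.
    fcc_neighbours = {
        (dx, dy, dz)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
        if (dx, dy, dz).count(0) == 1
    }
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            if sequence[i] == 'H' and sequence[j] == 'H':
                diff = tuple(a - b for a, b in zip(coords[i], coords[j]))
                if diff in fcc_neighbours:
                    energy -= 1
    return energy
```

Lower (more negative) energy corresponds to a more compact hydrophobic core, which is the quantity the GA search is guided to minimize.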
Natural ecosystems exhibit complex dynamics of interacting species. Man-made ecosystems exhibit similar dynamics and, in the case of mobile app stores, can be said to perform optimization as developers seek to maximize app downloads. This work aims to understand stability and instability within app store dynamics and how they affect fitness. The investigation is carried out with AppEco, a model of the iOS App Store, which was extended for this paper and updated to model the store from 2008 to 2014. AppEco models apps containing features, developers who build the apps, users who download apps according to their preferences, and an app store that presents apps to the users. It also models developers who use commonly observed strategies to build their apps: innovator, milker, optimizer, copycat, and flexible (the ability to choose any strategy). Results show that despite the success of the copycat strategy, there is a clear stable state at a low proportion of copycats in developer populations, mirroring results in theoretical biology for producer–scrounger systems. The results also show that the best fitness is achieved when the evolutionary optimizer (as producer) and copycat (as scrounger) strategies coexist in stable proportions.
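The producer–scrounger stability result has a simple game-theoretic flavor that can be sketched with replicator dynamics. The payoffs below are a hypothetical textbook-style model, not AppEco's actual app-store simulation: producers each find one unit of reward and keep a finder's share, while the remainder is split among the finder and all scroungers, so scrounging pays only when scroungers are rare.

```python
def stable_scrounger_fraction(a=0.3, n=10, steps=200):
    """Replicator-dynamics sketch of a producer-scrounger game.
    a: finder's share kept by a producer; n: group size.
    Returns the fraction of scroungers after `steps` updates."""
    q = 0.5  # initial fraction of scroungers ("copycats")
    for _ in range(steps):
        s, p = q * n, (1 - q) * n              # expected counts
        producer_pay = a + (1 - a) / (s + 1)   # share of own find
        scrounger_pay = p * (1 - a) / (s + 1)  # share of every find
        mean_pay = q * scrounger_pay + (1 - q) * producer_pay
        q = q * scrounger_pay / mean_pay       # replicator update
    return q
```

With these assumed payoffs the dynamics settle at a stable interior mix (here 60% scroungers), illustrating why a stable coexistence of producer and scrounger strategies, rather than fixation of either, can maximize realized fitness.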
Multipopulation methods are effective in solving dynamic optimization problems. However, to efficiently track multiple optima, algorithm designers need to address a key issue: how to adapt the number of populations. In this paper, an adaptive multipopulation framework is proposed to address this issue. A database is designed to collect heuristic information about changes in algorithm behavior. The number of populations is adjusted according to statistical information on the current evolving status in the database and a heuristic value. Several other techniques are also introduced, including a heuristic clustering method, a population exclusion scheme, a population hibernation scheme, two movement schemes, and a peak hiding method. The particle swarm optimization and differential evolution algorithms are each implemented within the framework. A set of multipopulation-based algorithms are chosen for comparison with the proposed algorithms on the moving peaks benchmark using four different performance measures. The effects of the framework's components are also investigated on a set of multimodal problems in static environments. Experimental results show that the proposed algorithms outperform the other algorithms in most scenarios.
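One of the listed components, population exclusion, is easy to sketch. The version below is a generic illustration (the data layout and the exact rule are assumptions, not the paper's): when the best individuals of two populations crowd the same region, only the fitter population survives, freeing resources to track other peaks.

```python
import math

def exclusion(populations, radius):
    """Population exclusion sketch for multipopulation methods.
    `populations` is a list of (best_position, best_fitness) pairs,
    higher fitness being better. Populations whose best points lie
    within `radius` of an already-kept, fitter population are removed."""
    survivors = []
    for best, fitness in sorted(populations, key=lambda p: p[1],
                                reverse=True):
        # Keep this population only if no fitter survivor is nearby.
        if all(math.dist(best, kept) >= radius for kept, _ in survivors):
            survivors.append((best, fitness))
    return survivors
```

In a full framework this check would run periodically, alongside hibernation (freezing converged populations) and respawning, which together regulate the effective number of populations.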
Set-based multiobjective optimization methods, such as evolutionary multiobjective optimization (EMO) methods, attempt to find a set of Pareto-optimal solutions instead of a single optimal solution. To evaluate these algorithms for their convergence to the efficient set in multiobjective optimization problems, current performance metrics require knowledge of the true Pareto-optimal solutions. In this paper, we develop a theoretically motivated Karush–Kuhn–Tucker proximity measure (KKTPM) that can estimate the proximity of a set of tradeoff solutions to the true Pareto-optimal solutions without any prior knowledge. Besides the theoretical development of the proposed metric, the KKTPM is computed for iteration-wise tradeoff solutions obtained from specific EMO algorithms on two-, three-, five-, and ten-objective optimization problems. Results amply indicate the usefulness of the proposed KKTPM as a metric for evaluating different sets of tradeoff solutions and as a possible termination criterion for an EMO algorithm. Other possible uses of the proposed metric are also highlighted.
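For context, the optimality conditions the measure is built on are the standard KKT conditions for a scalarized constrained problem (a textbook statement, not the paper's exact KKTPM formulation). For minimizing $f(x)$ subject to $g_j(x) \le 0$, a point $x^*$ is a KKT point if multipliers $u_j$ exist such that:

```latex
\nabla f(x^*) + \sum_{j=1}^{J} u_j \,\nabla g_j(x^*) = 0, \qquad
u_j \, g_j(x^*) = 0, \qquad
u_j \ge 0, \qquad
g_j(x^*) \le 0, \quad j = 1, \dots, J.
```

Loosely speaking, the proximity measure quantifies how small a violation of these conditions a candidate solution admits, so it shrinks to zero as solutions approach Pareto-optimality, without requiring the true Pareto front to be known.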
Decomposition-based multiobjective evolutionary algorithms (MOEAs) have been widely studied and successfully used in practice. However, there have been no related theoretical studies of this class of MOEAs. In this paper, we theoretically analyze MOEAs based on decomposition. First, we analyze the runtime complexity on two simple instances: one in which the Pareto front maps one-to-one onto the decomposed subproblems, and one in which it does not. Second, we analyze the runtime complexity on two difficult instances with bad neighborhood relations in fitness space or decision space. Our studies show that: 1) in certain cases, a decomposition based on polynomially many evenly distributed weight parameters cannot map each point of a polynomial-sized Pareto front to a subproblem; 2) an ideal serialized algorithm can be very efficient on some simple instances; 3) the standard decomposition-based MOEA gains a constant-fraction runtime reduction from its neighborhood coevolution scheme; and 4) the standard decomposition-based MOEA performs well on difficult instances because the Pareto-domination-based and scalar-subproblem-based search schemes are combined in a proper way.
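Decomposition in this family of MOEAs turns the multiobjective problem into scalar subproblems, one per weight vector. A common choice, shown here as a generic sketch rather than necessarily the instances analyzed in the paper, is the Tchebycheff scalarization:

```python
def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarization: the subproblem defined by `weights`
    minimizes the largest weighted deviation of the objective vector
    `objectives` from the ideal point `ideal`."""
    return max(w * abs(f - z)
               for w, f, z in zip(weights, objectives, ideal))
```

Each evenly distributed weight vector defines one such subproblem, which is exactly why the paper's first result matters: if the Pareto front has polynomially many points but the weights cannot map each of them to a subproblem, some Pareto-optimal points are not the optimum of any subproblem.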
Over the last 25 years, the multiple sequence alignment (MSA) problem has attracted the attention of biologists because it underlies several areas of computational biology, such as homology searches, genomic annotation, protein structure prediction, gene regulation networks, and functional genomics. The problem involves aligning more than two biological sequences and is considered an NP-hard optimization problem. In the literature, we find a number of different approaches for dealing with this biological sequence alignment problem. Basically, we distinguish six main groups: 1) exact methods; 2) progressive methods; 3) consistency-based methods; 4) iterative methods; 5) evolutionary algorithms; and 6) structure-based methods. In this paper, we propose the use of evolutionary computation and multiobjective optimization for solving this bioinformatics problem. A multiobjective version of a memetic metaheuristic is presented: hybrid multiobjective metaheuristics for MSA. To demonstrate the effectiveness of the new proposal, we use three structure-based benchmarks created from empirical data as input. The results obtained by our method are compared with well-known methods from the literature, concluding that the new approach achieves remarkable accuracy on sets of sequences with low sequence similarity, the most frequent case in the real world.
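As a concrete example of the kind of objective an MSA optimizer evaluates, here is the classic sum-of-pairs score. This is a standard textbook objective and a simplified scoring scheme of my own choosing, not necessarily one of the paper's actual objectives:

```python
def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """Sum-of-pairs score of an MSA given as a list of equal-length
    aligned strings with '-' as the gap symbol. Every pair of rows
    contributes a match/mismatch/gap score in every column."""
    score = 0
    n_cols = len(alignment[0])
    for col in range(n_cols):
        chars = [row[col] for row in alignment]
        for i in range(len(chars)):
            for j in range(i + 1, len(chars)):
                a, b = chars[i], chars[j]
                if a == '-' and b == '-':
                    continue          # gap-gap pairs are not scored
                elif a == '-' or b == '-':
                    score += gap
                elif a == b:
                    score += match
                else:
                    score += mismatch
    return score
```

A multiobjective formulation would optimize several such criteria at once (for example, alignment score and number of fully aligned columns), yielding a Pareto set of alignments rather than a single one.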