Dual-Surrogate-Assisted Cooperative Particle Swarm Optimization for Expensive Multimodal Problems

Various real-world applications can be classified as expensive multimodal optimization problems. When surrogate-assisted evolutionary algorithms (SAEAs) are employed to tackle these problems, they not only face a trade-off between the precision of surrogate models and the cost of individual evaluations but also struggle to match surrogate models to problem modalities. To address this issue, this article studies a dual-surrogate-assisted cooperative particle swarm optimization algorithm that seeks multiple optimal solutions. A dual-population cooperative particle swarm optimizer is first developed to simultaneously explore and exploit multiple modalities. Following that, a modal-guided dual-layer cooperative surrogate model, which contains one upper global surrogate model and a group of lower local surrogate models, is constructed to reduce the individual evaluation cost. Moreover, a hybrid strategy based on clustering and peak-valley detection is proposed to discover new modalities. Experimental results on 11 benchmark instances and a building energy conservation problem show that, compared with five existing SAEAs and seven multimodal evolutionary algorithms, the proposed algorithm simultaneously obtains multiple highly competitive optimal solutions at a low computational cost.
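The dual-layer idea (one global surrogate plus local surrogates around detected modalities) can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the RBF interpolant, the toy objective, and the choice of the three best points as "modality centers" are all assumptions made here for demonstration.

```python
import numpy as np

def rbf_surrogate(X, y, gamma=1.0):
    """Fit a Gaussian-kernel RBF interpolant and return a predictor."""
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)  # small ridge for stability
    return lambda x: np.exp(-gamma * np.sum((X - x) ** 2, axis=1)) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # cheap stand-in for the expensive objective

# upper layer: one global surrogate over all evaluated points
global_model = rbf_surrogate(X, y)

# lower layer: local surrogates fitted around hypothetical modality centers
centers = X[np.argsort(y)[:3]]                # here simply the 3 best evaluated points
local_models = []
for c in centers:
    idx = np.argsort(np.sum((X - c) ** 2, axis=1))[:10]  # 10 nearest archive points
    local_models.append(rbf_surrogate(X[idx], y[idx], gamma=5.0))
```

During the search, cheap surrogate predictions would stand in for most expensive evaluations, with the local models screening candidates near each modality.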

Region Encoding Helps Evolutionary Computation Evolve Faster: A New Solution Encoding Scheme in Particle Swarm for Large-Scale Optimization

In the last decade, many evolutionary computation (EC) algorithms with diversity enhancement have been proposed to solve large-scale optimization problems in the big data era. Among them, the social learning particle swarm optimization (SLPSO) algorithm has shown good performance. However, because SLPSO uses different guidance information for different particles to maintain diversity, it often converges slowly. Therefore, this article proposes a new region encoding scheme (RES) that extends the solution representation from a single point to a region, which can help EC algorithms evolve faster. RES is generic for EC algorithms and is applied here to SLPSO. Based on RES, a novel adaptive region search (ARS) is designed to preserve the diversity of SLPSO while accelerating its convergence, forming SLPSO with ARS (SLPSO-ARS). In SLPSO-ARS, each particle is encoded as a region so that some of the best (e.g., the top P) particles can carry out region search for better solutions near their current positions. The ARS strategy gives a particle a greater chance of discovering nearby optimal solutions and helps accelerate the convergence of the whole population. Moreover, the region radius is adaptively controlled based on the search information. Comprehensive experiments on all the problems in the IEEE Congress on Evolutionary Computation 2010 (CEC 2010) and 2013 (CEC 2013) competitions are conducted to validate the effectiveness and efficiency of SLPSO-ARS and to investigate its important parameters and components. The experimental results show that SLPSO-ARS generally performs better than the compared algorithms.
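The core region-search step (sampling candidates inside a particle's region and keeping the best) can be sketched as follows. This is a minimal, hedged illustration: the uniform sampling inside a hypercube, the sample count, and the success/failure radius schedule at the end are assumptions, not the paper's exact adaptive rule.

```python
import numpy as np

def region_search(position, radius, fitness, n_samples=5, rng=None):
    """Sample candidates uniformly inside the particle's region and keep
    the best solution found (the particle's own position is a fallback)."""
    rng = rng or np.random.default_rng()
    best_x, best_f = position, fitness(position)
    for _ in range(n_samples):
        cand = position + rng.uniform(-radius, radius, size=position.shape)
        f = fitness(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))      # toy minimization objective
x0 = np.ones(10)
x1, f1 = region_search(x0, radius=0.2, fitness=sphere, rng=np.random.default_rng(1))

# assumed adaptive control: grow the radius after a success, shrink after a failure
radius = 0.2 * (1.5 if f1 < sphere(x0) else 0.5)
```

In SLPSO-ARS this search would be applied only to the top-ranked particles each generation, so the extra fitness evaluations stay bounded.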

GPEM 22(2) is now available

The second issue of Volume 22 of Genetic Programming and Evolvable Machines is now available for download.
It contains:
Discovering novel memory cell designs for sentiment analysis on tweets
by Sergiu Cosmin Nistor, Mircea Moca, Răzvan Liviu Nistor
An enhanced Huffman-PSO based image optimization algorithm for image steganography
by Neha Sharma, Usha Batra
TPOT-NN: augmenting tree-based automated machine learning with neural network estimators
by Joseph D. Romano, Trang T. Le, Weixuan Fu, Jason H. Moore
Efficiency improvement of genetic network programming by tasks decomposition in different types of environments
by Mohamad Roshanzamir, Maziar Palhang, Abdolreza Mirzaei

TechRxiv: Share Your Preprint Research with the World!

Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.

A Multipopulation Evolutionary Algorithm for Solving Large-Scale Multimodal Multiobjective Optimization Problems

Multimodal multiobjective optimization problems (MMOPs) widely exist in real-world applications; they have multiple equivalent Pareto-optimal solutions that are similar in the objective space but totally different in the decision space. While some evolutionary algorithms (EAs) have been developed in recent years to find these equivalent Pareto-optimal solutions, they are ineffective at handling large-scale MMOPs with a large number of variables. This article thus proposes an EA for solving large-scale MMOPs with sparse Pareto-optimal solutions, i.e., most variables in the optimal solutions are 0. The proposed algorithm explores different regions of the decision space via multiple subpopulations and guides their search behavior via adaptively updated guiding vectors. The guiding vector for each subpopulation not only provides efficient convergence in the huge search space but also differentiates its search direction from the others to handle the multimodality. While most existing EAs solve MMOPs with 2–7 decision variables, the proposed algorithm is shown to be effective on benchmark MMOPs with up to 500 decision variables. Moreover, it also produces better results than state-of-the-art methods for neural architecture search.
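One plausible reading of "guiding vectors for sparse solutions" is a per-variable probability of being nonzero, updated toward the nonzero pattern of each subpopulation's elites. The sketch below is an assumption-laden illustration of that idea (the sampling rule, the learning rate, and the elite count are all invented here), not the paper's operator.

```python
import numpy as np

def sample_sparse_solution(guide, rng):
    """Sample a solution whose nonzero pattern follows the guiding vector
    (guide[i] = probability that variable i is nonzero)."""
    mask = rng.random(guide.shape) < guide
    return mask * rng.uniform(-1, 1, guide.shape)

def update_guide(guide, elites, lr=0.2):
    """Move the guiding vector toward the nonzero frequency of elite solutions."""
    freq = np.mean(elites != 0, axis=0)
    return (1 - lr) * guide + lr * freq

rng = np.random.default_rng(0)
d = 500
# two subpopulations start from different guiding vectors, so their search
# directions are differentiated from the outset
guides = [np.full(d, 0.05), np.full(d, 0.5)]
pops = [np.array([sample_sparse_solution(g, rng) for _ in range(20)]) for g in guides]
guides = [update_guide(g, p[:5]) for g, p in zip(guides, pops)]
```

Because each subpopulation samples around its own guide, the subpopulations can converge to different equivalent Pareto-optimal regions of the decision space.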

Preserving Population Diversity Based on Transformed Semantics in Genetic Programming for Symbolic Regression

Population diversity plays an important role in avoiding premature convergence in evolutionary techniques, including genetic programming (GP). Obtaining an adequate level of diversity during the evolutionary process has become a concern of much previous research in GP. This work proposes a new novelty metric for an entropy-based diversity measure in GP. The new novelty metric is based on the transformed semantics of models in GP, where the semantics are the set of outputs of a model on the training data and principal component analysis is used to transform the semantics. Based on the new novelty metric, a new diversity-preserving framework, which incorporates a new fitness function and a new selection operator, is proposed to help GP achieve a good balance between exploration and exploitation, thus enhancing its learning and generalization performance. Compared with two state-of-the-art diversity-preserving methods, the new method generalizes better and reduces the overfitting trend more effectively in most cases. Further examination of the properties of the search process confirms that the new framework notably enhances the evolvability and locality of GP.
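The transformed-semantics idea can be sketched directly: stack each model's outputs on the training cases into a matrix, project onto the leading principal components, and score novelty by distance to neighbors in that space. The PCA-via-SVD step and the k-nearest-neighbor novelty score below are standard constructions assumed here for illustration; the paper's exact metric may differ.

```python
import numpy as np

def transformed_semantics(S, k=2):
    """Project semantics (rows = models, cols = outputs on training cases)
    onto the top-k principal components via SVD."""
    Sc = S - S.mean(axis=0)                    # center each training case
    _, _, Vt = np.linalg.svd(Sc, full_matrices=False)
    return Sc @ Vt[:k].T

def novelty(T, i, n_neighbors=3):
    """Novelty of model i: mean distance to its nearest neighbors in the
    transformed semantic space (larger = more behaviorally novel)."""
    d = np.sort(np.linalg.norm(T - T[i], axis=1))[1:n_neighbors + 1]
    return d.mean()

rng = np.random.default_rng(0)
S = rng.normal(size=(30, 100))                 # toy semantics: 30 models, 100 cases
T = transformed_semantics(S, k=2)
scores = np.array([novelty(T, i) for i in range(len(T))])
```

A selection operator could then blend these novelty scores with fitness so that behaviorally redundant models are penalized even when their error is low.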

Paired Offspring Generation for Constrained Large-Scale Multiobjective Optimization

Constrained multiobjective optimization problems (CMOPs) widely exist in real-world applications, and they are challenging for conventional evolutionary algorithms (EAs) due to the existence of multiple constraints and objectives. When the number of objectives or decision variables is scaled up in CMOPs, the performance of EAs may degrade dramatically, and they may fail to obtain any feasible solutions. To address this issue, we propose a paired-offspring-generation-based multiobjective EA for constrained large-scale optimization. The general idea is to emphasize the role of offspring generation in reproducing promising feasible or useful infeasible offspring solutions. We first adopt a small set of reference vectors to construct several subpopulations with a fixed number of neighborhood solutions. Then, a pairing strategy is adopted to determine pairwise parent solutions for offspring generation. Consequently, the pairwise parent solutions, which could be infeasible, may guide the generation of well-converged solutions that cross the infeasible region(s) effectively. The proposed algorithm is evaluated on CMOPs with up to 1000 decision variables and ten objectives. Moreover, each component of the proposed algorithm is examined in terms of its effect on the overall algorithmic performance. Experimental results on a variety of existing and tailored test problems demonstrate the effectiveness of the proposed algorithm in constrained large-scale multiobjective optimization.
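A toy version of the pairing idea can be written down: match each feasible parent with a nearby infeasible one so that their offspring can cross infeasible regions. The pairing rule, the constraint-violation encoding, and the midpoint crossover below are all hypothetical simplifications chosen for this sketch, not the paper's operators.

```python
import numpy as np

def pair_parents(pop, cv, rng):
    """Pair each feasible parent (cv == 0) with its nearest infeasible
    neighbor (cv > 0); fall back to a random feasible mate if none exist."""
    feas = np.where(cv == 0)[0]
    infeas = np.where(cv > 0)[0]
    pairs = []
    for i in feas:
        if len(infeas) == 0:
            j = int(rng.choice(feas))
        else:
            j = int(infeas[np.argmin(np.linalg.norm(pop[infeas] - pop[i], axis=1))])
        pairs.append((int(i), j))
    return pairs

rng = np.random.default_rng(0)
pop = rng.uniform(0, 1, (8, 5))                       # 8 parents, 5 decision variables
cv = np.array([0, 0, 0.3, 0, 0.1, 0, 0, 0.2])         # toy constraint violations
pairs = pair_parents(pop, cv, rng)

# hypothetical offspring operator: midpoint of a feasible/infeasible pair
child = 0.5 * (pop[pairs[0][0]] + pop[pairs[0][1]])
```

The intuition is that an infeasible-but-well-converged mate pulls the offspring toward the Pareto front even when a feasibility barrier lies in between.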