Boosting Data-Driven Evolutionary Algorithm With Localized Data Generation

By efficiently building and exploiting surrogates, data-driven evolutionary algorithms (DDEAs) can be very helpful in solving expensive and computationally intensive problems. However, they still often suffer from two difficulties. First, many existing methods build a single ad hoc surrogate that suits some particular problems but may not work well on others. Second, the optimization accuracy of DDEAs deteriorates if the available data are insufficient for building accurate surrogates, which is common in expensive optimization problems. To this end, this article proposes a novel DDEA with two efficient components. First, a boosting strategy (BS) is proposed for self-aware model management, which iteratively builds and combines surrogates to obtain surrogate models suited to different problems. Second, a localized data generation (LDG) method is proposed to generate synthetic data that alleviate data shortage and increase data quantity, which is achieved by approximating fitness through data positions. By integrating the BS and the LDG, the resulting BDDEA-LDG algorithm is able to improve model accuracy and data quantity at the same time, automatically adapting to the problem at hand. In addition, a tradeoff is empirically considered to strike a better balance between the effectiveness of the surrogates and the time cost of building them. The experimental results show that the proposed BDDEA-LDG algorithm generally outperforms both traditional methods without surrogates and other state-of-the-art DDEAs on widely used benchmarks and on a real-world arterial traffic signal timing optimization problem. Furthermore, the proposed BDDEA-LDG algorithm requires only about 2% of the computational budget of traditional methods to produce competitive results.
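
To make the localized data generation idea concrete, the sketch below (in Python, with an illustrative function name and a simplified position-based fitness rule that merely copies the fitness of the originating point; the article's exact approximation scheme is not reproduced) shows how synthetic samples can be scattered around each evaluated point so that surrogates have more data to train on.

```python
import numpy as np

def localized_data_generation(X, y, n_new=2, radius=0.05, rng=None):
    """Illustrative LDG sketch: perturb each evaluated point within a small
    local radius and approximate the fitness of the synthetic point by the
    fitness of the nearby real sample (simplified position-based rule)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12        # per-dimension scale
    X_syn, y_syn = [], []
    for xi, yi in zip(X, y):
        for _ in range(n_new):
            X_syn.append(xi + rng.uniform(-radius, radius, size=xi.shape) * span)
            y_syn.append(yi)                            # fitness copied from the real sample
    return np.vstack([X, X_syn]), np.concatenate([y, y_syn])
```

The enlarged data set can then be used to fit one surrogate per boosting iteration, whose predictions are combined (e.g., by weighted voting) in the spirit of the boosting strategy.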

A Multifactorial Evolutionary Algorithm for Multitasking Under Interval Uncertainties

Various real-world applications with interval uncertainty, such as the path planning of mobile robots, the layout of radio-frequency identification readers, and solar desalination, can be formulated as interval multiobjective optimization problems (IMOOPs), which are usually transformed into one or a series of deterministic problems and solved with evolutionary algorithms. However, a common limitation of these approaches is that only a single optimization task can be tackled at a time. Inspired by the multifactorial evolutionary algorithm (MFEA), a novel interval MFEA (IMFEA) is proposed to solve multiple IMOOPs simultaneously using a single population of evolving individuals. In the proposed method, the potential interdependency across related problems can be explored in the unified genotype space, and several multiobjective interval optimization tasks are solved at once by promoting knowledge transfer for a more synergistic search, improving both the convergence speed and the quality of the optimal solution set. Specifically, an interval crowding distance based on shape evaluation is calculated to evaluate the interval solutions more comprehensively. In addition, an interval dominance relationship based on the evolutionary state of the population is designed to obtain the interval confidence level, which accounts for the difference in average convergence levels and the relative possibility degree between individuals. A strict proof of the transitivity of the presented dominance relationship is also given. The efficacy of the associated evolutionary algorithm is validated on a series of benchmark test functions, as well as on a real-world case of robot path planning over multiple terrains, providing insight into the performance of the method on IMOOPs.
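
For orientation, the snippet below shows a standard possibility-degree construct for comparing two interval objective values under minimization; it only illustrates the general idea of ranking intervals and is not the confidence-level-based dominance relation defined in the article.

```python
def possibility_degree(a, b):
    """Possibility that interval a = (a_lo, a_hi) is no worse than
    interval b = (b_lo, b_hi) under minimization, i.e. P(a <= b).
    A standard interval-comparison construct, illustrative only."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0.0:                                 # both intervals are crisp numbers
        return 1.0 if a_lo <= b_lo else 0.0
    return min(max((b_hi - a_lo) / width, 0.0), 1.0)

# example: a tight interval clearly below a wide one
print(possibility_degree((1.0, 2.0), (3.0, 6.0)))    # -> 1.0
```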

Combining Simple and Adaptive Monte Carlo Methods for Approximating Hypervolume

The computation of hypervolume is a key issue in multiobjective optimization, particularly, multiobjective evolutionary optimization. However, it is NP-hard to compute the exact hypervolume value. Monte Carlo methods have been widely used for approximating the hypervolume. Observing that the basic Monte Carlo method and the fully polynomial-time randomized approximation scheme (FPRAS) suit different solution sets, we propose a combination of these two methods and show that it performs very well on a number of solution sets.
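
As a point of reference, the basic Monte Carlo estimator (here written for a minimization problem) can be sketched as follows; the FPRAS and the proposed combination of the two methods are more involved and are not reproduced here.

```python
import numpy as np

def mc_hypervolume(points, ref, n_samples=20_000, seed=0):
    """Basic Monte Carlo hypervolume estimate for minimization: sample
    uniformly in the box between the ideal corner of `points` and the
    reference point `ref`, then count the fraction of samples dominated
    by at least one solution."""
    points = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    rng = np.random.default_rng(seed)
    lower = points.min(axis=0)                       # ideal corner of the sampling box
    samples = rng.uniform(lower, ref, size=(n_samples, len(ref)))
    # a sample is dominated if some solution is <= it in every objective
    dominated = (points[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return np.prod(ref - lower) * dominated.mean()

# example: two nondominated points with reference point (1, 1); exact value is 0.32
print(mc_hypervolume([[0.2, 0.8], [0.6, 0.4]], [1.0, 1.0]))
```

The relative error of such an estimator grows when the dominated fraction of the sampling box is small, which is one reason why different solution sets can favor different approximation schemes.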

Multiobjective Evolution Strategy for Dynamic Multiobjective Optimization

This article presents a novel evolution strategy-based evolutionary algorithm, named DMOES, which can efficiently and effectively solve multiobjective optimization problems in dynamic environments. First, an efficient self-adaptive, precision-controllable mutation operator is designed for individuals to explore and exploit the decision space. Second, a niching scheme that simulates isotropic magnetic particles guides the individuals to keep uniform spacing and spread so as to approximate the entire Pareto front automatically. Third, nondominated solution (NDS)-guided immigration facilitates population convergence, with two different strategies applied to the NDSs and the dominated solutions, respectively. As a result, the algorithm can track the new approximate Pareto set and approximate Pareto front as quickly as possible when the environment changes. In addition, DMOES can obtain a well-converged and well-diversified Pareto front with a much smaller population size and far lower computational cost; the larger the number of individuals, the sharper the contour of the resulting approximate Pareto front. Finally, the proposed algorithm is evaluated on the FDA, dMOP, UDF, and ZJZ test suites. The experimental results demonstrate competitive, and oftentimes better, performance compared with several state-of-the-art dynamic multiobjective evolutionary algorithms.
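
To illustrate the kind of step-size control an evolution strategy relies on, the snippet below shows the classic log-normal self-adaptive mutation; this is a generic ES operator shown only for orientation, not the precision-controllable operator designed for DMOES.

```python
import numpy as np

def self_adaptive_mutation(x, sigma, lower, upper, rng=None):
    """Classic log-normal self-adaptive ES mutation: each coordinate carries
    its own step size, which is itself mutated before perturbing the
    decision variables (generic ES operator, illustrative only)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    tau_global = 1.0 / np.sqrt(2.0 * n)              # shared learning rate
    tau_local = 1.0 / np.sqrt(2.0 * np.sqrt(n))      # per-coordinate learning rate
    sigma_new = sigma * np.exp(tau_global * rng.normal()
                               + tau_local * rng.normal(size=n))
    x_new = np.clip(x + sigma_new * rng.normal(size=n), lower, upper)
    return x_new, sigma_new
```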

Variable-Size Cooperative Coevolutionary Particle Swarm Optimization for Feature Selection on High-Dimensional Data

Evolutionary feature selection (FS) methods face the challenge of the “curse of dimensionality” when dealing with high-dimensional data. Focusing on this challenge, this article studies a variable-size cooperative coevolutionary particle swarm optimization algorithm (VS-CCPSO) for FS. The proposed algorithm employs the idea of “divide and conquer” from the cooperative coevolutionary approach, but several newly developed problem-guided operators/strategies make it more suitable for FS problems. First, a space division strategy based on feature importance is presented, which can group relevant features into the same subspace at a low computational cost. Then, an adaptive adjustment mechanism for the subswarm size is developed to maintain an appropriate size for each subswarm, with the purpose of saving the computational cost of evaluating particles. Moreover, a particle deletion strategy based on fitness-guided binary clustering and a particle generation strategy based on feature importance and crossover are both designed to ensure the quality of particles in the subswarms. We apply VS-CCPSO to 12 typical datasets and compare it with six state-of-the-art methods. The experimental results show that VS-CCPSO is capable of obtaining good feature subsets, suggesting its competitiveness for tackling FS problems with high dimensionality.
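
A minimal sketch of the divide-and-conquer step, assuming a generic per-feature importance score and a simple contiguous grouping rule (the article's importance measure and adaptive subswarm sizing are not reproduced), is given below.

```python
import numpy as np

def divide_feature_space(importance, n_subspaces):
    """Group features of similar importance into the same subspace by
    sorting on an importance score and splitting the ranking into
    contiguous blocks (a simplified grouping rule, illustrative only)."""
    order = np.argsort(importance)[::-1]             # most important first
    return [block.tolist() for block in np.array_split(order, n_subspaces)]

# example: 10 features divided among 3 cooperating subswarms
print(divide_feature_space(np.random.rand(10), 3))
```

Each subswarm then evolves particles only over its own subspace, and complete feature subsets are assembled by cooperation across subswarms.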

Limit-Cycle-Based Mutant Multiobjective Pigeon-Inspired Optimization

This article presents a limit-cycle-based mutant multiobjective pigeon-inspired optimization (PIO) algorithm. In this algorithm, the limit-cycle-based mechanism is devised to account for the factors that affect the flight of pigeons and to simplify the multiobjective PIO algorithm. A mutant mechanism is incorporated to strengthen the exploration capability in the evolutionary process. Additionally, a dual repository is applied to store and select nondominated solutions that guide the flight of the pigeons. Owing to the limit-cycle-based mutant mechanisms, the algorithm not only achieves faster convergence and higher accuracy but also improves population diversity. To confirm the general applicability of the algorithm, a theoretical analysis of its convergence is also provided. Finally, comparative experiments between the proposed algorithm and five other multiobjective methods are conducted to verify the accuracy, efficiency, and convergence stability of the proposed algorithm.
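
For context, the standard PIO map-and-compass update, extended here with a simple illustrative mutation on a random subset of pigeons, is sketched below; the limit-cycle mechanism and the dual repository of the proposed algorithm are not reproduced.

```python
import numpy as np

def map_compass_step(X, V, gbest, R, t, mutation_rate=0.1, rng=None):
    """Standard PIO map-and-compass update for a population X with
    velocities V, plus a simple Gaussian mutation on a random subset of
    pigeons (illustrative; not the limit-cycle mechanism of the article)."""
    rng = np.random.default_rng(rng)
    V_new = V * np.exp(-R * t) + rng.random(X.shape) * (gbest - X)
    X_new = X + V_new
    mutate = rng.random(len(X)) < mutation_rate      # mutant mechanism (illustrative)
    X_new[mutate] += rng.normal(scale=0.1, size=X_new[mutate].shape)
    return X_new, V_new
```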