Surrogate-Assisted Evolutionary Multitask Genetic Programming for Dynamic Flexible Job Shop Scheduling

Dynamic flexible job shop scheduling (JSS) is an important combinatorial optimization problem with complex routing and sequencing decisions under dynamic environments. Genetic programming (GP), as a hyperheuristic approach, has been successfully applied to evolve scheduling heuristics for JSS. However, its training process is time consuming, and it faces a retraining problem once the characteristics of the job shop scenario vary. Multitask learning is a promising paradigm for solving multiple tasks simultaneously by sharing knowledge among the tasks. To improve training efficiency and effectiveness, this article proposes a novel surrogate-assisted evolutionary multitask algorithm via GP to share useful knowledge between different scheduling tasks. Specifically, we employ a phenotypic characterization to measure the behavior of scheduling rules and build a surrogate for each task accordingly. The built surrogates are used not only to improve the efficiency of solving each single task but also to transfer knowledge in multitask learning via a large number of promising individuals. The results show that the proposed algorithm significantly improves the quality of the scheduling heuristics for all scenarios. In addition, the proposed algorithm manages to solve multiple tasks collaboratively in terms of the evolved scheduling heuristics for the different tasks in a multitask scenario.
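The phenotypic-characterization idea can be sketched roughly as follows: a dispatching rule is described by which candidate it ranks first in a fixed set of decision situations, and a cheap nearest-neighbor surrogate estimates fitness from behavioral similarity. The job features ('pt', 'due'), the Hamming distance, and the 1-NN estimator below are illustrative assumptions, not the paper's exact design.

```python
def phenotype(rule, situations):
    """For each decision situation (a list of candidate jobs), record which
    job the rule ranks first; the resulting vector captures the rule's
    behavior independently of its syntactic (tree) representation."""
    return tuple(min(range(len(cands)), key=lambda i: rule(cands[i]))
                 for cands in situations)

def surrogate_estimate(archive, pheno):
    """1-NN surrogate: estimate the fitness of an unevaluated rule by the
    fitness of the behaviorally closest archived rule."""
    hamming = lambda p: sum(a != b for a, b in zip(p, pheno))
    return min(archive, key=lambda entry: hamming(entry[0]))[1]

# Two classic rules: shortest processing time and earliest due date.
spt = lambda job: job['pt']
edd = lambda job: job['due']

situations = [
    [{'pt': 3, 'due': 9}, {'pt': 5, 'due': 4}],
    [{'pt': 2, 'due': 2}, {'pt': 4, 'due': 8}],
]
archive = [(phenotype(spt, situations), 15.0)]  # (phenotype, evaluated fitness)
estimate = surrogate_estimate(archive, phenotype(edd, situations))
```

Because the phenotype depends only on observable decisions, the same representation can compare rules evolved for different tasks, which is what enables the surrogate-based knowledge transfer.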

On the Effect of the Cooperation of Indicator-Based Multiobjective Evolutionary Algorithms

For almost 20 years, quality indicators (QIs) have promoted the design of new selection mechanisms for multiobjective evolutionary algorithms (MOEAs). Each indicator-based MOEA (IB-MOEA) has specific search preferences related to its baseline QI, producing Pareto front approximations with different properties. Consequently, an IB-MOEA based on a single QI is expected to perform well on only a limited class of multiobjective optimization problems (MOPs). This issue is emphasized when the associated Pareto front geometries are highly irregular. To overcome these issues, we propose an island-based multiindicator algorithm (IMIA) that takes advantage of the search biases of multiple IB-MOEAs through a cooperative scheme. Our experimental results show that the cooperation of multiple IB-MOEAs allows IMIA to perform more robustly (considering several QIs) than the panmictic versions of its baseline IB-MOEAs as well as several state-of-the-art MOEAs. Additionally, IMIA exhibits a Pareto-front-shape invariance property, which makes it a remarkable optimizer when tackling MOPs with complex Pareto front geometries.

Learning Adaptive Differential Evolution Algorithm From Optimization Experiences by Policy Gradient

Differential evolution is one of the most prestigious population-based stochastic optimization algorithms for black-box problems. The performance of a differential evolution algorithm depends heavily on its mutation and crossover strategies and the associated control parameters. However, determining the most suitable parameter setting is troublesome and time consuming. Adaptive parameter control methods that can adapt to the problem landscape and optimization environment are preferable to fixed parameter settings. This article proposes a novel adaptive parameter control approach based on learning from optimization experiences over a set of problems. In the approach, parameter control is modeled as a finite-horizon Markov decision process. A reinforcement learning algorithm, named policy gradient, is applied to learn an agent (i.e., a parameter controller) that adaptively provides the control parameters of the proposed differential evolution during the search procedure. The differential evolution algorithm based on the learned agent is compared against nine well-known evolutionary algorithms on the CEC'13 and CEC'17 test suites. Experimental results show that the proposed algorithm performs competitively against the compared algorithms on these test suites.
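The overall setup can be sketched as a DE/rand/1/bin loop whose F and CR are supplied per generation by an external controller. The decaying-F controller below is a hand-written stand-in for the learned policy-gradient agent, and the sphere objective is purely illustrative.

```python
import random

def de_with_controller(obj, dim, bounds, controller, pop_size=20, gens=100, seed=0):
    """DE/rand/1/bin where the mutation factor F and crossover rate CR are
    provided each generation by a controller (in the paper, a learned agent)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for g in range(gens):
        F, CR = controller(g, gens)  # the agent decides the parameters
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # bound repair
            f = obj(trial)
            if f <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    return min(fit)

# Stand-in controller: decay F over the horizon, keep CR high
# (a common heuristic, not the learned policy from the paper).
controller = lambda g, gens: (0.9 - 0.5 * g / gens, 0.9)
sphere = lambda x: sum(v * v for v in x)
best = de_with_controller(sphere, 5, (-5.0, 5.0), controller)
```

Replacing the lambda with a trained policy that maps search-state features to (F, CR) recovers the structure of the proposed approach.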

Analysis of Evolutionary Algorithms on Fitness Function With Time-Linkage Property

In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as on historical solutions. Although rigorous theoretical analysis of evolutionary algorithms (EAs) has developed rapidly over the recent two decades, it remains an open problem to theoretically understand the behavior of EAs on time-linkage problems. This article takes the first step toward rigorously analyzing EAs for time-linkage functions. Based on the basic OneMax function, we propose a time-linkage function in which the first bit value of the last time step is integrated but has a different preference from the current first bit. We prove that, with probability $1-o(1)$, randomized local search and the (1+1) EA cannot find the optimum, whereas with probability $1-o(1)$, the $(\mu+1)$ EA is able to reach the optimum.
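The flavor of such a time-linkage function can be sketched as follows: the current solution is rewarded OneMax-style, but the stored first bit of the previous step enters with the opposite preference. The exact weighting below is a hypothetical choice for illustration, not the paper's definition.

```python
def onemax(x):
    """Classic OneMax: count the ones in a bit string."""
    return sum(x)

def time_linkage_fitness(x, prev_first_bit, n):
    """Illustrative time-linkage variant (hypothetical weighting): the
    current step prefers x[0] = 1 via OneMax, but the first bit remembered
    from the previous step is penalized, so the two preferences conflict
    across time steps."""
    return onemax(x) - (n + 1) * prev_first_bit
```

A greedy algorithm that sets the first bit to 1 early gains one fitness unit now but pays the larger penalty on the next evaluation, which is the kind of trap that makes single-trajectory algorithms such as randomized local search fail here.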

A Dual-Population-Based Evolutionary Algorithm for Constrained Multiobjective Optimization

The main challenge in constrained multiobjective optimization problems (CMOPs) is to appropriately balance convergence, diversity, and feasibility. Their imbalance can easily cause a constrained multiobjective evolutionary algorithm (CMOEA) to fail to converge to the Pareto-optimal front with diverse feasible solutions. To address this challenge, we propose a dual-population-based evolutionary algorithm, named c-DPEA, for CMOPs. c-DPEA is a cooperative coevolutionary algorithm that maintains two collaborative and complementary populations, termed Population1 and Population2. In c-DPEA, a novel self-adaptive penalty function, termed saPF, is designed to preserve competitive infeasible solutions in Population1. On the other hand, infeasible solutions in Population2 are handled using a feasibility-oriented approach. To maintain an appropriate balance between convergence and diversity in c-DPEA, a new adaptive fitness function, named bCAD, is developed. Extensive experiments on three popular test suites comprehensively validate the design components of c-DPEA. Comparison against six state-of-the-art CMOEAs demonstrates that c-DPEA is significantly superior or comparable to the contender algorithms on most of the test problems.

Large-Scale Evolutionary Multiobjective Optimization Assisted by Directed Sampling

It is particularly challenging for evolutionary algorithms to converge quickly to the Pareto front in large-scale multiobjective optimization. To tackle this problem, this article proposes a large-scale multiobjective evolutionary algorithm assisted by selected individuals generated by directed sampling (DS). At each generation, a set of individuals closer to the ideal point is chosen to perform DS in the decision space, and the nondominated sampled solutions are used to assist reproduction, improving convergence in evolutionary large-scale multiobjective optimization. In addition, elitist nondominated sorting is adopted complementarily with a reference-vector-based method for environmental selection in order to maintain the diversity of the population. Our experimental results show that the proposed algorithm is highly competitive on large-scale multiobjective optimization test problems with up to 5000 decision variables compared to five state-of-the-art multiobjective evolutionary algorithms.
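The sampling step can be sketched as generating points along a line in decision space from a promising solution toward a guiding point. Treating the guiding point as a given target vector is an assumption for illustration; in the paper the guiding direction is derived from individuals close to the ideal point in objective space.

```python
def directed_sampling(x, guide, num_samples, step=0.25):
    """Generate num_samples points along the line from solution x toward a
    guiding point in decision space. Evenly spaced steps are an illustrative
    choice; the sampled points would then be evaluated and their
    nondominated members used to assist reproduction."""
    direction = [g - v for v, g in zip(x, guide)]
    return [[v + step * (s + 1) * d for v, d in zip(x, direction)]
            for s in range(num_samples)]

samples = directed_sampling([0.0, 0.0], [1.0, 1.0], 4)
```

Because each sample costs only one evaluation regardless of dimensionality, this kind of line search scales naturally to problems with thousands of decision variables.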

Multiple Penalties and Multiple Local Surrogates for Expensive Constrained Optimization

This article proposes an evolutionary algorithm using multiple penalties and multiple local surrogates (MPMLS) for expensive constrained optimization. In each generation, MPMLS defines and optimizes a number of subproblems. Each subproblem penalizes the constraints of the original problem using a different penalty coefficient and has its own search subregion. A local surrogate is built for optimizing each subproblem. MPMLS has two major advantages: 1) it maintains good population diversity, so the search can approach the optimal solution of the original problem from different directions, and 2) it only needs to build local surrogates, reducing the computational overhead of model building. Numerical experiments demonstrate that the proposed algorithm performs much better than several state-of-the-art evolutionary algorithms.
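The penalty construction can be sketched as below. The objective, constraint, and coefficient values are illustrative; the paper additionally assigns each subproblem its own search subregion and local surrogate model.

```python
def make_subproblems(f, violation, coefficients):
    """One penalized scalar subproblem per coefficient rho_k:
        minimize f(x) + rho_k * violation(x).
    A small rho_k tolerates infeasible solutions (exploration from the
    infeasible side); a large rho_k drives the search toward feasibility."""
    return [lambda x, rho=rho: f(x) + rho * violation(x)
            for rho in coefficients]

f = lambda x: x * x                       # objective (illustrative)
violation = lambda x: max(0.0, 1.0 - x)   # degree of violating x >= 1
subproblems = make_subproblems(f, violation, [0.0, 10.0])
```

Optimizing these subproblems in parallel lets the population approach the constrained optimum from both the feasible and infeasible sides, which is the diversity argument in the abstract.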

Two-Stage Double Niched Evolution Strategy for Multimodal Multiobjective Optimization

In recent years, numerous efficient and effective multimodal multiobjective evolutionary algorithms (MMOEAs) have been developed to search for multiple equivalent sets of Pareto optimal solutions simultaneously. However, some MMOEAs prefer convergent individuals over diversified individuals when constructing the mating pool, so individuals with a slightly better decision-space distribution may be replaced by individuals with a significantly better objective-space distribution. Therefore, diversity in the decision space may deteriorate, even though most MMOEAs take decision and objective diversities into account simultaneously. Because the Pareto optimal subsets may have various shapes and locations in the decision space, it is very difficult to drive the individuals to converge to every Pareto subregion with a uniform density. Some Pareto subregions may be overly crowded, while others are rather sparsely distributed. Consequently, many existing MMOEAs obtain Pareto subregions with imbalanced density. In this article, we present a two-stage double niched evolution strategy, named DN-MMOES, to search for the equivalent global Pareto optimal solutions, which addresses the above challenges effectively and efficiently. The proposed DN-MMOES solves the multimodal multiobjective optimization problem (MMOP) in two stages: the first stage adopts a niching strategy in the decision space, while the second stage adopts a double niching strategy in both spaces. Moreover, an effective decision-density self-adaptive strategy is designed to improve imbalanced decision-space density. The proposed algorithm is compared against eight state-of-the-art MMOEAs. The inverted generational distance union (IGDunion) performance indicator is proposed to fairly compare two competing MMOEAs as a whole. The experimental results show that DN-MMOES performs better in searching for the complete Pareto subsets and Pareto front on the IDMP and CEC 2019 MMOP test suites.

Dual-Surrogate-Assisted Cooperative Particle Swarm Optimization for Expensive Multimodal Problems

Various real-world applications can be classified as expensive multimodal optimization problems. When surrogate-assisted evolutionary algorithms (SAEAs) are employed to tackle these problems, they face not only a tradeoff between the precision of surrogate models and the cost of individual evaluations but also the difficulty of matching surrogate models to the problem's modalities. To address this issue, this article studies a dual-surrogate-assisted cooperative particle swarm optimization algorithm to seek multiple optimal solutions. A dual-population cooperative particle swarm optimizer is first developed to simultaneously explore/exploit multiple modalities. Then, a modal-guided dual-layer cooperative surrogate model, which contains one upper global surrogate model and a group of lower local surrogate models, is constructed to reduce the cost of individual evaluations. Moreover, a hybrid strategy based on clustering and peak-valley detection is proposed to discover new modalities. Experimental results on 11 benchmark instances and a building energy conservation problem show that, compared with five existing SAEAs and seven multimodal evolutionary algorithms, the proposed algorithm can simultaneously obtain multiple highly competitive optimal solutions at a low computational cost.