A Novel Evolutionary Algorithm for Dynamic Constrained Multiobjective Optimization Problems

To promote research on dynamic constrained multiobjective optimization, we first propose a group of generic test problems with challenging characteristics, including different modes of the true Pareto front (e.g., convexity–concavity and connectedness–disconnectedness) and a changing feasible region. Subsequently, motivated by the challenges posed by dynamism and constraints, we design a dynamic constrained multiobjective optimization algorithm with a nondominated solution selection operator, a mating selection strategy, a population selection operator, a change detection method, and a change response strategy. The nondominated solution selection operator obtains a diverse nondominated population when the environment changes. The mating selection strategy and population selection operator adaptively handle infeasible solutions. When a change is detected, the proposed change response strategy reuses a portion of the old solutions in combination with randomly generated solutions to reinitialize the population, and a steady-state update method is designed to improve the retained previous solutions. The experimental results show that the proposed test problems clearly distinguish the performance of algorithms, and that the proposed algorithm is highly competitive for solving dynamic constrained multiobjective optimization problems in comparison with state-of-the-art algorithms.
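
As an illustration of the general idea (not the authors' implementation), the following Python sketch shows a change-response step that reinitializes a population by reusing a fraction of the previous solutions and filling the remainder with random solutions; the reuse ratio, bounds, and random subset selection are illustrative assumptions.

```python
# Minimal sketch of a change-response reinitialization step (assumed details).
import numpy as np

def respond_to_change(population, lower, upper, reuse_ratio=0.5, rng=None):
    """Reinitialize `population` (shape: n x d) after a detected change."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = population.shape
    n_keep = int(reuse_ratio * n)
    # Keep a random subset of the previous solutions (a diversity-preserving
    # selection could be substituted here).
    kept = population[rng.choice(n, size=n_keep, replace=False)]
    # Fill the rest with uniformly random solutions within the search bounds.
    fresh = rng.uniform(lower, upper, size=(n - n_keep, d))
    return np.vstack([kept, fresh])

# Example usage with a toy 10-individual, 3-variable population.
pop = np.random.rand(10, 3)
new_pop = respond_to_change(pop, lower=0.0, upper=1.0, reuse_ratio=0.4)
print(new_pop.shape)  # (10, 3)
```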

A Data-Driven Parallel Scheduling Approach for Multiple Agile Earth Observation Satellites

To address large-scale and time-consuming multiple agile earth observation satellite (multi-AEOS) scheduling problems, this article proposes a data-driven parallel scheduling approach composed of a probability prediction model, a task assignment strategy, and a parallel scheduling scheme. In this approach, given historical satellite scheduling data, a prediction model is trained based on the cooperative neuro-evolution of augmenting topologies (C-NEAT) to predict the probabilities that a task will be fulfilled by different satellites. Driven by the probability prediction model, an assignment strategy divides the multi-AEOS scheduling problem into several single-AEOS scheduling subproblems by adaptively assigning each task to the satellite with the highest predicted probability, which greatly decreases the problem size. The single-AEOS scheduling subproblems are then optimized in parallel, accelerating the optimization of the original problem. Computational experiments indicate that the proposed approach delivers better overall performance than other state-of-the-art methods within a very limited scheduling time. As the two main components of the proposed approach, the C-NEAT-based prediction model and the task assignment strategy also outperform models trained with traditional algorithms and nonadaptive assignment strategies, respectively.
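
To make the decomposition concrete, here is a hedged Python sketch (assumptions only, not the article's method): the trained prediction model is stood in for by a given probability matrix, each task is assigned to its highest-probability satellite, and the resulting single-satellite subproblems are solved with a placeholder scheduler in parallel.

```python
# Illustrative sketch of probability-driven task assignment and parallel scheduling.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def assign_tasks(prob):
    """prob[i, j] = predicted probability that satellite j fulfils task i."""
    return prob.argmax(axis=1)  # satellite index chosen for each task

def schedule_single_satellite(task_ids):
    # Placeholder for a single-AEOS scheduler; returns the tasks in a fixed
    # order just to keep the sketch runnable.
    return sorted(task_ids)

def parallel_schedule(prob, n_satellites):
    owner = assign_tasks(prob)
    subproblems = [np.flatnonzero(owner == j).tolist() for j in range(n_satellites)]
    with ProcessPoolExecutor() as pool:
        schedules = list(pool.map(schedule_single_satellite, subproblems))
    return schedules

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probabilities = rng.random((20, 4))      # 20 tasks, 4 satellites
    print(parallel_schedule(probabilities, n_satellites=4))
```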

Surrogate-Assisted Robust Optimization of Large-Scale Networks Based on Graph Embedding

Robust optimization of complex networks has attracted much attention in recent years. Although existing methods have achieved promising results, the computational cost of robust optimization tasks is extremely high, which prevents them from being applied to large-scale networks. Thus, computationally efficient robust optimization methods are in high demand. This article proposes a low-cost method for estimating the robustness of networks with the help of graph embedding techniques and surrogate models. An evolutionary algorithm is then developed to find large-scale robust networks by combining the surrogate-assisted low-cost robustness estimator with the time-consuming exact robustness measure by means of a model management strategy. The experimental results on different kinds of synthetic and real networks demonstrate the highly competitive search ability of the proposed algorithm. In addition, the algorithm is able to save up to 80% of the computation time for enhancing the robustness of large-scale networks compared with state-of-the-art methods.
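
The following Python sketch (illustrative assumptions, not the paper's estimator) shows the generic surrogate-assisted pattern: candidate networks produced by edge rewiring are screened with a cheap proxy, and only the most promising one is re-evaluated with an expensive attack-robustness measure. It assumes the networkx library; the degree-variance proxy stands in for a trained model over graph embeddings.

```python
# Sketch of a surrogate-assisted model-management loop for network robustness.
import random
import networkx as nx

def real_robustness(g):
    """Expensive measure: mean largest-component fraction under
    degree-based node removal (a simplified attack-robustness metric)."""
    g = g.copy()
    n = g.number_of_nodes()
    total = 0.0
    while g.number_of_nodes() > 1:
        g.remove_node(max(g.nodes, key=g.degree))
        total += max(len(c) for c in nx.connected_components(g)) / n
    return total / n

def surrogate_robustness(g):
    """Cheap proxy; a regression model over graph embeddings would go here."""
    degrees = [d for _, d in g.degree]
    mean = sum(degrees) / len(degrees)
    return -sum((d - mean) ** 2 for d in degrees) / len(degrees)

def rewire(g, rng):
    """Mutate a network by swapping the endpoints of two random edges."""
    h = g.copy()
    nx.double_edge_swap(h, nswap=1, max_tries=100, seed=rng.randint(0, 10**6))
    return h

def optimize(g, generations=20, offspring=8, seed=0):
    rng = random.Random(seed)
    best, best_r = g, real_robustness(g)
    for _ in range(generations):
        candidates = [rewire(best, rng) for _ in range(offspring)]
        # Screen with the surrogate; spend the exact evaluation on one candidate.
        promising = max(candidates, key=surrogate_robustness)
        r = real_robustness(promising)
        if r > best_r:
            best, best_r = promising, r
    return best, best_r

if __name__ == "__main__":
    net, r = optimize(nx.barabasi_albert_graph(50, 2, seed=1))
    print(f"robustness after optimization: {r:.3f}")
```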

Efficacy of the Metropolis Algorithm for the Minimum-Weight Codeword Problem Using Codeword and Generator Search Spaces

This article studies the efficacy of the Metropolis algorithm for the minimum-weight codeword problem. The input is a linear code $C$ given by its generator matrix, and our task is to compute a nonzero codeword in $C$ of least weight. In particular, we study the Metropolis algorithm on two possible search spaces for the problem: 1) the codeword space and 2) the generator space. The former is the space of all codewords of the input code; it is the most natural choice and has been used in previous work on this problem. The latter is the space of all generator matrices of the input code and is studied for the first time in this article. We show that, for an appropriately chosen temperature parameter, the Metropolis algorithm mixes rapidly when either of these search spaces is used. Experimentally, we demonstrate that the Metropolis algorithm performs favorably compared with previous attempts, and when using the generator space it outperforms the previous algorithms in most cases. We also provide theoretical and experimental justification for why the generator space is a worthwhile search space for this problem.
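
For readers unfamiliar with the codeword-space formulation, here is a minimal Python sketch (assumed details, not the article's exact chain): the state is a message vector m, its codeword is m·G over GF(2), a move flips one message bit, and moves are accepted with the standard Metropolis rule on the codeword's Hamming weight. The fixed temperature, step count, and toy [7,4] Hamming code are assumptions for illustration.

```python
# Metropolis walk over the codeword space of a binary linear code (sketch).
import math
import random
import numpy as np

def metropolis_min_weight(G, steps=20000, temperature=0.5, seed=0):
    """G: k x n generator matrix over GF(2) (numpy array of 0/1 entries)."""
    rng = random.Random(seed)
    k, n = G.shape
    m = np.array([rng.randint(0, 1) for _ in range(k)], dtype=np.uint8)
    codeword = (m @ G) % 2
    weight = int(codeword.sum())
    # The all-zero codeword may be visited by the chain but is never reported.
    best, best_w = codeword.copy(), weight if weight > 0 else n + 1
    for _ in range(steps):
        i = rng.randrange(k)
        m[i] ^= 1                      # flip one message bit -> neighboring codeword
        new_cw = (m @ G) % 2
        new_w = int(new_cw.sum())
        if new_w <= weight or rng.random() < math.exp((weight - new_w) / temperature):
            codeword, weight = new_cw, new_w
            if 0 < weight < best_w:
                best, best_w = codeword.copy(), weight
        else:
            m[i] ^= 1                  # reject: undo the flip
    return best, best_w

if __name__ == "__main__":
    # Toy [7,4] Hamming code generator matrix; its minimum weight is 3.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
    cw, w = metropolis_min_weight(G)
    print(w, cw)
```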

IEEE Transactions on Evolutionary Computation Society Information

Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Self-Adaptation in Nonelitist Evolutionary Algorithms on Discrete Problems With Unknown Structure

A key challenge in making effective use of evolutionary algorithms (EAs) is to choose appropriate settings for their parameters. However, the appropriate parameter setting generally depends on the structure of the optimization problem, which is often unknown to the user. Nondeterministic parameter control mechanisms adjust parameters using information obtained from the evolutionary process. Self-adaptation—where parameter settings are encoded in the chromosomes of individuals and evolve through mutation and crossover—is a popular parameter control mechanism in evolution strategies. However, there is little theoretical evidence that self-adaptation is effective, and self-adaptation has largely been ignored by the discrete evolutionary computation community. Here, we show through a theoretical runtime analysis that a nonelitist, discrete EA which self-adapts its mutation rate not only outperforms EAs which use static mutation rates on $\textsc{LeadingOnes}_{k}$ but also improves asymptotically on an EA using a state-of-the-art control mechanism. The structure of this problem depends on a parameter $k$, which is a priori unknown to the algorithm and which is needed to set a fixed mutation rate appropriately. The self-adaptive EA achieves the same asymptotic runtime as if this parameter were known to the algorithm beforehand, an asymptotic speedup for this problem compared with all other EAs previously studied. An experimental study of how the mutation rates evolve shows that they respond adequately to a diverse range of problem structures. These results suggest that self-adaptation should be adopted more broadly as a parameter control mechanism in discrete, nonelitist EAs.
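
To illustrate the self-adaptation mechanism in general terms (a minimal sketch under assumed settings, not the analyzed algorithm), the following Python example runs a nonelitist (mu, lambda) EA on bit strings in which each individual carries its own mutation rate; the rate is perturbed before being used, so well-suited rates hitchhike with fit offspring. Plain LeadingOnes stands in for the $\textsc{LeadingOnes}_{k}$ variant, and the halving/doubling rule and clamping range are illustrative choices.

```python
# Self-adaptive mutation rate in a nonelitist (mu, lambda) EA (sketch).
import random

def leading_ones(bits):
    count = 0
    for b in bits:
        if b == 0:
            break
        count += 1
    return count

def self_adaptive_ea(n=50, mu=10, lam=40, generations=300, seed=0):
    rng = random.Random(seed)
    # Each individual is (bitstring, mutation_rate); the rate is part of the genome.
    pop = [([rng.randint(0, 1) for _ in range(n)], 1.0 / n) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            bits, rate = rng.choice(pop)
            # Self-adaptation: halve or double the inherited rate at random,
            # clamped to a sensible range (an illustrative choice).
            rate = min(0.5, max(1.0 / n**2, rate * rng.choice((0.5, 2.0))))
            child = [b ^ (rng.random() < rate) for b in bits]
            offspring.append((child, rate))
        # Nonelitist (mu, lambda) selection: parents are always discarded.
        pop = sorted(offspring, key=lambda ind: leading_ones(ind[0]), reverse=True)[:mu]
    best = max(pop, key=lambda ind: leading_ones(ind[0]))
    return leading_ones(best[0]), best[1]

if __name__ == "__main__":
    fitness, final_rate = self_adaptive_ea()
    print(f"best LeadingOnes value: {fitness}, evolved rate: {final_rate:.4f}")
```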