A Multimodel Prediction Method for Dynamic Multiobjective Evolutionary Optimization

A large number of prediction strategies are specific to a dynamic multiobjective optimization problem (DMOP) with only one type of Pareto set (PS) change. However, a continuous DMOP with more than one type of unknown PS change has seldom been investigated. We present a multimodel prediction approach (MMP), realized in the framework of evolutionary algorithms (EAs), to tackle this problem. In this paper, we first detect the type of the PS change and then select an appropriate prediction model to provide an initial population for the subsequent evolution. To observe the influence of MMP on EAs, the optimal solutions obtained by three classical dynamic multiobjective EAs with and without MMP are investigated. Furthermore, to assess the performance of MMP, three state-of-the-art prediction strategies are compared on a large number of dynamic test instances under the same particle swarm optimizer. The experimental results demonstrate that the proposed approach outperforms the compared counterparts on most of the optimization problems.
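
As a rough illustration of the idea of detecting the change type and then choosing a prediction model, the following Python sketch classifies a PS change from centroid and spread statistics and builds an initial population accordingly. All function and parameter names are ours, chosen for illustration; this is a minimal stand-in under simple assumptions, not the authors' MMP implementation.

    import numpy as np

    def mmp_initial_population(prev_ps, curr_ps, pop, bounds, rng, eps=1e-3):
        """Hypothetical sketch of a multimodel prediction step.

        prev_ps, curr_ps : (N, D) arrays approximating the PS at times t-1 and t.
        pop              : (N, D) current population.
        bounds           : (lo, hi) decision-space bounds.
        """
        lo, hi = bounds
        delta = curr_ps.mean(axis=0) - prev_ps.mean(axis=0)   # centroid movement
        spread = curr_ps.std(axis=0) - prev_ps.std(axis=0)    # change in extent

        if np.linalg.norm(delta) < eps and np.linalg.norm(spread) < eps:
            # No detectable PS change: keep the current population.
            new_pop = pop.copy()
        elif np.linalg.norm(spread) < eps:
            # Translation-like change: linear prediction of the next centroid shift.
            new_pop = curr_ps + delta + rng.normal(0.0, eps, curr_ps.shape)
        else:
            # Shape or unknown change: mix predicted and randomly sampled individuals.
            predicted = curr_ps + delta
            random_part = rng.uniform(lo, hi, curr_ps.shape)
            mask = rng.random(len(curr_ps)) < 0.5
            new_pop = np.where(mask[:, None], predicted, random_part)
        return np.clip(new_pop, lo, hi)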

An Evolutionary Algorithm for Large-Scale Sparse Multiobjective Optimization Problems

In the last two decades, a variety of different types of multiobjective optimization problems (MOPs) have been extensively investigated in the evolutionary computation community. However, most existing evolutionary algorithms encounter difficulties in dealing with MOPs whose Pareto optimal solutions are sparse (i.e., most decision variables of the optimal solutions are zero), especially when the number of decision variables is large. Such large-scale sparse MOPs exist in a wide range of applications, for example, feature selection, which aims to find a small subset of features from a large number of candidate features, or structure optimization of neural networks, whose connections are kept sparse to alleviate overfitting. This paper proposes an evolutionary algorithm for solving large-scale sparse MOPs. The proposed algorithm introduces a new population initialization strategy and new genetic operators that take the sparse nature of the Pareto optimal solutions into consideration, so as to ensure the sparsity of the generated solutions. Moreover, this paper also designs a test suite to assess the performance of the proposed algorithm on large-scale sparse MOPs. The experimental results on the proposed test suite and four application examples demonstrate the superiority of the proposed algorithm over seven existing algorithms in solving large-scale sparse MOPs.
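
The sparsity-aware initialization can be pictured as building each solution from a real-valued vector and a binary mask, so that only a few variables are nonzero. The sketch below uses hypothetical names and a simple rule for the number of nonzero bits; it illustrates the idea only and is not the paper's exact strategy.

    import numpy as np

    def sparse_init(pop_size, n_var, lower, upper, rng):
        """Hypothetical sparse initialization: each solution is the element-wise
        product of a real-valued vector and a binary mask, so most variables are 0."""
        dec = rng.uniform(lower, upper, (pop_size, n_var))   # real-valued part
        mask = np.zeros((pop_size, n_var), dtype=int)
        for i in range(pop_size):
            # Keep only a small, randomly chosen number of nonzero variables.
            k = rng.integers(1, max(2, n_var // 10))
            mask[i, rng.choice(n_var, size=k, replace=False)] = 1
        return dec * mask, mask                              # sparse solutions and their masks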

An Experimental Method to Estimate Running Time of Evolutionary Algorithms for Continuous Optimization

Running time analysis is a fundamental problem of critical importance in evolutionary computation. However, the analysis results have rarely been applied to advanced evolutionary algorithms (EAs) in practice, let alone their variants for continuous optimization. In this paper, an experimental method is proposed for analyzing the running time of EAs that are widely used for solving continuous optimization problems. Based on the Glivenko–Cantelli theorem, the proposed method simulates the distribution of the gain, a quantity introduced by the average gain model to characterize progress during the optimization process. Data-fitting techniques are subsequently adopted to obtain a desired function for further analysis. To verify the validity of the proposed method, experiments were conducted to estimate the upper bounds on the expected first hitting time of various evolution strategies, such as the (1, $\lambda$) evolution strategy, the standard evolution strategy, the covariance matrix adaptation evolution strategy, and its improved variants. The results suggest that all estimated upper bounds are correct. Backed by the proposed method, running-time results can be obtained for state-of-the-art EAs for continuous optimization just as for simplified schemes, which helps bridge the gap between the theoretical foundations and the applications of evolutionary computation.
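
As we read it, the gist of the estimation procedure is to collect one-step gains empirically, fit a smooth average-gain function, and accumulate its reciprocal to bound the expected first hitting time. The sketch below uses hypothetical inputs (a dictionary of gain samples per fitness level) and a simple polynomial fit in log space; it is an illustrative sketch under these assumptions, not the paper's exact procedure.

    import numpy as np

    def estimate_hitting_time(gain_samples_by_level, levels, target, fit_degree=2):
        """Hypothetical sketch: estimate an upper bound on the expected first hitting
        time from empirical average gains, in the spirit of the average gain model.

        gain_samples_by_level : dict mapping a (positive) fitness level to observed one-step gains.
        levels                : decreasing sequence of fitness levels visited by the EA.
        target                : target fitness value to hit.
        """
        # Empirical average gain per level; by the Glivenko-Cantelli theorem the
        # empirical gain distribution converges to the true one as samples grow.
        fs = np.array(sorted(gain_samples_by_level))
        avg_gain = np.array([np.mean(gain_samples_by_level[f]) for f in fs])

        # Fit a smooth average-gain function h(f) via polynomial regression in log space.
        coeffs = np.polyfit(np.log(fs), np.log(np.maximum(avg_gain, 1e-12)), fit_degree)
        h = lambda f: np.exp(np.polyval(coeffs, np.log(f)))

        # Crude discretized surrogate for the average-gain bound on the expected
        # first hitting time: sum the reciprocal fitted gain over visited levels.
        return sum(1.0 / h(f) for f in levels if f > target)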

Surrogate-Assisted Evolutionary Deep Learning Using an End-to-End Random Forest-Based Performance Predictor

Convolutional neural networks (CNNs) have shown remarkable performance in various real-world applications. Unfortunately, the promising performance of CNNs can be achieved only when their architectures are optimally constructed. The architectures of state-of-the-art CNNs are typically handcrafted with extensive expertise in both CNNs and the investigated data, which consequently hampers the widespread adoption of CNNs among less experienced users. Evolutionary deep learning (EDL) is able to automatically design good CNN architectures without much expertise. However, existing EDL algorithms generally evaluate the fitness of a new architecture by training it from scratch, resulting in prohibitive computational cost even when run on high-performance computers. In this paper, an end-to-end offline performance predictor based on the random forest is proposed to accelerate the fitness evaluation in EDL. When integrated into an existing EDL algorithm as a case study, the proposed performance predictor shows promising performance in terms of classification accuracy and consumed computational resources compared with 18 state-of-the-art peer competitors. The proposed performance predictor is also compared with two other representative existing performance predictors. The experimental results show that the proposed performance predictor not only significantly speeds up the fitness evaluations but also achieves the best prediction among the peer performance predictors.
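
Conceptually, an offline random-forest performance predictor is trained on pairs of architecture encodings and their final accuracies, and is then queried instead of training each candidate CNN from scratch. Below is a minimal sketch using scikit-learn's RandomForestRegressor; the encoding scheme and the function names are assumptions made for illustration, not the paper's end-to-end encoder.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_predictor(encodings, accuracies, n_trees=200, seed=0):
        """Hypothetical offline predictor: fit a random forest on
        (architecture encoding, achieved accuracy) pairs collected beforehand."""
        model = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        model.fit(np.asarray(encodings), np.asarray(accuracies))
        return model

    def surrogate_fitness(model, encoding):
        # Used inside the EDL loop in place of training the CNN from scratch:
        # the predicted accuracy serves as the (approximate) fitness.
        return float(model.predict(np.asarray(encoding).reshape(1, -1))[0])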

Novel Prediction Strategies for Dynamic Multiobjective Optimization

This paper proposes a new prediction-based dynamic multiobjective optimization (PBDMO) method, which combines a new prediction-based reaction mechanism with the popular regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) for solving dynamic multiobjective optimization problems. Whenever a change is detected, PBDMO reacts to it effectively by generating three subpopulations based on different strategies. The first subpopulation is created by moving nondominated individuals using a simple linear prediction model with different step sizes. The second subpopulation consists of individuals generated by a novel sampling strategy to improve population convergence as well as distribution. The third subpopulation comprises individuals generated using a shrinking strategy based on the probability distribution of the variables. These subpopulations are then merged to form a population for the new environment. Experiments carried out on a variety of bi- and three-objective benchmark functions demonstrate that the proposed technique has competitive performance compared with several state-of-the-art algorithms.
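
A rough sketch of such a reaction mechanism is given below: it moves nondominated individuals along the estimated centroid shift with several step sizes and adds two further groups that stand in for the sampling and shrinking strategies. All names and the concrete perturbation choices are ours, not PBDMO's.

    import numpy as np

    def react_to_change(nondom_t, nondom_prev, bounds, rng, step_sizes=(0.5, 1.0, 1.5)):
        """Hypothetical sketch of building three subpopulations after a detected change."""
        lo, hi = bounds
        delta = nondom_t.mean(axis=0) - nondom_prev.mean(axis=0)   # estimated PS movement

        # Subpopulation 1: linear prediction of nondominated individuals
        # with different step sizes along the estimated movement direction.
        sub1 = np.vstack([nondom_t + s * delta for s in step_sizes])

        # Subpopulation 2 (stand-in for the sampling strategy): perturb predicted
        # individuals to promote both convergence and distribution.
        sub2 = nondom_t + delta + rng.normal(0.0, 0.05 * (hi - lo), nondom_t.shape)

        # Subpopulation 3 (stand-in for the shrinking strategy): sample each variable
        # around its per-dimension mean with a shrinking standard deviation.
        mu, sigma = nondom_t.mean(axis=0), 0.5 * nondom_t.std(axis=0)
        sub3 = rng.normal(mu, np.maximum(sigma, 1e-12), nondom_t.shape)

        # Merge the subpopulations into the population for the new environment.
        return np.clip(np.vstack([sub1, sub2, sub3]), lo, hi)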

Decomposition-Based Interactive Evolutionary Algorithm for Multiple Objective Optimization

We propose a decomposition-based interactive evolutionary algorithm (EA) for multiple objective optimization. During the evolutionary search, a decision maker (DM) is asked to compare pairs of solutions from the current population. Using Monte Carlo simulation, the proposed algorithm generates from a uniform distribution a set of instances of the preference model that are compatible with such indirect preference information. These instances are incorporated as search directions with the aim of systematically converging the population toward the DM's most preferred region of the Pareto front. The experimental comparison shows that the proposed decomposition-based method outperforms the state-of-the-art interactive counterparts of dominance-based EAs. We also show that the quality of the constructed solutions is highly affected by the form of the incorporated preference model.
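
One simple way to picture the Monte Carlo step, assuming a linear value function as the preference model, is rejection sampling of weight vectors that are consistent with the DM's pairwise comparisons; the accepted instances can then serve as decomposition search directions. The sketch below is illustrative only and uses assumed names and conventions (minimization, weights on the unit simplex).

    import numpy as np

    def sample_compatible_weights(comparisons, n_obj, n_samples, rng, max_tries=100000):
        """Hypothetical sketch: sample instances of a linear preference model that
        are compatible with the DM's pairwise comparisons.

        comparisons : list of (a, b) objective vectors meaning "a preferred over b".
        """
        accepted = []
        for _ in range(max_tries):
            w = rng.dirichlet(np.ones(n_obj))   # uniform sample on the weight simplex
            # Keep only weight vectors that reproduce every stated preference.
            if all(np.dot(w, a) <= np.dot(w, b) for a, b in comparisons):
                accepted.append(w)
                if len(accepted) == n_samples:
                    break
        # The accepted instances are used as search directions in the decomposition.
        return np.array(accepted)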

Enhancing Decomposition-Based Algorithms by Estimation of Distribution for Constrained Optimal Software Product Selection

This paper integrates an estimation of distribution (EoD)-based update operator into decomposition-based multiobjective evolutionary algorithms for binary optimization. The probabilistic model in the update operator is a probability vector, which is adaptively learned from historical information of each subproblem. We show that this update operator can significantly enhance decomposition-based algorithms on a number of benchmark problems. Moreover, we apply the enhanced algorithms to the constrained optimal software product selection (OSPS) problem in the field of search-based software engineering. For this real-world problem, we give its formal definition and then develop a new repair operator based on satisfiability solvers. It is demonstrated by the experimental results that the algorithms equipped with the EoD operator are effective in dealing with this practical problem, particularly for large-scale instances. The interdisciplinary studies in this paper provide a new real-world application scenario for constrained multiobjective binary optimizers and also offer valuable techniques for software engineers in handling the OSPS problem.
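
The EoD-based update operator can be pictured as maintaining, for each subproblem, a probability-of-one vector learned from historical solutions and sampling new binary solutions from it. The sketch below is a generic illustration with assumed names and parameters, not the paper's exact operator.

    import numpy as np

    def learn_probability_vector(history, p_min=0.05, p_max=0.95):
        # history: (H, n) array of good binary solutions recorded for one subproblem;
        # the per-bit frequency of ones gives an adaptively learned probability vector,
        # clipped to avoid premature convergence to 0 or 1.
        return np.clip(history.mean(axis=0), p_min, p_max)

    def eod_sample(p, parent, rng, beta=0.5):
        # Blend the learned probability vector with the current parent solution and
        # sample a new binary solution bit by bit (a generic EoD-style update).
        q = (1.0 - beta) * parent + beta * p
        return (rng.random(len(p)) < q).astype(int)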

IEEE Transactions on Evolutionary Computation Society Information

Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

Evolutionary Development of Growing Generic Sorting Networks by Means of Rewriting Systems

This paper presents an evolutionary developmental method for the design of arbitrarily growing sorting networks. The developmental model is based on a parallel rewriting system (a grammar) that is specified by an alphabet, an initial string (an axiom), and a set of rewriting rules. The rewriting process iteratively expands the axiom in order to develop more complex strings over a series of development steps (i.e., derivations in the grammar). A mapping function is introduced that converts the strings into comparator structures, the building blocks of sorting networks. The construction of the networks is performed in such a way that a given (initial) sorting network grows progressively by adding further building blocks within each development step. For a given (fixed) alphabet, the axiom together with the rewriting rules is the subject of the evolutionary search. It is shown that suitable grammars can be evolved for the construction of arbitrarily large sorting networks that grow with various given sizes of development steps. Moreover, the resulting networks exhibit significantly better properties (number of comparators and delay) than those obtained by similar existing methods.
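
The developmental model can be illustrated as an L-system-like rewriting loop followed by a symbol-to-comparator mapping. The sketch below uses a made-up grammar and an arbitrary mapping function purely to show the mechanics of growth; it is not an evolved grammar or the paper's mapping.

    def develop(axiom, rules, steps):
        """Hypothetical sketch of the developmental model: a parallel rewriting
        system expands the axiom for a given number of development steps."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)   # rewrite every symbol in parallel
        return s

    def to_comparators(string, n_inputs):
        """Map each developed symbol to a comparator (i, j); this mapping is an
        illustrative stand-in for the one introduced in the paper."""
        network, i = [], 0
        for ch in string:
            j = (i + 1 + (ord(ch) % (n_inputs - 1))) % n_inputs
            network.append((min(i, j), max(i, j)))
            i = (i + 1) % n_inputs
        return network

    # Example: grow a comparator structure over three development steps.
    grammar = {"A": "AB", "B": "BC", "C": "A"}          # hypothetical evolved rules
    print(to_comparators(develop("A", grammar, 3), n_inputs=4))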

Efficient Generalized Surrogate-Assisted Evolutionary Algorithm for High-Dimensional Expensive Problems

Engineering optimization problems usually involve computationally expensive simulations and many design variables. Solving such problems efficiently remains a major challenge. In this paper, a generalized surrogate-assisted evolutionary algorithm is proposed to solve such high-dimensional expensive problems. The proposed algorithm is built on the optimization framework of the genetic algorithm (GA) and employs a surrogate-based trust-region local search method, a surrogate-guided GA (SGA) updating mechanism with a neighbor-region partition strategy, and a prescreening strategy based on the expected improvement infilling criterion of a simplified Kriging model. The SGA updating mechanism is a special characteristic of the proposed algorithm: it fuses the surrogates with the evolutionary algorithm. The neighbor-region partition strategy effectively retains the diversity of the population. Moreover, the multiple surrogates used in the SGA updating mechanism make the optimization robust. The proposed algorithm is validated on several high-dimensional numerical benchmark problems with dimensions varying from 30 to 100, and an overall comparison is made between the proposed algorithm and other optimization algorithms. The results show that the proposed algorithm is efficient and promising for optimizing high-dimensional expensive problems.
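
The prescreening step can be illustrated as computing the expected improvement (EI) from a surrogate's predicted mean and standard deviation and keeping only the most promising offspring for exact evaluation. The sketch below assumes a surrogate callable returning (mean, std) arrays and uses hypothetical names; it is a minimal illustration, not the paper's implementation.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI of candidates with predicted mean mu and standard deviation sigma,
        relative to the best observed objective value f_best (minimization)."""
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def prescreen(candidates, surrogate, f_best, k=1):
        """Hypothetical prescreening: only the k offspring with the largest EI
        under the (simplified Kriging) surrogate are evaluated exactly.

        surrogate(X) is assumed to return (mu, sigma) arrays for the candidates.
        """
        mu, sigma = surrogate(np.asarray(candidates))
        ei = expected_improvement(mu, sigma, f_best)
        return np.argsort(-ei)[:k]      # indices of candidates to evaluate exactly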