A Kriging-Assisted Two-Archive Evolutionary Algorithm for Expensive Many-Objective Optimization

Only a small number of function evaluations can be afforded in many real-world multiobjective optimization problems (MOPs) where the function evaluations are economically/computationally expensive. Such problems pose great challenges to most existing multiobjective evolutionary algorithms (EAs), which require a large number of function evaluations for optimization. Surrogate-assisted EAs (SAEAs) have been employed to solve expensive MOPs. Specifically, a certain number of expensive function evaluations are used to build computationally cheap surrogate models for assisting the optimization process without conducting expensive function evaluations. The infill sampling criteria in most existing SAEAs take all requirements on convergence, diversity, and model uncertainty into account, which, however, is not the most efficient use of the limited computational budget. Thus, this article proposes a Kriging-assisted two-archive EA for expensive many-objective optimization. The proposed algorithm uses one influential point-insensitive model to approximate each objective function. Moreover, an adaptive infill criterion that identifies the most important requirement on convergence, diversity, or uncertainty is proposed to determine an appropriate sampling strategy for reevaluations using the expensive objective functions. The experimental results on a set of expensive multi/many-objective test problems have demonstrated its superiority over five state-of-the-art SAEAs.
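
As an illustration of the adaptive infill idea, the sketch below picks one of three sampling criteria (convergence, diversity, or model uncertainty) instead of blending all of them. This is a minimal sketch, not the authors' KTA2 procedure; the state-detection heuristic, function names, and thresholds are assumptions.

```python
# Minimal sketch of an adaptive infill step (illustrative only, not KTA2).
import numpy as np

def adaptive_infill(candidates, pred_means, pred_stds, archive_objs, budget=5):
    """Select `budget` candidates for expensive re-evaluation.

    candidates   : (n, d) decision vectors proposed by the surrogate-driven search
    pred_means   : (n, m) Kriging mean predictions of the m objectives (minimization)
    pred_stds    : (n, m) Kriging standard deviations (model uncertainty)
    archive_objs : (k, m) objectives of already evaluated solutions
    """
    uncertainty = pred_stds.mean(axis=1)

    # Crude state detection (assumption): if the surrogates are very unsure,
    # sample for uncertainty; if the predicted front barely improves on the
    # archive, sample for convergence; otherwise sample for diversity.
    if uncertainty.mean() > 0.5 * np.abs(pred_means).mean():
        scores = uncertainty                                  # uncertainty sampling
    elif pred_means.min(axis=0).mean() >= archive_objs.min(axis=0).mean():
        scores = -pred_means.sum(axis=1)                      # convergence sampling
    else:
        # Diversity sampling: prefer points far from the evaluated archive.
        dists = np.linalg.norm(
            pred_means[:, None, :] - archive_objs[None, :, :], axis=2)
        scores = dists.min(axis=1)

    chosen = np.argsort(scores)[::-1][:budget]                # highest score first
    return candidates[chosen]
```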

A Survey of Normalization Methods in Multiobjective Evolutionary Algorithms

A real-world multiobjective optimization problem (MOP) usually has differently scaled objectives. Objective space normalization has been widely used in multiobjective evolutionary algorithms (MOEAs). Without objective space normalization, most MOEAs may fail to obtain uniformly distributed and well-converged solutions on MOPs with differently scaled objectives. Objective space normalization requires information on the Pareto front (PF) range, which can be acquired from the ideal and nadir points. Since the ideal and nadir points of a real-world MOP are usually not known a priori, many recently proposed MOEAs tend to estimate and update the two points adaptively during the evolutionary process. Different methods to estimate the ideal and nadir points have been proposed in the literature. Due to inaccurate estimation of the two points (i.e., inaccurate estimation of the PF range), objective space normalization may deteriorate the performance of an MOEA. Different methods have also been proposed to alleviate the negative effects of inaccurate estimation. This article presents a comprehensive survey of objective space normalization methods, including ideal point estimation methods, nadir point estimation methods, and methods for utilizing the estimated PF range.
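
The normalization step the survey is concerned with can be sketched as follows; this is a minimal example assuming the common ideal/nadir-based mapping, with the epsilon guard and function names introduced purely for illustration.

```python
# Minimal sketch of objective space normalization with estimated ideal/nadir points.
import numpy as np

def estimate_ideal_nadir(objs):
    """Estimate the ideal and nadir points from the current population's objectives.
    Many MOEAs refine these estimates adaptively during the run."""
    return objs.min(axis=0), objs.max(axis=0)

def normalize(objs, ideal, nadir, eps=1e-12):
    """Map each objective into roughly [0, 1] using the estimated PF range."""
    return (objs - ideal) / np.maximum(nadir - ideal, eps)

# Example: two differently scaled objectives.
objs = np.array([[0.1, 1000.0], [0.9, 200.0], [0.5, 600.0]])
ideal, nadir = estimate_ideal_nadir(objs)
print(normalize(objs, ideal, nadir))
```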

Multitree Genetic Programming With New Operators for Transfer Learning in Symbolic Regression With Incomplete Data

Lack of knowledge is a common consequence of data incompleteness when learning from real-world data. To deal with such a situation, this work utilizes transfer learning (TL) to reuse knowledge from different (yet related) but complete domains. Due to its powerful feature construction ability, genetic programming (GP) is used to construct feature-based transformations that map the feature space of the source domain to that of the target domain such that their differences are reduced. In particular, this work proposes a new multitree GP-based feature construction approach to TL in symbolic regression with missing values. It transfers knowledge related to the importance of the features and instances in the source domain to the target domain to improve the learning performance. Moreover, new genetic operators are developed to encourage minimizing the distribution discrepancy between the transformed domain and the target domain. A new probabilistic crossover is developed to make the well-constructed trees in the individuals more likely to be mated than the other trees. A new mutation operator is designed to make the poorly constructed trees more likely to be mutated. The experimental results show that the proposed method not only achieves better performance compared with different traditional learning methods but also outperforms two recent TL methods on real-world data sets with various incompleteness levels and learning scenarios.
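
A minimal sketch of such asymmetric operator probabilities follows; it is not the authors' operators, and the per-tree quality scores and the proportional weighting scheme are assumptions made for illustration.

```python
# Illustrative sketch: biasing which tree inside a multitree GP individual
# takes part in crossover or mutation, based on assumed per-tree quality scores.
import random

def pick_tree_for_crossover(tree_scores):
    """Better-constructed trees (higher positive score) are more likely to be mated."""
    total = sum(tree_scores)
    weights = [s / total for s in tree_scores]
    return random.choices(range(len(tree_scores)), weights=weights, k=1)[0]

def pick_tree_for_mutation(tree_scores):
    """Poorly constructed trees (lower score) are more likely to be mutated."""
    inverted = [max(tree_scores) - s + 1e-9 for s in tree_scores]
    total = sum(inverted)
    weights = [w / total for w in inverted]
    return random.choices(range(len(tree_scores)), weights=weights, k=1)[0]

# Example: quality scores of the three trees in one multitree individual.
scores = [0.9, 0.4, 0.1]
print(pick_tree_for_crossover(scores), pick_tree_for_mutation(scores))
```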

Autoencoding With a Classifier System

Autoencoders are data-specific compression algorithms learned automatically from examples. The predominant approach has been to construct single large global models that cover the domain. However, training and evaluating models of increasing size comes at the price of additional time and computational cost. Conditional computation, sparsity, and model pruning techniques can reduce these costs while maintaining performance. Learning classifier systems (LCSs) are a framework for adaptively subdividing input spaces into an ensemble of simpler local approximations that together cover the domain. LCSs perform conditional computation through the use of a population of individual gating/guarding components, each associated with a local approximation. This article explores the use of an LCS to adaptively decompose the input domain into a collection of small autoencoders, where local solutions of different complexity may emerge. In addition to the benefits in convergence time and computational cost, it is shown that both the code size and the resulting decoder computational cost can be reduced compared with the equivalent global model.
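
A minimal sketch of LCS-style conditional computation with local autoencoders is given below; it is not the article's system, and the hyper-rectangle guards and tiny linear encoders/decoders are simplifying assumptions.

```python
# Illustrative sketch: a population of (guard, local autoencoder) rules where
# only the rules whose guard matches an input are actually computed.
import numpy as np

class Rule:
    def __init__(self, lower, upper, code_size):
        self.lower, self.upper = lower, upper            # guard: a hyper-rectangle
        d = len(lower)
        self.enc = np.random.randn(d, code_size) * 0.1   # local encoder weights
        self.dec = np.random.randn(code_size, d) * 0.1   # local decoder weights

    def matches(self, x):
        return np.all(x >= self.lower) and np.all(x <= self.upper)

    def reconstruct(self, x):
        return x @ self.enc @ self.dec                   # tiny linear autoencoder

def ensemble_reconstruct(population, x):
    """Conditional computation: only matching rules run; their outputs are averaged."""
    outs = [r.reconstruct(x) for r in population if r.matches(x)]
    return np.mean(outs, axis=0) if outs else np.zeros_like(x)
```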

Weighted Indicator-Based Evolutionary Algorithm for Multimodal Multiobjective Optimization

Multimodal multiobjective problems (MMOPs) arise frequently in the real world, in which multiple Pareto-optimal solution (PS) sets correspond to the same point on the Pareto front (PF). Traditional multiobjective evolutionary algorithms (MOEAs) show poor performance in solving MMOPs due to a lack of diversity maintenance in the decision space. Thus, recently, many multimodal MOEAs (MMEAs) have been proposed. However, for most existing MMEAs, the convergence performance in the objective space does not meet expectations. In addition, many of them cannot always obtain all equivalent Pareto solution sets. To address these issues, this study proposes an MMEA based on a weighted indicator, termed MMEA-WI. The algorithm integrates the diversity information of solutions in the decision space into an objective space performance indicator to maintain the diversity in the decision space and introduces a convergence archive to ensure a more effective approximation of the PF. These strategies can readily be applied to other indicator-based MOEAs. The experimental results show that MMEA-WI outperforms some state-of-the-art MMEAs on the chosen benchmark problems in terms of the inverted generational distance (IGD) and IGD in the decision space (IGDX) metrics.
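
One way to picture a weighted indicator of this kind is sketched below; it is a rough illustration rather than MMEA-WI itself, and the nearest-neighbor sparsity measures and the combination rule are assumptions.

```python
# Illustrative sketch: folding decision-space diversity into an objective-space
# sparsity measure so equivalent Pareto sets in different decision regions survive.
import numpy as np

def nearest_neighbor_dist(points):
    """Distance from each point to its nearest neighbor within the same set."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def weighted_indicator(objs, decs):
    obj_iso = nearest_neighbor_dist(objs)        # objective-space sparsity
    dec_iso = nearest_neighbor_dist(decs)        # decision-space sparsity
    dec_iso = dec_iso / (dec_iso.max() + 1e-12)  # scale-free weight in [0, 1]
    return obj_iso * (1.0 + dec_iso)             # larger is better for survival
```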

Partial Evaluation Strategies for Expensive Evolutionary Constrained Optimization

Constrained optimization problems (COPs) are frequently encountered in real-world design applications. For some COPs, the evaluation of the objective(s) and/or constraint(s) may involve significant computational/temporal/financial cost. Such problems are referred to as expensive COPs (ECOPs). Surrogate modeling has been widely used in conjunction with optimization methods for such problems, wherein the search is partially driven by an approximate function instead of true expensive evaluations. However, for any true evaluation, nearly all existing methods compute all objective and constraint values together as one batch. Such full evaluation approaches may be inefficient for cases where the objective/constraint(s) can be evaluated independently of each other. In this article, we propose and study a constraint handling strategy for ECOPs using partial evaluations. The constraints are evaluated in a sequence determined by their likelihood of being violated, and the evaluation is aborted as soon as a constraint violation is encountered. Modified ranking strategies are introduced to effectively rank the solutions using the limited information thus obtained, while saving a significant number of function evaluations. The proposed algorithm is compared with a number of its variants to establish the utility of its key components systematically. Numerical experiments and benchmarking are conducted on a range of mathematical and engineering design problems to demonstrate the efficacy of the approach compared to state-of-the-art evolutionary optimization approaches.
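
The partial-evaluation idea can be sketched as follows; this is a minimal illustration in which the article's ranking strategies are not reproduced, and the violation counters used as the likelihood estimate are an assumption.

```python
# Illustrative sketch: evaluate constraints in decreasing order of estimated
# violation likelihood and stop at the first violation, saving the remaining
# expensive calls.
def partial_evaluate(x, constraints, violation_counts):
    """constraints: list of expensive functions where g_i(x) <= 0 means satisfied."""
    order = sorted(range(len(constraints)),
                   key=lambda i: violation_counts[i], reverse=True)
    evaluated = {}
    for i in order:
        g = constraints[i](x)          # one expensive evaluation
        evaluated[i] = g
        if g > 0:                      # violated: abort the remaining evaluations
            violation_counts[i] += 1
            return False, evaluated
    return True, evaluated             # feasible w.r.t. all constraints

# Example with cheap stand-in constraints.
cons = [lambda x: x[0] - 1.0, lambda x: -x[1]]
counts = [0, 0]
print(partial_evaluate([0.5, 2.0], cons, counts))
```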

Identifying Influential Spreaders in Social Networks Through Discrete Moth-Flame Optimization

Influence maximization in a social network refers to the selection of node sets that support the fastest and broadest propagation of information under a chosen transmission model. The efficient identification of such influence-maximizing groups is an active area of research with diverse practical relevance. Greedy-based methods can provide solutions of reliable accuracy, but the computational cost of the required Monte Carlo simulations renders them infeasible for large networks. Meanwhile, although network structure-based centrality methods can be efficient, they typically achieve poor recognition accuracy. Here, we establish an effective influence assessment model based both on the total valuation and variance in valuation of neighbor nodes, motivated by the possibility of unreliable communication channels. We then develop a discrete moth-flame optimization method to search for influence-maximizing node sets, using a local crossover and mutation evolution scheme atop the canonical moth position updates. To accelerate convergence, a search area selection scheme derived from a degree-based heuristic is used. The experimental results on five real-world social networks, comparing our proposed method against several alternatives in the current literature, indicate that our approach is effective and robust in tackling the influence maximization problem.
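
A minimal sketch of a neighbor-valuation influence score of this kind is shown below; the article's exact model and the discrete moth-flame search are not reproduced, and the variance penalty weight alpha is an assumption.

```python
# Illustrative sketch: score a node by the total valuation of its neighbors,
# penalized by the variance of those valuations (unreliable-channel motivation).
import statistics

def influence_score(node, adjacency, valuation, alpha=0.5):
    """adjacency: dict node -> list of neighbors; valuation: dict node -> float."""
    vals = [valuation[n] for n in adjacency.get(node, [])]
    if not vals:
        return 0.0
    total = sum(vals)
    spread = statistics.pvariance(vals) if len(vals) > 1 else 0.0
    return total - alpha * spread

# Example on a toy graph.
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
val = {"a": 1.0, "b": 0.8, "c": 0.2}
print(influence_score("a", adj, val))
```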

Set Theory-Based Operator Design in Evolutionary Algorithms for Solving Knapsack Problems

Knapsack problems (KPs) are famous combinatorial optimization problems that can be solved by evolutionary algorithms (EAs). In such methods, a key step is to produce new solutions in each generation. However, traditional EAs cannot guarantee the feasibility of the new solutions, which makes the methods ineffective or inefficient. Since directly generating a feasible solution is difficult, a practical approach is to generate a candidate solution and, if necessary, transform it into a feasible one, or ideally into a locally optimal one. Essentially, this transforming process maps an element of the solution set to an element of the feasible solution set, which can be analyzed and optimized from a new perspective, i.e., set theory (ST). In this article, we provide new explanations of the transforming process of solutions based on ST and summarize the properties that the transforming process should satisfy. Furthermore, based on the proposed concepts and theories, we put forward ideas for improving the transforming process. Consequently, some new operators for KPs are proposed. Experimental results demonstrate the superiority of the proposed operators.
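
For context, a standard greedy repair operator illustrates what such a transforming process does for the 0/1 knapsack; this is a common textbook repair used here only as an example, not the set-theory-based operators proposed in the article.

```python
# Illustrative sketch: map an arbitrary candidate to a feasible (and then
# locally improved) 0/1 knapsack solution via value/weight-ratio repair.
def repair(bits, values, weights, capacity):
    bits = list(bits)
    order = sorted(range(len(bits)), key=lambda i: values[i] / weights[i])

    # Drop the worst value/weight items until the solution becomes feasible.
    load = sum(w for b, w in zip(bits, weights) if b)
    for i in order:
        if load <= capacity:
            break
        if bits[i]:
            bits[i] = 0
            load -= weights[i]

    # Greedily add remaining items that still fit (local improvement).
    for i in reversed(order):
        if not bits[i] and load + weights[i] <= capacity:
            bits[i] = 1
            load += weights[i]
    return bits

# Example: an infeasible candidate is mapped into the feasible set.
print(repair([1, 1, 1], values=[10, 7, 3], weights=[5, 4, 3], capacity=8))
```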