Space-Partitioned ND-Trees for the Dynamic Nondominance Problem

We present techniques for improving the efficiency of ND-trees in the solution of the dynamic nondominance problem, i.e., for building and maintaining a Pareto archive. We propose algorithms for (re)building a tree from a set of nondominated points, either as a space-partitioned ND-tree or by a clustering approach. Numerical experiments confirm that rebuilding the tree at intervals, combined with modified strategies for updating the lower and upper bounds in each node and for computing the distances of points to subtrees in between the rebuilds, can lead to significant speedups over using plain ND-tree-based updates throughout, in particular for higher dimensions.
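
Illustrative sketch (not the ND-tree itself): the dynamic nondominance problem amounts to maintaining a Pareto archive under insertions. The following minimal linear-scan archive, under the assumption of minimization, is the baseline that tree-based structures such as the ND-tree are designed to accelerate; all names are illustrative.

```python
# Minimal linear-scan Pareto archive for the dynamic nondominance problem.
# This is the baseline that tree-based structures such as the ND-tree
# accelerate; function and class names are illustrative, not from the article.

def dominates(a, b):
    """Return True if point a dominates b (minimization) and a != b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class LinearParetoArchive:
    def __init__(self):
        self.points = []

    def update(self, p):
        """Try to insert p; return True if p was accepted (nondominated)."""
        for q in self.points:
            if dominates(q, p) or q == p:
                return False                 # p is dominated or a duplicate
        # p is nondominated: remove every archived point it dominates
        self.points = [q for q in self.points if not dominates(p, q)]
        self.points.append(p)
        return True

archive = LinearParetoArchive()
for pt in [(1.0, 4.0), (2.0, 3.0), (1.5, 3.5), (0.5, 5.0)]:
    archive.update(pt)
print(archive.points)   # only mutually nondominated points remain
```

An ND-tree replaces the linear scan with bounding-box tests on subtrees; the rebuilding strategies of the article additionally restructure this tree at intervals.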

Probabilistic Selection Approaches in Decomposition-Based Evolutionary Algorithms for Offline Data-Driven Multiobjective Optimization

In offline data-driven multiobjective optimization, no new data are available during the optimization process. Approximation models, also known as surrogates, are built using the provided offline data. A multiobjective evolutionary algorithm can then be used to find solutions with these surrogates. The accuracy of the approximated solutions depends on the surrogates, and the approximations typically involve uncertainties. In this article, we propose probabilistic selection approaches that utilize the uncertainty information of the Kriging models (as surrogates) to improve the solution process in offline data-driven multiobjective optimization. These approaches are designed for decomposition-based multiobjective evolutionary algorithms and can, thus, handle a large number of objectives. The proposed approaches were tested on distance-based visualizable test problems and the DTLZ suite. They produced solutions with a greater hypervolume and a lower root mean squared error than generic approaches and a transfer learning approach that do not use uncertainty information.
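
Illustrative sketch (not necessarily the article's exact criterion): with a Kriging surrogate the predictive mean and standard deviation of each objective are available, so for a weighted-sum subproblem of a decomposition-based algorithm one can compute the probability that one candidate beats another and select stochastically. The weighted-sum scalarization and the pairwise criterion below are assumptions made for illustration.

```python
# Hedged sketch of probabilistic selection driven by Kriging uncertainty in a
# decomposition-based MOEA. Assumes independent Gaussian predictions per
# objective and a weighted-sum scalarization; both are illustrative choices.
import numpy as np
from scipy.stats import norm

def scalarized_gaussian(mean, std, weights):
    """Mean/std of a weighted-sum scalarization under independent Gaussians."""
    mu = float(np.dot(weights, mean))
    sigma = float(np.sqrt(np.dot(weights**2, std**2)))
    return mu, sigma

def prob_better(mean_a, std_a, mean_b, std_b, weights):
    """P[g(a) < g(b)] for the scalarized values g under the surrogate model."""
    mu_a, s_a = scalarized_gaussian(mean_a, std_a, weights)
    mu_b, s_b = scalarized_gaussian(mean_b, std_b, weights)
    denom = np.sqrt(s_a**2 + s_b**2) + 1e-12
    return norm.cdf((mu_b - mu_a) / denom)

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])                          # weight vector of one subproblem
mean_a, std_a = np.array([0.3, 0.6]), np.array([0.05, 0.10])
mean_b, std_b = np.array([0.4, 0.5]), np.array([0.20, 0.20])
p = prob_better(mean_a, std_a, mean_b, std_b, w)
keep_a = rng.random() < p                         # probabilistic, not greedy, choice
print(f"P[a better than b] = {p:.3f}, keep a: {keep_a}")
```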

Recombinator-$k$-Means: An Evolutionary Algorithm That Exploits $k$-Means++ for Recombination

We introduce an evolutionary algorithm called recombinator-$k$-means for optimizing the highly nonconvex $k$-means problem. Its defining feature is that its crossover step involves all the members of the current generation, stochastically recombining them with a repurposed variant of the $k$-means++ seeding algorithm. The recombination also uses a reweighting mechanism that realizes a progressively sharper stochastic selection policy and ensures that the population eventually coalesces into a single solution. We compare this scheme with a state-of-the-art alternative, a more standard genetic algorithm with deterministic pairwise-nearest-neighbor crossover and an elitist selection policy, of which we also provide an augmented and efficient implementation. Extensive tests on large and challenging datasets (both synthetic and real world) show that for fixed population sizes recombinator-$k$-means is generally superior in terms of the optimization objective, at the cost of a more expensive crossover step. When adjusting the population sizes of the two algorithms to match their running times, we find that for short times the (augmented) pairwise-nearest-neighbor method is always superior, while at longer times recombinator-$k$-means will match it and, on the most difficult examples, take over. We conclude that the reweighted whole-population recombination is more costly but generally better at escaping local minima. Moreover, it is algorithmically simpler and more general (it could be applied even to $k$-medians or $k$-medoids, for example). Our implementations are publicly available at https://github.com/carlobaldassi/RecombinatorKMeans.jl.
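
Illustrative sketch (a simplification of the described crossover, not the reference implementation linked above): pool the centroids of every configuration in the population, weight them by solution quality, and draw $k$ new centroids from the pool with a $k$-means++-style $D^2$ sampling. The softmax-of-negative-cost weighting shown here is an assumption standing in for the article's reweighting mechanism.

```python
# Hedged sketch of whole-population recombination via k-means++-style seeding
# over the pooled centroids of all population members. Weighting is illustrative.
import numpy as np

def kmeanspp_over_pool(pool, pool_w, k, rng):
    """D^2-weighted seeding restricted to the pooled centroids."""
    centers = [pool[rng.choice(len(pool), p=pool_w / pool_w.sum())]]
    for _ in range(k - 1):
        d2 = np.min(((pool[:, None, :] - np.array(centers)[None, :, :])**2).sum(-1), axis=1)
        prob = pool_w * d2
        centers.append(pool[rng.choice(len(pool), p=prob / prob.sum())])
    return np.array(centers)

rng = np.random.default_rng(1)
k, dim, pop_size = 3, 2, 4
population = [rng.normal(size=(k, dim)) for _ in range(pop_size)]   # k centroids each
costs = rng.random(pop_size)                                        # lower is better
weights = np.exp(-5.0 * (costs - costs.min()))                      # sharpened over time
pool = np.vstack(population)                                        # all centroids, all members
pool_w = np.repeat(weights, k)
child = kmeanspp_over_pool(pool, pool_w, k, rng)
print(child)   # a recombined set of k centroids drawn from the whole population
```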

Expensive Multiobjective Optimization by Relation Learning and Prediction

Expensive multiobjective optimization problems pose great challenges to evolutionary algorithms due to their costly evaluations. Building cheap surrogate models to replace the expensive real models has proven to be a practical way to reduce the number of costly evaluations. Supervised learning techniques from the machine learning community have been widely applied to build either regressors, which approximate the fitness values of candidate solutions, or classifiers, which estimate the categories of candidate solutions. Considering the characteristics of the data produced in optimization, this article proposes a new surrogate model, called a relation model, for evolutionary multiobjective optimization. Instead of estimating the qualities of candidate solutions directly, the relation model tries to estimate the relationship between a pair of solutions $\langle \mathbf{x}, \mathbf{y}\rangle$, i.e., $\mathbf{x}$ dominates $\mathbf{y}$, $\mathbf{x}$ is dominated by $\mathbf{y}$, or $\mathbf{x}$ and $\mathbf{y}$ are mutually nondominated, in the case of multiobjective optimization. To implement this idea, first a balanced training set is prepared, then a classifier is built on the training data set to learn the relationship, and finally, the classifier with a voting-scoring strategy is applied to estimate the relationship between the candidate solutions and the parent solutions. In this way, the promising candidate solutions are recognized and evaluated by the real models. The new approach is applied to three well-known benchmark suites and two real-world applications, and the experimental results suggest that the proposed method outperforms some state-of-the-art methods based on regression and classification models on the given instances.
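
Illustrative sketch (not the article's full pipeline): the core of a relation model is labeling pairs of evaluated solutions by their Pareto relation and training an off-the-shelf classifier on the concatenated pair features. Class balancing and the voting-scoring step are omitted; the toy objectives and the use of a random forest classifier are assumptions.

```python
# Hedged sketch of the relation-model idea: label solution pairs by their
# Pareto relation and train a classifier on pair features (x concatenated
# with y). Toy data; the voting-scoring stage of the article is not shown.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def relation(fx, fy):
    """0: x dominates y, 1: y dominates x, 2: mutually nondominated (minimization)."""
    if all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy)):
        return 0
    if all(b <= a for a, b in zip(fx, fy)) and any(b < a for a, b in zip(fx, fy)):
        return 1
    return 2

rng = np.random.default_rng(0)
X = rng.random((40, 5))                                            # evaluated decision vectors
F = np.stack([X[:, :2].sum(1), (1 - X[:, :2]).sum(1)], axis=1)     # two toy conflicting objectives

pairs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if i != j:
            pairs.append(np.concatenate([X[i], X[j]]))
            labels.append(relation(F[i], F[j]))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(pairs, labels)
# At search time, candidates are scored against the parents via the predicted
# relations, and only the most promising ones go to the expensive evaluation.
print(clf.predict([np.concatenate([X[0], X[1]])]))
```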

An Approximated Gradient Sign Method Using Differential Evolution for Black-Box Adversarial Attack

Recent studies show that deep neural networks are vulnerable to adversarial attacks in the form of subtle perturbations to the input image, which lead the model to output wrong predictions. Such attacks can easily succeed with existing white-box attack methods, where the perturbation is calculated based on the gradient of the target network. Unfortunately, the gradient is often unavailable in real-world scenarios, which makes black-box adversarial attack problems practical and challenging. In fact, they can be formulated as high-dimensional black-box optimization problems at the pixel level. Although evolutionary algorithms are well known for solving black-box optimization problems, they cannot efficiently deal with the high-dimensional decision space. Therefore, we propose an approximated gradient sign method using differential evolution (DE) for solving black-box adversarial attack problems. Unlike most existing methods, the proposed method searches for the gradient sign rather than the perturbation itself with a DE algorithm. In addition, we transform the pixel-based decision space into a dimension-reduced decision space by combining the pixel differences from the input image to neighbor images, and two different techniques for selecting neighbor images are introduced to build the transformed decision space. Furthermore, six variants of the proposed method are designed according to the different neighborhood selection and optimization search strategies. Finally, the performance of the proposed method is compared with a number of state-of-the-art adversarial attack algorithms on the CIFAR-10 and ImageNet datasets. The experimental results suggest that the proposed method shows superior performance for solving black-box adversarial attack problems, especially nontargeted attack problems.
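
Illustrative sketch (assumptions throughout, not the article's exact method): the perturbation direction is taken as the sign of a weighted combination of differences between the input image and a few neighbor images, and the low-dimensional weights are searched with differential evolution against a black-box loss. The "model" below is a random linear placeholder standing in for the attacked network.

```python
# Hedged sketch of a reduced-space, gradient-sign black-box attack searched
# with differential evolution. Toy image size and a placeholder model.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
H = W = 8
x = rng.random((H, W))                       # input image (toy resolution)
neighbors = rng.random((4, H, W))            # neighbor images defining the reduced basis
basis = neighbors - x                        # pixel-difference directions
eps = 0.05                                   # L_inf perturbation budget

w_true, w_other = rng.random(H * W), rng.random(H * W)   # fixed placeholder "network"

def black_box_margin(img):
    """Placeholder for the attacked model: true-class margin (lower = better attack)."""
    return float((w_true - w_other) @ img.ravel())

def attack_loss(coeffs):
    g_hat = np.tensordot(coeffs, basis, axes=1)          # approximated gradient
    x_adv = np.clip(x + eps * np.sign(g_hat), 0.0, 1.0)  # FGSM-like step with estimated sign
    return black_box_margin(x_adv)                       # only black-box queries are used

result = differential_evolution(attack_loss, bounds=[(-1, 1)] * len(basis),
                                maxiter=20, popsize=10, seed=0, polish=False)
print("best coefficients:", np.round(result.x, 3))
```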

A Novel Dual-Stage Dual-Population Evolutionary Algorithm for Constrained Multiobjective Optimization

In addition to the search for feasible solutions, the utilization of informative infeasible solutions is important for solving constrained multiobjective optimization problems (CMOPs). However, most existing constrained multiobjective evolutionary algorithms (CMOEAs) cannot effectively explore and exploit such solutions and, therefore, exhibit poor performance on problems with large infeasible regions. To address this issue, this article proposes a novel method, called DD-CMOEA, which features dual stages (i.e., exploration and exploitation) and dual populations. Specifically, the two populations, called mainPop and auxPop, first evolve individually with and without considering the constraints, responsible for exploring feasible and infeasible solutions, respectively. Then, in the exploitation stage, mainPop provides information about the location of feasible regions, which helps auxPop find and exploit the surrounding infeasible solutions. The promising infeasible solutions obtained by auxPop in turn help mainPop converge better toward the Pareto-optimal front. Extensive experiments on three well-known test suites and a real-world case study fully demonstrate that DD-CMOEA is more competitive than five state-of-the-art CMOEAs.
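
Illustrative sketch (only the two comparison rules, not DD-CMOEA's full environmental selection): mainPop ranks solutions with a feasibility-first, constrained-domination rule, while auxPop ignores the constraints and ranks by plain Pareto dominance, which is what lets it retain informative infeasible solutions.

```python
# Hedged sketch of the dual-population comparison rules (minimization,
# inequality constraints g_i(x) <= 0). Names are illustrative.
def violation(g):
    """Total constraint violation."""
    return sum(max(0.0, gi) for gi in g)

def pareto_dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def main_pop_better(fa, ga, fb, gb):
    """Constrained comparison: feasibility first, then Pareto dominance."""
    va, vb = violation(ga), violation(gb)
    if va == 0.0 and vb > 0.0:
        return True
    if va > 0.0 and vb > 0.0:
        return va < vb
    if va > 0.0 and vb == 0.0:
        return False
    return pareto_dominates(fa, fb)

def aux_pop_better(fa, ga, fb, gb):
    """Unconstrained comparison: constraints are ignored."""
    return pareto_dominates(fa, fb)

# a is slightly infeasible but much better in the objectives; b is feasible but poor.
fa, ga = (0.2, 0.3), (0.01,)
fb, gb = (0.9, 0.8), (0.0,)
print(main_pop_better(fa, ga, fb, gb))   # False: feasibility wins in mainPop
print(aux_pop_better(fa, ga, fb, gb))    # True: auxPop keeps the informative infeasible point
```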

Enhanced Innovized Progress Operator for Evolutionary Multi- and Many-Objective Optimization

Innovization is the task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in multi- and many-objective optimization problems. A recent study has shown that a chronological sequence of nondominated solutions obtained along the successive generations of an optimizer possesses salient patterns that can be learnt using a machine learning (ML) model and can help the offspring solutions progress in useful directions. This article enhances each constitutive module of the above approach, including novel interventions for managing the convergence-diversity tradeoff while mapping solutions from the previous generation to the current one; the use of a computationally more efficient ML method, namely, random forest (RF); and changes to the manner and extent to which the learnt ML model is utilized to advance the offspring. The proposed modules constitute what is called the enhanced innovized progress (IP2) operator. To investigate the search efficacy provided by the IP2 operator, it is integrated with multi- and many-objective optimization algorithms, such as NSGA-II, NSGA-III, MOEA/D, and MaOEA-IGD, and tested on a range of two- to ten-objective test problems and five real-world problems. Since the IP2 operator utilizes the history of gradual and progressive improvements in solutions over generations, without requiring any additional solution evaluations, it opens up a new direction for ML-assisted evolutionary optimization.
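
Illustrative sketch (a simplification, not the IP2 operator itself): archive pairs of previous-generation solutions and the improved solutions they map to in the current generation, fit a random forest that learns this advancement, and use it to nudge offspring. The nearest-neighbor pairing and the blending coefficient are assumptions; the article's convergence-diversity handling is omitted.

```python
# Hedged sketch of an innovized-progress style learning step using a random
# forest to map previous-generation solutions to improved ones. Toy data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
dim, n = 5, 30
prev_gen = rng.random((n, dim))
curr_gen = np.clip(prev_gen - 0.1 * rng.random((n, dim)), 0.0, 1.0)  # toy "improved" solutions

# Associate each previous solution with its nearest current solution.
d = ((prev_gen[:, None, :] - curr_gen[None, :, :])**2).sum(-1)
targets = curr_gen[d.argmin(axis=1)]

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(prev_gen, targets)

offspring = rng.random((10, dim))
advanced = rf.predict(offspring)
# Blend the learnt advancement with the original offspring rather than
# replacing it outright; the extent of use is itself a design choice.
alpha = 0.5
offspring = np.clip((1 - alpha) * offspring + alpha * advanced, 0.0, 1.0)
print(offspring.shape)
```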

Reducing Negative Transfer Learning via Clustering for Dynamic Multiobjective Optimization

Dynamic multiobjective optimization problems (DMOPs) aim to optimize multiple (often conflicting) objectives that change over time. Recently, a number of promising algorithms based on transfer learning methods have been proposed to solve DMOPs. However, it is very challenging to reduce the negative effects of transfer learning and to find more effective transferred solutions. To fill this research gap, this article proposes a clustering-based transfer (CBT) learning method to solve DMOPs. When the environment changes, two novel operations, clustering-based selection (CBS) and CBT, are used to guide knowledge transfer. Specifically, CBS aims to find a population with nondominated solutions and dominated solutions as the training data for the new environment. Then, CBT further collects the previous Pareto-optimal solutions and some noise solutions as the training data for the previous environment. The two training data sets from different environments are divided into multiple clusters, and transfer learning is conducted, with high probability, between two similar clusters to reduce the negative effect, which yields an accurate prediction model for identifying promising solutions in the new environment. Empirical studies have been conducted on 14 benchmark DMOPs and one real-life path planning problem of unmanned air/ground vehicles, which validate the effectiveness of the proposed method. In particular, our method significantly reduces negative transfer in 12 out of 14 cases when compared with direct transfer learning.
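
Illustrative sketch (only the cluster-matching step, under assumed data and parameters): both training sets are clustered, clusters are matched by centroid distance, and transfer is carried out with high probability only between the matched, most similar clusters. The prediction model trained on the matched pairs is not shown.

```python
# Hedged sketch of clustering-based transfer: cluster both environments'
# training data, match similar clusters, and transfer between matches w.h.p.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
prev_env = rng.random((60, 3))            # training data from the previous environment
new_env = rng.random((60, 3)) + 0.1       # training data from the new environment

k = 4
km_prev = KMeans(n_clusters=k, n_init=10, random_state=0).fit(prev_env)
km_new = KMeans(n_clusters=k, n_init=10, random_state=0).fit(new_env)

# Match each new-environment cluster to its closest previous-environment cluster.
dist = ((km_new.cluster_centers_[:, None, :] -
         km_prev.cluster_centers_[None, :, :])**2).sum(-1)
match = dist.argmin(axis=1)

p_transfer = 0.9                          # transfer between similar clusters with high probability
for c_new, c_prev in enumerate(match):
    if rng.random() < p_transfer:
        src = prev_env[km_prev.labels_ == c_prev]
        dst = new_env[km_new.labels_ == c_new]
        print(f"transfer from prev-cluster {c_prev} ({len(src)} pts) "
              f"to new-cluster {c_new} ({len(dst)} pts)")
```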

A Review on Evolutionary Multitask Optimization: Trends and Challenges

Evolutionary algorithms (EAs) possess strong problem-solving abilities and have been applied in a wide range of applications. However, they still suffer from a high computational burden and poor generalization ability. To overcome these limitations, numerous studies consider extracting and sharing knowledge across distinct optimization task domains. Among these research strands, one representative branch is evolutionary multitask optimization (EMTO), which aims to solve multiple optimization tasks simultaneously. The implicit parallelism inherent in EAs fits naturally into the EMTO framework, which has given rise to a growing body of EMTO studies. This review presents a detailed exposition of research in the EMTO area. We identify the core components for designing EMTO algorithms and then organize the works that fuse EMTO with traditional EAs. By analyzing the connections among diverse strategies in different branches of EMTO, this review uncovers research trends and potentially important directions, and it also highlights several interesting real-world applications.

Evolutionary Search for Complete Neural Network Architectures With Partial Weight Sharing

Neural architecture search (NAS) provides an automatic solution for designing network architectures. Unfortunately, the direct search for complete task-dependent network architectures is laborious, since training and evaluating complete neural architectures over a large search space is computationally prohibitive. Recently, one-shot NAS (OSNAS) has attracted great attention in the NAS community because it significantly speeds up candidate architecture evaluation through weight sharing. However, the full weight sharing training paradigm in OSNAS may cause strong interference across candidate architectures and mislead the architecture search. To alleviate this problem, we propose a partial weight sharing OSNAS framework that directly evolves complete neural network architectures. In particular, we suggest a novel node representation scheme that randomly activates a subset of nodes of the one-shot model in each generation to reduce the weight coupling in the one-shot model. During the evolutionary search, a tailored crossover operator randomly samples nodes from two parent individuals or a single parent to construct new candidate architectures, thus effectively constraining the degree of weight sharing. Furthermore, we introduce a new mutation operator that replaces the chosen nodes of the one-shot model with randomly generated nodes to enhance the exploratory capability. Finally, we encode a set of pyramidal convolution operations in the search space, enabling the evolved neural networks to capture different levels of detail in the images. The proposed method is examined and compared with 26 state-of-the-art algorithms on ten image classification tasks, including the CIFAR series, CINIC10, ImageNet, and the MedMNIST series. The experimental results demonstrate that the proposed method finds, with much lower computational cost, neural architectures that achieve classification accuracy comparable to state-of-the-art designs.
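
Illustrative sketch (the encoding and operation names are assumptions, not the article's exact representation): an architecture can be viewed as a sequence of node choices from a shared pool (the one-shot model); crossover samples each position from one of the two parents, and mutation replaces chosen nodes with freshly generated ones, which limits how much weight sharing any two candidates have in common.

```python
# Hedged sketch of node-level crossover and mutation over a shared node pool.
# Node names (including the pyramidal convolutions) are illustrative.
import random

NODE_POOL = [f"pyconv_{k}x{k}" for k in (3, 5, 7, 9)] + ["skip", "maxpool"]
N_NODES = 8

def random_architecture(rng):
    return [rng.choice(NODE_POOL) for _ in range(N_NODES)]

def crossover(parent_a, parent_b, rng):
    """Sample each node from one of the parents, limiting weight coupling."""
    return [rng.choice((a, b)) for a, b in zip(parent_a, parent_b)]

def mutate(arch, rng, rate=0.2):
    """Replace chosen nodes with randomly generated ones for exploration."""
    return [rng.choice(NODE_POOL) if rng.random() < rate else node for node in arch]

rng = random.Random(0)
pa, pb = random_architecture(rng), random_architecture(rng)
child = mutate(crossover(pa, pb, rng), rng)
print(pa)
print(pb)
print(child)
```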