A Scalable Indicator-Based Evolutionary Algorithm for Large-Scale Multiobjective Optimization

The performance of traditional multiobjective evolutionary algorithms (MOEAs) often deteriorates rapidly as the number of decision variables increases. While some efforts have been made to design new algorithms by adapting techniques from large-scale single-objective optimization to the MOEA context, the specific difficulties that may arise in large-scale multiobjective optimization have rarely been studied. In this paper, the challenges that emerge exclusively as the number of variables of a multiobjective optimization problem (MOP) increases are examined empirically, and the popular benchmarks are categorized into three groups accordingly. Problems in the first category only require MOEAs to have stronger convergence, and can thus be mitigated using techniques employed in large-scale single-objective optimization. Problems that require MOEAs to have stronger diversification but ignore the correlation between position and distance functions form the second category. The remaining problems, which pose a great challenge to the balance between diversification and convergence by considering the correlation between position and distance functions, form the third. While existing large-scale MOEAs perform well on the problems in the first two categories, they suffer a significant performance loss when applied to those in the third category. To solve large-scale MOPs in this category, we have developed a novel indicator-based algorithm with an enhanced diversification mechanism. The proposed algorithm incorporates a new solution generator with an external archive, forcing the search toward different subregions of the Pareto front using a dual local search mechanism. The results obtained by applying the proposed algorithm to a wide variety of problems (108 instances in total) with up to 8192 variables demonstrate that it outperforms eight state-of-the-art approaches on the examined problems in the third category and show its advantage in the balance between diversification and convergence.
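The abstract above does not specify the paper's indicator, so as a point of reference, a minimal sketch of the classic additive-epsilon indicator selection (in the style of IBEA) illustrates what "indicator-based" environmental selection generally means; the function names and the kappa scaling constant here are illustrative assumptions, not the paper's algorithm.

```python
# Generic indicator-based environmental selection (IBEA-style sketch).
# Assumes minimization; all names and kappa are illustrative, not from the paper.
import math

def eps_indicator(a, b):
    """Smallest eps by which 'a' must be shifted to weakly dominate 'b'."""
    return max(ai - bi for ai, bi in zip(a, b))

def indicator_fitness(pop, kappa=0.05):
    """Fitness of each point: exponentially scaled sum of indicator losses
    against every other point (more negative = worse)."""
    return [
        -sum(math.exp(-eps_indicator(pop[j], pop[i]) / kappa)
             for j in range(len(pop)) if j != i)
        for i in range(len(pop))
    ]

def environmental_selection(pop, mu, kappa=0.05):
    """Iteratively drop the worst-fitness point until mu points remain."""
    pop = list(pop)
    while len(pop) > mu:
        fit = indicator_fitness(pop, kappa)
        pop.pop(fit.index(min(fit)))
    return pop
```

Dominated points receive strongly negative fitness (their dominator's indicator value is non-positive, so the exponential term is large) and are discarded first, which is the convergence pressure such indicators provide.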

Learning to Decompose: A Paradigm for Decomposition-Based Multiobjective Optimization

Decomposition-based evolutionary multiobjective optimization (EMO) algorithms have become an increasingly popular choice for a posteriori multiobjective optimization. However, recent studies have shown that their performance strongly depends on the shape of the Pareto front (PF). This can be attributed to the decomposition method, whose reference points and subproblem formulation settings are not well adapted to various problem characteristics. In this paper, we develop a learning-to-decompose (LTD) paradigm that adaptively sets the decomposition method by learning the characteristics of the estimated PF. Specifically, it consists of two interdependent parts, i.e., a learning module and an optimization module. Given the current nondominated solutions from the optimization module, the learning module periodically learns an analytical model of the estimated PF. Thereafter, useful information is extracted from the learned model to set the decomposition method for the optimization module: 1) reference points compliant with the PF shape and 2) subproblem formulations whose contours and search directions are appropriate for the current status. Accordingly, the optimization module, which can be any decomposition-based EMO algorithm in principle, decomposes the multiobjective optimization problem into a number of subproblems and optimizes them simultaneously. To validate our proposed LTD paradigm, we integrate it with two decomposition-based EMO algorithms, and compare them with four state-of-the-art algorithms on a series of benchmark problems with various PF shapes.
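For readers unfamiliar with subproblem formulations in decomposition-based EMO, the following is a minimal sketch of the standard weighted-Tchebycheff scalarization (popularized by MOEA/D), which is the kind of formulation the LTD paradigm adapts; the fixed weight vector and ideal point here are assumptions for illustration, whereas LTD would set them adaptively.

```python
# Weighted-Tchebycheff scalarization: one subproblem per weight vector.
# Assumes minimization; names are illustrative, not from the paper.
def tchebycheff(f, weight, ideal):
    """Scalarized subproblem value: the largest weighted distance of an
    objective vector f from the ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, ideal))

def best_for_subproblem(pop_objs, weight, ideal):
    """Pick the solution that minimizes this subproblem's scalarized value."""
    return min(pop_objs, key=lambda f: tchebycheff(f, weight, ideal))
```

Each weight vector defines contour lines and a search direction in objective space; LTD's contribution, per the abstract, is choosing such reference points and formulations from a learned model of the PF rather than fixing them a priori.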

Evolutionary Many-Objective Optimization Based on Dynamical Decomposition

Decomposition-based many-objective evolutionary algorithms generally decompose the objective space into multiple subregions with the help of a set of reference vectors. The resulting subregions are fixed since the reference vectors are usually predefined. When the optimization problem has a complicated Pareto front (PF), this decomposition may decrease the algorithm performance. To deal with this problem, this paper proposes a dynamical decomposition strategy. Instead of using predefined reference vectors, solutions themselves are used as reference vectors. Thus, they are adapted to the shape of the PF automatically. Besides, the subregions are produced one by one through successively bipartitioning the objective space. The resulting subregions are not fixed but dynamically determined by the population solutions as well as the subregions produced previously. Based on this strategy, a solution ranking method, named the dynamical-decomposition-based ranking method (DDR), is proposed, which can be employed in the mating selection and environmental selection of commonly used algorithm frameworks. Compared with those in other decomposition-based algorithms, DDR has the following properties: 1) no predefined reference vectors are required; 2) fewer parameters are involved; and 3) the ranking results can not only be utilized directly to select solutions but also serve as a secondary criterion in traditional Pareto-based algorithms. In this paper, DDR is embedded in two algorithm frameworks for handling many-objective optimization problems. Comparisons with five state-of-the-art algorithms on 31 widely used test problems are carried out to test the performance of the proposed approach. The experimental results show the effectiveness of the proposed approach in keeping a good tradeoff between convergence and diversity.
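The abstract does not give DDR's exact bipartition rule, but the core idea of using solutions themselves as reference vectors can be sketched as an angle-based split of the population between two pivot solutions; the pivot choice and the tie-breaking here are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: split a population into two subregions by angle to two
# pivot solutions that act as reference vectors (loosely mirroring the
# bipartitioning idea; not the paper's exact DDR procedure).
import math

def angle(u, v):
    """Angle in radians between two objective vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def bipartition(pop_objs, pivot_a, pivot_b):
    """Assign each point to the pivot it subtends the smaller angle with;
    ties go to the first pivot."""
    left, right = [], []
    for f in pop_objs:
        (left if angle(f, pivot_a) <= angle(f, pivot_b) else right).append(f)
    return left, right
```

Because the pivots are population members rather than predefined vectors, repeating such splits yields subregions that follow the distribution of solutions, and hence the PF shape, which is the adaptivity the abstract emphasizes.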

Multiphase Balance of Diversity and Convergence in Multiobjective Optimization

In multiobjective optimization, defining a good solution is a multifactored process. Most existing evolutionary multi- or many-objective optimization (EMO) algorithms have utilized two factors: 1) domination and 2) crowding levels of each solution. Although these two coarse-grained factors are found to be adequate in many EMO algorithms, their relative importance within an algorithm has been a matter of great concern in many recent studies. We argue that, besides these two factors, other more fine-grained factors are important. For example, since extreme objective-wise solutions are important for establishing a noise-free and stable normalization process, reaching extreme solutions is more crucial than finding other solutions. In this paper, we propose an integrated algorithm, B-NSGA-III, that produces much better convergence and diversity preservation. For this purpose, in addition to emphasizing extreme objective-wise solutions, B-NSGA-III tries to find solutions near intermediate undiscovered regions of the front. B-NSGA-III addresses critical algorithmic issues of convergence and diversity preservation directly through recent progress in the literature and integrates all these fine-grained factors seamlessly in an alternating-phases scheme. The proposed algorithm is shown to perform better than a number of commonly used existing methods.
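To make concrete why extreme objective-wise solutions matter for normalization, here is a minimal sketch of ideal/nadir-based objective normalization; it uses a crude per-objective maximum as the nadir estimate, whereas B-NSGA-III's actual normalization (and NSGA-III's hyperplane construction) is more involved.

```python
# Minimal objective normalization sketch: rescale each objective to [0, 1]
# using the population's ideal point and a crude nadir estimate.
# Illustrative only; not B-NSGA-III's actual normalization procedure.
def normalize(pop_objs):
    m = len(pop_objs[0])
    ideal = [min(f[i] for f in pop_objs) for i in range(m)]
    nadir_est = [max(f[i] for f in pop_objs) for i in range(m)]
    return [
        [(f[i] - ideal[i]) / max(nadir_est[i] - ideal[i], 1e-12)
         for i in range(m)]
        for f in pop_objs
    ]
```

If the population has not reached the extreme solutions, `nadir_est` underestimates the true nadir point and the normalized scale drifts between generations, which is exactly the instability the abstract's emphasis on extreme solutions is meant to avoid.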