A Many-Objective Evolutionary Algorithm With Two Interacting Processes: Cascade Clustering and Reference Point Incremental Learning

Research has shown the difficulty of obtaining proximity while maintaining diversity in many-objective optimization problems. The complexity of the true Pareto front poses challenges for reference vector-based algorithms because of their insufficient adaptability to diverse front characteristics when no a priori knowledge is available. This paper proposes a many-objective optimization algorithm with two interacting processes: cascade clustering and reference point incremental learning (CLIA). In the population selection process based on cascade clustering (CC), using the reference vectors provided by the incremental-learning process, the nondominated and the dominated individuals are clustered and sorted in different manners in a cascade style and are selected in a round-robin fashion for better proximity and diversity. In the reference vector adaptation process based on reference point incremental learning, using the feedback from the CC-based process, a proper distribution of reference points is gradually obtained by incremental learning. Experimental studies on several benchmark problems show that CLIA is competitive with state-of-the-art algorithms and exhibits impressive efficiency and versatility, using only the interactions between the two processes without incurring extra evaluations.
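
The round-robin selection described above can be illustrated with a minimal Python sketch. It assumes individuals have already been grouped into clusters (one per active reference vector) and sorted within each cluster by the cascade-clustering criterion; the function name and structure are illustrative assumptions, not taken from the CLIA implementation.

import numpy as np

def round_robin_select(clusters, pop_size):
    """Pick individuals cluster by cluster (round-robin) until the population
    budget is filled. Each cluster is assumed to be a list of individuals
    already sorted best-first, so the best member of every cluster is
    considered before any second member is taken. Illustrative sketch only."""
    selected = []
    rank = 0
    while len(selected) < pop_size:
        progressed = False
        for cluster in clusters:
            if rank < len(cluster):
                selected.append(cluster[rank])
                progressed = True
                if len(selected) == pop_size:
                    return selected
        if not progressed:  # all clusters exhausted before the budget was met
            break
        rank += 1
    return selected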

A Generator for Multiobjective Test Problems With Difficult-to-Approximate Pareto Front Boundaries

In some real-world applications, it has been found that the performance of multiobjective evolutionary algorithms (MOEAs) may deteriorate when boundary solutions in the Pareto front (PF) are more difficult to approximate than others. Such a problem feature, referred to as difficult-to-approximate (DtA) PF boundaries, is seldom considered in existing multiobjective optimization test problems. To fill this gap and facilitate systematic studies, we introduce a new test problem generator. The proposed generator enables the design of test problems with controllable difficulty regarding the feature of DtA PF boundaries. Three representative MOEAs, NSGA-II, SMS-EMOA, and MOEA/D-DRA, are run on a series of test problems created using the proposed generator. Experimental results indicate that all three algorithms perform poorly on the new test problems. Meanwhile, a modified variant of MOEA/D-DRA, denoted MOEA/D-DRA-UT, is shown to be more effective in dealing with these problems. It is concluded that the rational allocation of computational resources among different PF parts is crucial for MOEAs to handle problems with DtA PF boundaries.
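
The paper's generator is not reproduced here, but a hypothetical biobjective problem can illustrate what a DtA PF boundary means: a distance-related penalty that grows toward the ends of the front makes boundary solutions harder to approximate. The specific formulation and names below are assumptions for illustration only.

import numpy as np

def dta_boundary_problem(x, difficulty=10.0):
    """Hypothetical biobjective problem illustrating difficult-to-approximate
    (DtA) PF boundaries; this is NOT the generator from the paper.
    x[0] in [0, 1] controls the position on the front, x[1:] in [0, 1]
    control the distance to the front. The distance penalty g is scaled up
    near the boundaries (x[0] close to 0 or 1), so an optimizer needs more
    effort to converge there."""
    x = np.asarray(x, dtype=float)
    position = x[0]
    g = np.sum((x[1:] - 0.5) ** 2)  # distance to the true PF (zero on the PF)
    boundary_weight = 1.0 + difficulty * (1.0 - 4.0 * position * (1.0 - position))
    g = g * boundary_weight  # weight is 1 at the middle, 1 + difficulty at the ends
    f1 = (1.0 + g) * position
    f2 = (1.0 + g) * (1.0 - position)
    return np.array([f1, f2])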

Impact of Communication Topology in Particle Swarm Optimization

Particle swarm optimization (PSO) has two salient components: 1) a dynamical rule governing particle motion and 2) an interparticle communication topology. Recent practice has focused on the fully connected topology (Gbest) despite earlier indications of the superiority of local particle neighborhoods. This paper seeks to address the controversy through empirical trials of canonical PSO on a large benchmark of functions categorized into 14 properties. This paper confirms the early lore that Gbest is the overall better algorithm for unimodal and separable problems and that a ring neighborhood of connectivity two (Lbest) is the preferred choice for multimodal, nonseparable, and composition functions. Topologies of intermediate particle connectivity were also tested, and the difference in global/local performance was found to be even more marked. A measure of significant improvement is introduced in order to distinguish major improvements from refinements. Lbest, according to the experiments on the 84 test functions and a bimodal problem of adjustable severity, is found to make significant improvements later in the run and to be more diverse at termination. A mobility study shows that Lbest is better able to jump between optimum basins; indeed, Gbest was unable to switch basins in the bimodal trial. The implication is that Lbest's larger terminal diversity, its better ability to basin hop, and its later significant improvements account for the performance enhancement. In several cases where Lbest was not the better algorithm, the trials show that Lbest was not stuck but would have continued to improve with an extended evaluation budget. Canonical PSO is a baseline algorithm and the ancestor of all contemporary PSO variants. These variants build on the basic structure of baseline PSO, and the broad conclusions of this paper are expected to carry over. In particular, research that fails to consider local topologies risks underplaying the success of the promoted algorithm.
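
The Gbest/Lbest distinction is only a matter of which personal bests each particle may consult. The sketch below, assuming minimization and a ring of connectivity two for Lbest, shows how a neighborhood best would be picked in a canonical PSO loop; it is a simplified illustration, not the experimental code used in the paper.

import numpy as np

def neighborhood_best(pbest_values, topology="lbest"):
    """Return, for every particle, the index of the best personal best in its
    neighborhood. 'gbest' uses the fully connected swarm (one global leader);
    'lbest' uses a ring of connectivity two, so each particle only sees itself
    and its two ring neighbors. Sketch of the two topologies discussed above."""
    n = len(pbest_values)
    best_idx = np.empty(n, dtype=int)
    if topology == "gbest":
        best_idx[:] = int(np.argmin(pbest_values))
    else:  # lbest ring of connectivity two
        for i in range(n):
            neigh = [(i - 1) % n, i, (i + 1) % n]
            best_idx[i] = min(neigh, key=lambda j: pbest_values[j])
    return best_idx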

Learning From a Stream of Nonstationary and Dependent Data in Multiobjective Evolutionary Optimization

Combining machine learning techniques with evolutionary optimization has shown great potential, since the domain knowledge of an optimization problem, if well learned, can be a great help in creating high-quality solutions. However, existing learning-based multiobjective evolutionary algorithms (MOEAs) spend too much computational overhead on learning. To address this problem, we propose a learning-based MOEA in which an online learning algorithm is embedded within the evolutionary search procedure. The online learning algorithm takes the stream of solutions generated sequentially along the evolution as its training data. It is noted that this stream of solutions is temporal, dependent, nonstationary, and nonstatic. These data characteristics make existing online learning algorithms unsuitable for the evolution data. We hence modify an existing online agglomerative clustering algorithm to accommodate these characteristics. The modified online clustering algorithm is applied to adaptively discover the structure of the Pareto optimal set, and the learned structure is used to guide the creation of new solutions. Experimental results show significant improvement over four state-of-the-art MOEAs on a variety of benchmark problems.
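
As a rough illustration of online clustering on a nonstationary stream, the toy class below absorbs each new solution into the nearest centre or opens a new cluster, and applies a forgetting factor so that stale statistics fade. It is a simplified sketch under these assumptions (fixed merge radius, simple forgetting), not the modified online agglomerative algorithm proposed in the paper.

import numpy as np

class OnlineClusters:
    """Toy online clustering in the spirit described above: each newly
    generated solution is absorbed into the nearest cluster centre if it is
    close enough, otherwise it seeds a new cluster. A forgetting factor
    down-weights old statistics so the model can track nonstationary data."""

    def __init__(self, merge_radius=0.1, forgetting=0.99):
        self.centres, self.weights = [], []
        self.merge_radius = merge_radius
        self.forgetting = forgetting

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.weights = [w * self.forgetting for w in self.weights]  # forget old data
        if self.centres:
            dists = [np.linalg.norm(x - c) for c in self.centres]
            k = int(np.argmin(dists))
            if dists[k] < self.merge_radius:
                self.weights[k] += 1.0
                self.centres[k] += (x - self.centres[k]) / self.weights[k]
                return k
        self.centres.append(x.copy())  # start a new cluster for a novel region
        self.weights.append(1.0)
        return len(self.centres) - 1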

Table of contents

Presents the table of contents for this issue of the publication.

A Multimodal Multiobjective Evolutionary Algorithm Using Two-Archive and Recombination Strategies

There has been little research on solving multimodal multiobjective optimization problems, even though they are commonly seen in real-world applications and are difficult for existing evolutionary optimizers. In this paper, we propose a novel multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies. In the proposed algorithm, the properties of the decision variables and the relationships among them are first analyzed to guide the evolutionary search. Then, a general framework using two archives, i.e., the convergence archive and the diversity archive, is adopted to cooperatively solve these problems. Moreover, the diversity archive simultaneously employs a clustering strategy to guarantee diversity in the objective space and a niche-based clearing strategy to promote the same in the decision space. At the end of the evolutionary process, the solutions in the convergence and diversity archives are recombined to obtain a large number of multiple Pareto optimal solutions. In addition, a set of benchmark test functions and a performance metric are designed for multimodal multiobjective optimization. The proposed algorithm is empirically compared with two state-of-the-art evolutionary algorithms on these test functions. The comparative results demonstrate that the overall performance of the proposed algorithm is significantly superior to that of the competing algorithms.
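
Niche-based clearing in decision space can be sketched as follows: within each niche of a given radius, only the best individual survives, so equivalent solutions located in distant niches are all preserved. The function below is an illustrative sketch with assumed inputs (a scalar fitness to minimize and a fixed niche radius), not the authors' exact clearing strategy.

import numpy as np

def niche_clearing(decision_vars, fitness, niche_radius):
    """Keep only the best individual within every decision-space niche of
    radius `niche_radius`, preserving equivalent Pareto optimal solutions
    that live in different niches. Returns the indices of the survivors."""
    order = np.argsort(fitness)  # best (smallest) fitness considered first
    kept = []
    for i in order:
        x = decision_vars[i]
        if all(np.linalg.norm(x - decision_vars[j]) > niche_radius for j in kept):
            kept.append(i)  # far enough from every survivor, so it heads a new niche
    return kept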

Comprehensive Learning Particle Swarm Optimization Algorithm With Local Search for Multimodal Functions

A comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS) is proposed to pursue higher optimization performance by taking advantage of CLPSO's strong global search capability and LS's fast convergence. This paper proposes an adaptive LS starting strategy that uses our proposed quasi-entropy index to address the key issue of when to start LS. The changes of the index as the optimization proceeds are analyzed theoretically and via numerical tests. The proposed algorithm is tested on multimodal benchmark functions, and a parameter sensitivity analysis is performed to demonstrate its robustness. The comparison results reveal an overall higher convergence rate and accuracy than those of CLPSO and state-of-the-art particle swarm optimization variants.
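
A quasi-entropy-style trigger for starting LS might look like the sketch below: the swarm positions are histogrammed per dimension, and LS is started once the normalized entropy (a proxy for swarm spread) falls below a threshold. The exact index used in the paper may differ; the bin count and threshold here are assumed values chosen only for illustration.

import numpy as np

def should_start_local_search(positions, n_bins=10, threshold=0.3):
    """Entropy-style trigger for starting local search. `positions` is an
    (n_particles, n_dims) array. Each dimension is histogrammed into n_bins;
    when the mean normalized entropy of the bin occupancy drops below
    `threshold`, the swarm is considered concentrated enough for LS to
    refine the current best solution."""
    positions = np.asarray(positions, dtype=float)
    entropies = []
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]  # ignore empty bins so log is defined
        entropies.append(-(p * np.log(p)).sum() / np.log(n_bins))
    return float(np.mean(entropies)) < threshold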
