Generalization of Pareto-Optimality for Many-Objective Evolutionary Optimization

The vast majority of multiobjective evolutionary algorithms presented to date are Pareto-based. Usually, these algorithms perform well on problems with few (two or three) objectives. However, due to the poor discriminability of Pareto-optimality in many-objective spaces (typically four or more objectives), their effectiveness deteriorates progressively as the problem dimension increases. This paper generalizes Pareto-optimality both symmetrically and asymmetrically by expanding the dominance area of solutions, so as to enhance the scalability of existing Pareto-based algorithms. The generalized Pareto-optimality (GPO) criteria are comparatively studied in terms of the distribution of ranks, the ranking landscape, and the convergence of the evolutionary process over several benchmark problems. The results indicate that algorithms equipped with a generalized optimality criterion gain the flexibility to vary their selection pressure within certain ranges and obtain a richer variety of ranks, attaining faster and better convergence to some subsets of the Pareto optima. To compensate for the possible diversity loss induced by the generalization, a distributed evolution framework with adaptive parameter setting is also proposed and briefly discussed. Empirical results indicate that this strategy is quite promising for diversity preservation in algorithms associated with the GPO.
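
The abstract does not reproduce the exact GPO definitions, but the core idea of enlarging each solution's dominance area can be illustrated with a simple linear expansion in the spirit of alpha-dominance; the sketch below (with a purely illustrative expansion parameter alpha, not a parameter from the paper) shows how an expanded criterion resolves solution pairs that standard Pareto dominance leaves incomparable.

```python
import numpy as np

def pareto_dominates(fa, fb):
    """Standard Pareto dominance for minimization."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def expanded_dominates(fa, fb, alpha=0.3):
    """Dominance after linearly expanding the dominance area: each objective is
    augmented by a fraction alpha of the remaining objectives (alpha = 0 recovers
    plain Pareto dominance); a larger alpha means a larger dominance area and
    stronger selection pressure."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    ga = fa + alpha * (fa.sum() - fa)   # f_i + alpha * sum_{j != i} f_j
    gb = fb + alpha * (fb.sum() - fb)
    return bool(np.all(ga <= gb) and np.any(ga < gb))

# In many-objective spaces most solution pairs are mutually non-dominated under
# plain Pareto dominance; the expanded criterion resolves more of them.
a = [0.20, 0.52, 0.30, 0.40]
b = [0.30, 0.50, 0.40, 0.50]
print(pareto_dominates(a, b), expanded_dominates(a, b))   # False True
```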

Leveraged Neighborhood Restructuring in Cultural Algorithms for Solving Real-World Numerical Optimization Problems

Many researchers have developed population-based techniques to solve numerical optimization problems. Almost none of these techniques demonstrates consistent performance over a wide range of problems, as these problems differ substantially in their characteristics. In state-of-the-art cultural algorithms (CAs), problem solving is facilitated by the exchange of knowledge between a network of active knowledge sources in the belief space and networks of individuals in the population space. To enhance the performance of CAs, we restructure the social fabric interconnections to facilitate flexible communication among problem solvers in the population space. Several social network reconfiguration mechanisms and types of communication are examined. This extended CA is compared with other CA variants and other well-known state-of-the-art algorithms on a set of challenging real-world problems. The numerical results show that the injection of neighborhoods with flexible subnetworks enhances performance on a diverse landscape of numerical optimization problems.
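
As a rough illustration of the kind of social-fabric restructuring described (not the authors' specific mechanisms), the sketch below builds a ring neighborhood over the population and randomly rewires a fraction of its links, producing the flexible subnetworks over which knowledge would be exchanged.

```python
import random

def ring_neighbors(n, k=2):
    """Ring topology: individual i communicates with its k neighbors on each side."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

def rewire(neighbors, p=0.2, seed=0):
    """Randomly rewire a fraction p of the links, turning some local connections
    into long-range ones (a small-world-style reconfiguration)."""
    rng = random.Random(seed)
    n = len(neighbors)
    out = {i: list(js) for i, js in neighbors.items()}
    for i in out:
        for idx in range(len(out[i])):
            if rng.random() < p:
                j = rng.randrange(n)
                if j != i:
                    out[i][idx] = j
    return out

topology = rewire(ring_neighbors(20), p=0.3)
print(topology[0])  # individuals that individual 0 exchanges knowledge with
```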

Table of contents

Presents the table of contents for this issue of the periodical.

Solving Bilevel Multicriterion Optimization Problems With Lower Level Decision Uncertainty

Bilevel optimization problems are characterized by a hierarchical leader-follower structure, in which the leader seeks to optimize her own strategy while taking the response of the follower into account. These problems are referred to as Stackelberg problems in the domain of game theory, and as bilevel problems in the domain of mathematical programming. In a number of practical scenarios, a bilevel problem is solved by a leader who needs to take multiple objectives into account and simultaneously deal with the decision uncertainty involved in modeling the follower's behavior. Such problems are often encountered in strategic product design, homeland security applications, and taxation policy. However, their hierarchical nature makes these problems difficult to solve, and they are commonly simplified by assuming a deterministic setup with smooth objective functions. In this paper, we focus our attention on the development of a flexible evolutionary algorithm for solving multicriterion bilevel problems with lower level (follower) decision uncertainty. The performance of the algorithm is evaluated in a comparative study on a number of test problems. In addition to the numerical experiments, we consider two real-world examples from the field of environmental economics and management to illustrate how the framework can be used to obtain optimal strategies.
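
The nested structure the abstract describes can be sketched in a few lines for a deterministic, single-objective toy problem (the paper's algorithm handles multiple leader objectives and follower decision uncertainty; the objectives and grids below are purely illustrative): for every candidate leader decision, the follower's problem is solved first, and the leader is then evaluated at that best response.

```python
import numpy as np

def follower_objective(x, y):      # hypothetical follower (lower level) objective
    return (y - x) ** 2

def leader_objective(x, y):        # hypothetical leader (upper level) objective
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

def best_response(x, y_grid):
    """Lower-level solve: brute-force the follower's best response to x."""
    return min(y_grid, key=lambda y: follower_objective(x, y))

y_grid = np.linspace(-3.0, 3.0, 301)
x_grid = np.linspace(-3.0, 3.0, 301)
# Upper-level solve: the leader optimizes knowing the follower will respond optimally.
best_x = min(x_grid, key=lambda x: leader_objective(x, best_response(x, y_grid)))
print(best_x, best_response(best_x, y_grid))   # roughly x = 1.5, y = 1.5
```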

Call for nominations / Applications for the position of the Editor-in-Chief of IEEE Transactions on Emerging Topics in Computational Intelligence

In January 2017, the IEEE Computational Intelligence Society will launch a new Transactions named IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI). The official scope of TETCI is reproduced below: The IEEE Transactions on Emerging Topics in Computational Intelligence publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys. The Transactions on Emerging Topics in Computational Intelligence is financially sponsored by the IEEE Computational Intelligence Society and technically cosponsored by the Computer Society. We are looking for a suitable candidate to serve as the Editor-in-Chief (EIC) of TETCI from January 1, 2017. To facilitate this, the IEEE CIS Executive Committee has formed an Ad Hoc Search Committee, which solicits nominations/applications for the position. Nominees/applicants should be dedicated volunteers with outstanding research profiles covering different facets of computational intelligence and extensive editorial experience. The nomination/application package should include a complete CV along with a separate description (max 300 words per item) of each of the following items: Vision Statements; Editorial Experience; Summary of publishing experience in IEEE CIS journals/magazine; IEEE CIS Volunteer Experience; Institutional Support; Networking with the Community; and why the candidate considers himself/herself fit for this position. The nomination/application package should be emailed as a single PDF to N. R. Pal at nrpal59@gmail.com by April 15, 2016.

Balancing Convergence and Diversity in Decomposition-Based Many-Objective Optimizers

Decomposition-based multiobjective evolutionary algorithms (MOEAs) generally make use of aggregation functions to decompose a multiobjective optimization problem into multiple single-objective optimization problems. However, due to the nature of the contour lines of the adopted aggregation functions, they usually fail to preserve diversity in high-dimensional objective spaces, even when using diverse weight vectors. To address this problem, we propose to explicitly maintain the desired diversity of solutions during the evolutionary process by exploiting the perpendicular distance from a solution to its weight vector in the objective space, which achieves a better balance between convergence and diversity in many-objective optimization. The idea is implemented to enhance two well-performing decomposition-based algorithms, i.e., MOEA based on decomposition and ensemble fitness ranking. The two enhanced algorithms are compared to several state-of-the-art algorithms, and a series of comparative experiments are conducted on a number of test problems from two well-known test suites. The experimental results show that the two proposed algorithms are generally more effective than their predecessors in balancing convergence and diversity, and they are also very competitive against other existing algorithms for solving many-objective optimization problems.
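
The perpendicular-distance idea is easy to state concretely: the distance from a (translated) objective vector to the line spanned by its weight vector measures how far a solution strays from its subproblem's search direction. The sketch below computes that distance and folds it into a PBI-style aggregation; the penalty weight theta and the exact way the paper's two algorithms use the distance are not taken from the abstract.

```python
import numpy as np

def perpendicular_distance(f, w):
    """Distance from objective vector f to the ray spanned by weight vector w."""
    f, w = np.asarray(f, float), np.asarray(w, float)
    proj = (f @ w) / (w @ w) * w          # orthogonal projection of f onto w
    return float(np.linalg.norm(f - proj))

def pbi_like(f, w, z_star, theta=5.0):
    """PBI-style aggregation: d1 rewards progress from the ideal point z_star
    along the weight direction (convergence), d2 penalizes the perpendicular
    deviation from that direction (diversity)."""
    f, w, z = (np.asarray(a, float) for a in (f, w, z_star))
    wn = w / np.linalg.norm(w)
    d1 = float((f - z) @ wn)
    d2 = float(np.linalg.norm(f - z - d1 * wn))
    return d1 + theta * d2

print(perpendicular_distance([1.0, 1.0], [1.0, 0.0]))            # 1.0
print(pbi_like([0.6, 0.4], w=[0.5, 0.5], z_star=[0.0, 0.0]))
```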

A Multiobjective Evolutionary Algorithm Based on Decision Variable Analyses for Multiobjective Optimization Problems With Large-Scale Variables

State-of-the-art multiobjective evolutionary algorithms (MOEAs) treat all the decision variables as a whole when optimizing. Inspired by the cooperative coevolution and linkage learning methods in the field of single-objective optimization, it is interesting to decompose a difficult high-dimensional problem into a set of simpler, low-dimensional subproblems that are easier to solve. However, with no prior knowledge about the objective functions, it is not clear how to perform this decomposition. Moreover, it is difficult to apply such a decomposition to multiobjective optimization problems (MOPs) because their objective functions commonly conflict with one another; that is to say, changing the decision variables generates incomparable solutions. This paper introduces interdependence variable analysis and control variable analysis to deal with these two difficulties, and thereby proposes an MOEA based on decision variable analyses (DVAs). Control variable analysis is used to recognize the conflicts among the objective functions and, more specifically, to identify which variables affect the diversity of the generated solutions and which variables play an important role in the convergence of the population. Based on the learned variable linkages, interdependence variable analysis decomposes the decision variables into a set of low-dimensional subcomponents. The empirical studies show that DVA can improve the solution quality on most difficult MOPs. The code and supplementary material of the proposed algorithm are available at http://web.xidian.edu.cn/fliu/paper.html.
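
One way to picture the control variable analysis (a simplified reading of the abstract, not the authors' exact procedure) is to perturb one decision variable at a time and inspect the dominance relations among the resulting objective vectors: variables whose perturbations form a dominance chain mainly drive convergence, while variables whose perturbations stay mutually non-dominated mainly drive diversity.

```python
import numpy as np

def classify_variable(evaluate, x, i, n_perturb=10, lo=0.0, hi=1.0, seed=0):
    """Perturb decision variable i alone and look at the dominance relations among
    the resulting objective vectors: mostly a dominance chain -> the variable
    mainly drives convergence; mostly mutually non-dominated -> it mainly drives
    diversity.  (Simplified sketch.)"""
    rng = np.random.default_rng(seed)
    objs = []
    for _ in range(n_perturb):
        y = np.array(x, dtype=float)
        y[i] = rng.uniform(lo, hi)
        objs.append(np.asarray(evaluate(y), dtype=float))
    dominated, total = 0, 0
    for a in range(len(objs)):
        for b in range(a + 1, len(objs)):
            total += 1
            fa, fb = objs[a], objs[b]
            if (np.all(fa <= fb) and np.any(fa < fb)) or (np.all(fb <= fa) and np.any(fb < fa)):
                dominated += 1
    return "convergence-related" if dominated / total > 0.5 else "diversity-related"

# Hypothetical two-objective test function: x[0] moves a solution along the front,
# the remaining variables control the distance to it.
def toy(x):
    g = 1.0 + 9.0 * np.mean(x[1:])
    return [x[0] * g, (1.0 - x[0]) * g]

x0 = np.full(5, 0.5)
print(classify_variable(toy, x0, 0), classify_variable(toy, x0, 1))
```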

A Tunable Generator of Instances of Permutation-Based Combinatorial Optimization Problems

In this paper, we propose a tunable generator of instances of permutation-based combinatorial optimization problems. Our approach is based on a probabilistic model for permutations called the generalized Mallows model. The generator depends on a set of parameters that permits control of the properties of the output instances. Specifically, in order to create an instance, we solve a linear programming problem in the parameters, where the constraints ensure that the instance has a fixed number of local optima and the linear objective function encodes qualitative characteristics of the instance. We exemplify the use of the generator by giving three distinct linear functions that produce three landscapes with different qualitative properties. After that, our generator is tested in two different ways. First, we test the flexibility of the model by producing instances similar to benchmark instances. Second, we assess the capacity of the generator to create instances of different types according to their difficulty for population-based algorithms. We study the influence of the input parameters on the behavior of these algorithms, giving an example of a property that can be used to analyze their performance.
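
The sampling side of such a generator is straightforward under the generalized Mallows model's Kendall's-tau decomposition; the sketch below draws permutations centered at the identity with per-position spread parameters theta_j (the paper's linear program for fixing the number of local optima and shaping the landscape is not reproduced here).

```python
import numpy as np

def sample_gmm(n, thetas, rng=None):
    """Sample a permutation of {0,...,n-1} from a generalized Mallows model
    centered at the identity: each inversion-table entry V_j in {0,...,n-1-j}
    is drawn independently with P(V_j = r) proportional to exp(-theta_j * r),
    and the permutation is rebuilt from the sampled V's."""
    rng = rng or np.random.default_rng()
    remaining = list(range(n))
    perm = []
    for j in range(n - 1):
        m = n - 1 - j                          # V_j ranges over 0..m
        probs = np.exp(-thetas[j] * np.arange(m + 1))
        probs /= probs.sum()
        v = rng.choice(m + 1, p=probs)
        perm.append(remaining.pop(v))          # the (v+1)-th smallest remaining item
    perm.append(remaining.pop())
    return perm

# Larger spread parameters concentrate samples around the identity permutation;
# the per-position thetas are one knob an instance generator can tune.
rng = np.random.default_rng(1)
print(sample_gmm(6, thetas=[2.0] * 5, rng=rng))
print(sample_gmm(6, thetas=[0.1] * 5, rng=rng))
```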

Average Convergence Rate of Evolutionary Algorithms

In evolutionary optimization, it is important to understand how fast evolutionary algorithms converge to the optimum per generation, that is, their convergence rates. This letter proposes a new measure of the convergence rate, called the average convergence rate. It is a normalized geometric mean of the reduction ratio of the fitness difference per generation. The calculation of the average convergence rate is very simple, and it is applicable to most evolutionary algorithms on both continuous and discrete optimization problems. A theoretical study of the average convergence rate is conducted for discrete optimization. Lower bounds on the average convergence rate are derived. The limit of the average convergence rate is analyzed, and then the asymptotic average convergence rate is proposed.
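
Reading the definition directly from the description above (a hedged reconstruction, since the letter's formal definition is not reproduced here), the average convergence rate after t generations is one minus the geometric mean of the per-generation reduction ratios of the fitness difference, which collapses to 1 - (e_t / e_0)^(1/t):

```python
def average_convergence_rate(fitness_history, f_opt):
    """Average convergence rate after t generations: one minus the geometric mean
    of the per-generation reduction ratios of the fitness difference
    e_k = |f_k - f_opt|, i.e. R_t = 1 - (e_t / e_0) ** (1 / t).
    Returns 1.0 if the optimum has been reached."""
    e0 = abs(fitness_history[0] - f_opt)
    et = abs(fitness_history[-1] - f_opt)
    t = len(fitness_history) - 1
    if et == 0.0:
        return 1.0
    return 1.0 - (et / e0) ** (1.0 / t)

# A run that halves its fitness error every generation has R_t = 0.5 for all t.
history = [16.0, 8.0, 4.0, 2.0, 1.0]
print(average_convergence_rate(history, f_opt=0.0))  # 0.5
```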