Adaptive Multimodal Continuous Ant Colony Optimization

Seeking multiple optima simultaneously, the goal of multimodal optimization, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization (ACO) algorithms to preserve high diversity, this paper extends ACO to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced, in which an adaptive parameter adjustment that takes the differences among niches into consideration is developed. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Third, to enhance exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and with winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially on complex problems with large numbers of local optima.
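
As a concrete illustration of two of the components described above, the following is a minimal sketch, assuming a NumPy setting: a DE/rand/1-style mutation that builds a base vector from members of a niche, and a Gaussian local search around a niche seed. The function names, parameter defaults, and minimization convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def de_base_vector(niche, F=0.5, rng=None):
    """Combine three distinct niche members DE/rand/1 style to form a base vector."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = rng.choice(len(niche), size=3, replace=False)
    return niche[r1] + F * (niche[r2] - niche[r3])

def gaussian_local_search(seed, sigma, fitness, n_trials=5, rng=None):
    """Sample candidates from N(seed, sigma^2 I) and keep the best found (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    best_x, best_f = seed, fitness(seed)
    for _ in range(n_trials):
        cand = rng.normal(seed, sigma)
        f = fitness(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

# Toy usage on a 2-D niche around the origin (purely illustrative).
rng = np.random.default_rng(0)
niche = rng.normal(0.0, 0.3, size=(10, 2))
base = de_base_vector(niche, rng=rng)
print(gaussian_local_search(base, 0.1, lambda x: float(np.sum(x ** 2)), rng=rng))
```

In the full algorithm, the spread of the Gaussian sampling and the choice of base-vector construction would be adapted per niche; both are fixed here for brevity.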

Factored Evolutionary Algorithms

Factored evolutionary algorithms (FEAs) are a new class of evolutionary search-based optimization algorithms that have been successfully applied to various problems, such as training neural networks and performing abductive inference in graphical models. An FEA is unique in that it factors the objective function by creating overlapping subpopulations, each of which optimizes over a subset of the function's variables. In this paper, we give a formal definition of FEA algorithms and present empirical results related to their performance. One consideration in using an FEA is choosing the appropriate factor architecture, which determines the set of variables each factor optimizes. For this reason, we present the results of experiments comparing the performance of different factor architectures on several standard applications for evolutionary algorithms. Additionally, we show that an FEA's performance is not tied to the underlying optimization algorithm by creating FEA versions of hill climbing, particle swarm optimization, genetic algorithms, and differential evolution and comparing their performance to their single-population and cooperative coevolutionary counterparts.
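
To make the factoring idea tangible, the sketch below (not the paper's implementation) builds overlapping factors over a toy sphere objective and lets each factor optimize only its own variables by simple hill climbing, reading the remaining values from a shared full solution. The objective, the inner optimizer, and the factor layout are all assumptions.

```python
import numpy as np

def sphere(x):
    """Placeholder objective (assumed for illustration)."""
    return float(np.sum(x ** 2))

def optimize_factor(full_x, idx, f, rng, n_trials=50, step=0.1):
    """Hill-climb only the variables in `idx`, keeping the rest fixed."""
    best = full_x.copy()
    for _ in range(n_trials):
        cand = best.copy()
        cand[idx] += rng.normal(0.0, step, size=len(idx))
        if f(cand) < f(best):
            best = cand
    return best[idx]

rng = np.random.default_rng(0)
dim = 6
factors = [np.array([0, 1, 2]), np.array([2, 3, 4]), np.array([4, 5, 0])]  # overlapping subsets
full_x = rng.uniform(-5, 5, dim)

for _ in range(20):                     # one pass: optimize each factor, then share its values
    for idx in factors:
        full_x[idx] = optimize_factor(full_x, idx, sphere, rng)
print(sphere(full_x))
```

A complete FEA also includes compete and share steps that reconcile variables claimed by multiple factors; the loop above simply overwrites them in factor order.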

Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes

Recently, a number of high-performance many-objective evolutionary algorithms with systematically generated weight vectors have been proposed in the literature. These algorithms often show surprisingly good performance on the widely used DTLZ and WFG test problems, and their performance continues to be improved. The aim of this paper is to raise the concern that such a performance-improvement race may lead to algorithms overspecialized for these frequently used many-objective test problems. In this paper, we first explain the DTLZ and WFG test problems. Next, we explain many-objective evolutionary algorithms characterized by the use of systematically generated weight vectors. Then we discuss the relation between the features of the test problems and the search mechanisms of weight-vector-based algorithms such as the multiobjective evolutionary algorithm based on decomposition (MOEA/D), the nondominated sorting genetic algorithm III (NSGA-III), the MOEA based on dominance and decomposition (MOEA/DD), and the θ-dominance-based evolutionary algorithm (θ-DEA). Through computational experiments, we demonstrate that a slight change in the formulation of the DTLZ and WFG problems deteriorates the performance of these algorithms. After explaining the reason for the performance deterioration, we discuss the necessity of more general test problems and more flexible algorithms.
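
The systematically generated weight vectors referred to above are typically produced by a simplex-lattice (Das and Dennis) design; the sketch below enumerates them for m objectives and h divisions, which helps explain why such algorithms fit the triangular Pareto fronts of DTLZ and WFG so well. The function name is an illustrative assumption.

```python
from itertools import combinations

def simplex_lattice_weights(m, h):
    """All m-dimensional weight vectors with components i/h (i integer) summing to 1."""
    weights = []
    for cuts in combinations(range(h + m - 1), m - 1):  # stars-and-bars enumeration
        parts, prev = [], -1
        for c in cuts:
            parts.append(c - prev - 1)
            prev = c
        parts.append(h + m - 2 - prev)
        weights.append([p / h for p in parts])
    return weights

print(len(simplex_lattice_weights(3, 12)))  # 91 vectors, as used by NSGA-III for 3 objectives
```

Each vector lies on the unit simplex, matching the normalized triangular fronts of DTLZ and WFG; a differently shaped front leaves many of these vectors without corresponding Pareto-optimal solutions.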

Heterogeneous Cooperative Co-Evolution Memetic Differential Evolution Algorithm for Big Data Optimization Problems

Evolutionary algorithms (EAs) have recently been suggested as candidates for solving big data optimization problems, which involve a very large number of variables and need to be analyzed in a short period of time. However, EAs face a scalability issue when dealing with such problems. Moreover, the performance of EAs critically hinges on the utilized parameter values and operator types, making it impossible to design a single EA that outperforms all others on every problem instance. To address these challenges, we propose a heterogeneous framework that integrates a cooperative co-evolution method with various types of memetic algorithms. The cooperative co-evolution method splits the big problem into subproblems in order to increase the efficiency of the solving process, and the subproblems are then solved using various heterogeneous memetic algorithms. The proposed framework adaptively assigns, for each solution, different operators, parameter values, and a local search algorithm to efficiently explore and exploit the search space of the given problem instance. The performance of the proposed algorithm is assessed on the Big Data 2015 competition benchmark problems, which contain data with and without noise. Experimental results demonstrate that the proposed algorithm performs better with the cooperative co-evolution method than without it. Furthermore, it obtained competitive, if not better, results on all tested instances compared to other algorithms, while requiring lower computational time.
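
The cooperative co-evolution decomposition at the core of the framework can be sketched as follows; the fixed contiguous grouping, the trivial perturbation standing in for the heterogeneous memetic DE operators, and the objective are all simplifying assumptions for a self-contained example.

```python
import numpy as np

def evaluate(x):
    """Placeholder objective (assumed for illustration)."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
dim, group_size = 1000, 100
context = rng.uniform(-1.0, 1.0, dim)                        # shared full "context" solution
groups = [np.arange(s, s + group_size) for s in range(0, dim, group_size)]

for _ in range(10):                                          # one CC cycle = one pass over all groups
    for idx in groups:
        trial = context.copy()
        trial[idx] += rng.normal(0.0, 0.05, size=group_size) # stand-in for a memetic DE step
        if evaluate(trial) < evaluate(context):              # keep the subcomponent if it improves
            context = trial
print(evaluate(context))
```

The adaptive part of the framework, choosing operators, parameter values, and a local search per solution, would replace the fixed perturbation inside the inner loop.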

IEEE Transactions on Evolutionary Computation publication information

Provides a listing of the editorial board, current staff, committee members and society officers.

Table of contents

Presents the table of contents for this issue of the publication.

IEEE Transactions on Evolutionary Computation information for authors

These instructions give guidelines for preparing papers and present information for authors publishing in this journal.

Quantifying Variable Interactions in Continuous Optimization Problems

Interactions between decision variables typically make an optimization problem challenging for an evolutionary algorithm (EA) to solve. Exploratory landscape analysis (ELA) techniques can be used to quantify the level of variable interaction in an optimization problem. However, many studies using ELA techniques to investigate interactions have been limited to combinatorial problems, with very few focused on continuous problems. In this paper, we propose a novel ELA measure to quantify the level of variable interaction in continuous optimization problems. We evaluate the efficacy of this measure on a suite of benchmark problems consisting of 24 multidimensional continuous optimization functions with differing levels of variable interaction. Significantly, the results reveal that our measure is robust and can accurately identify variable interactions. We show that the solution quality found by an EA is correlated with the level of variable interaction in a given problem. Finally, we present results from simulation experiments illustrating that when our measure is embedded into an algorithm design framework, the enhanced algorithm achieves equal or better results on the benchmark functions.
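
The proposed measure itself is not reproduced here; the snippet below only illustrates the underlying intuition of detecting an interaction between two continuous variables via finite differences (in the spirit of differential grouping): if the effect of perturbing x_i changes after x_j is shifted, the two variables interact. The function name, step size, and tolerance are assumptions.

```python
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-6):
    """Return True if variables i and j appear non-separable at point x."""
    xa = x.copy(); xa[i] += delta        # perturb x_i alone
    d1 = f(xa) - f(x)
    xb = x.copy(); xb[j] += delta        # shift x_j, then perturb x_i again
    xc = xb.copy(); xc[i] += delta
    d2 = f(xc) - f(xb)
    return abs(d1 - d2) > eps            # differing effects imply interaction

f = lambda x: x[0] ** 2 + x[0] * x[1] + x[2] ** 2    # x0 and x1 interact; x2 is separable
x = np.zeros(3)
print(interacts(f, x, 0, 1), interacts(f, x, 0, 2))  # True False
```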

IEEE World Congress on Computational Intelligence

Describes the above-named upcoming special issue or section. May include topics to be covered or calls for papers.

A Multiobjective Cooperative Coevolutionary Algorithm for Hyperspectral Sparse Unmixing

Sparse unmixing of hyperspectral data is an important technique aimed at estimating the fractional abundances of the endmembers. Traditional sparse unmixing faces the $\boldsymbol{l_{0}}$-norm problem, which is NP-hard. Sparse unmixing is inherently a multiobjective optimization problem, yet most recent works combine the cost functions into a single aggregate objective function, which involves weighting parameters that are sensitive to the data set and difficult to tune. In this paper, a novel multiobjective cooperative coevolutionary algorithm is proposed to optimize the reconstruction term, the sparsity term, and the total variation regularization term simultaneously. A problem-dependent cooperative coevolutionary strategy is designed because sparse unmixing involves a large-scale optimization problem. The proposed approach optimizes the nonconvex $\boldsymbol{l_{0}}$-norm problem directly and can automatically find a better compromise between two or more competing cost function terms. Experimental results on simulated and real hyperspectral data sets demonstrate the effectiveness of the proposed method.
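
As a sketch of what optimizing the three terms simultaneously means, the following evaluates the reconstruction error, the nonconvex l0 sparsity count, and a simple total-variation term as a vector of objectives rather than a weighted aggregate. The matrix shapes, the 1-D form of the TV term, and the function name are assumptions, not the paper's formulation.

```python
import numpy as np

def unmixing_objectives(Y, M, A):
    """Y: bands x pixels data, M: bands x endmembers library, A: endmembers x pixels abundances."""
    reconstruction = np.linalg.norm(Y - M @ A, 'fro') ** 2   # data-fidelity term
    sparsity = np.count_nonzero(A)                           # the nonconvex l0 term
    tv = np.abs(np.diff(A, axis=1)).sum()                    # 1-D total variation over pixels
    return np.array([reconstruction, sparsity, tv])          # vector objective for a MOEA

rng = np.random.default_rng(2)
Y, M, A = rng.random((50, 30)), rng.random((50, 10)), rng.random((10, 30))
print(unmixing_objectives(Y, M, A))
```

A multiobjective search would then compare candidate abundance matrices by Pareto dominance over this vector, with the cooperative coevolutionary strategy decomposing the large-scale search as described above.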
