IEEE Transactions on Evolutionary Computation information for authors

Evolving an Improved Algorithm for Envelope Reduction Using a Hyper-Heuristic Approach

Large sparse symmetric matrices are characteristic of the linear systems found in various scientific and engineering disciplines, such as fluid mechanics, structural engineering, finite element analysis, and network analysis. In all such systems, the performance of solvers crucially depends on the sum of the distances of the matrix elements from the matrix’s main diagonal. This quantity is known as the envelope. In order to reduce the envelope of a matrix, its rows and columns must be reordered appropriately. The problem of minimizing the envelope size is exceptionally hard and known to be NP-complete. A considerable number of methods have been developed for reducing the envelope, among which the Sloan algorithm offered a substantial improvement over previous algorithms. In this paper, a hyper-heuristic approach based on genetic programming, which we term a genetic hyper-heuristic, is introduced for evolving an enhanced version of the Sloan algorithm. We also present a local search algorithm and integrate it with the new algorithm produced by our hyper-heuristic to further improve the quality of the solutions. The new algorithms are evaluated on a large set of standard benchmarks against six state-of-the-art algorithms from the literature, and they outperform all of the tested methods by a wide margin.
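
To make the quantity being minimized concrete, here is a minimal sketch of computing a matrix envelope and re-evaluating it after a symmetric row/column reordering. It uses the common row-by-row definition (distance from the leftmost nonzero entry in each row to the diagonal, summed over rows); the function names and the tiny example matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def envelope_size(A):
    """Envelope of a symmetric matrix A: for each row, the distance from the
    leftmost nonzero entry to the diagonal, summed over all rows."""
    A = np.asarray(A)
    total = 0
    for i in range(A.shape[0]):
        nz = np.flatnonzero(A[i, :i + 1])   # nonzeros at or to the left of the diagonal
        if nz.size:
            total += i - nz[0]
    return total

def envelope_after_reordering(A, perm):
    """Envelope after symmetrically permuting rows and columns by perm."""
    A = np.asarray(A)
    return envelope_size(A[np.ix_(perm, perm)])

# Tiny example: the reordering problem is to find the permutation that minimizes the envelope.
A = np.array([[4, 0, 1, 0],
              [0, 3, 0, 2],
              [1, 0, 5, 0],
              [0, 2, 0, 6]])
print(envelope_size(A))                               # 4 for the original ordering
print(envelope_after_reordering(A, [0, 2, 1, 3]))     # 2 after a better symmetric reordering
```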

IEEE Xplore Digital Library

Multiobjective Estimation of Distribution Algorithm Based on Joint Modeling of Objectives and Variables

This paper proposes a new multiobjective estimation of distribution algorithm (EDA) based on joint probabilistic modeling of objectives and variables. This EDA uses the multidimensional Bayesian network as its probabilistic model. In this way, it can capture the dependencies between objectives, between variables and objectives, as well as the dependencies between variables that other Bayesian network-based EDAs learn. This model leads to a problem decomposition that helps the proposed algorithm find better tradeoff solutions to the multiobjective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multiobjective problem. To apply the algorithm to many-objective problems, it incorporates four different ranking methods proposed in the literature for this purpose. The algorithm is first applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with a standard multiobjective evolutionary algorithm and another competitive multiobjective EDA. The experimental results show that, on several of these problems and for different objective space dimensions, the proposed algorithm performs significantly better, and on some others it achieves comparable results, when compared with the other two algorithms. The algorithm is then tested on the set of CEC09 problems, where the results show that multiobjective optimization based on joint model estimation is able to obtain considerably better fronts for some of the problems compared with the search based on conventional genetic operators in state-of-the-art multiobjective evolutionary algorithms.
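
As a rough illustration of joint modeling of variables and objectives, the sketch below fits a single multivariate Gaussian over concatenated (variable, objective) vectors of selected individuals and samples new decision vectors from the Gaussian conditioned on a target objective vector. The paper learns a multidimensional Bayesian network rather than a single Gaussian; the function name, the jitter terms, and the way the objective target is chosen here are assumptions made only for this example.

```python
import numpy as np

def joint_gaussian_eda_step(X, F, n_new, seed=0):
    """One illustrative EDA step with a joint model over variables and objectives.
    X: (n, d) decision vectors of the selected individuals;
    F: (n, m) their objective values (minimization assumed).
    A multivariate Gaussian stands in for the paper's multidimensional Bayesian network."""
    rng = np.random.default_rng(seed)
    Z = np.hstack([X, F])                                   # joint (variables, objectives) samples
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])

    d = X.shape[1]
    mu_x, mu_f = mu[:d], mu[d:]
    S_xx, S_xf, S_ff = cov[:d, :d], cov[:d, d:], cov[d:, d:]

    # Conditional Gaussian of x given an objective target f*:
    #   x | f* ~ N(mu_x + S_xf S_ff^{-1} (f* - mu_f),  S_xx - S_xf S_ff^{-1} S_fx)
    gain = S_xf @ np.linalg.inv(S_ff)
    cond_cov = S_xx - gain @ S_xf.T + 1e-9 * np.eye(d)

    new_X = []
    for _ in range(n_new):
        # Aim slightly "better" than a randomly chosen selected point (illustrative heuristic).
        f_target = F[rng.integers(len(F))] - 0.05 * np.abs(mu_f)
        cond_mu = mu_x + gain @ (f_target - mu_f)
        new_X.append(rng.multivariate_normal(cond_mu, cond_cov))
    return np.array(new_X)

# Toy usage: 30 selected points of a 4-variable, 2-objective problem.
rng = np.random.default_rng(1)
X = rng.random((30, 4))
F = np.column_stack([X.sum(axis=1), (1.0 - X).sum(axis=1)])
print(joint_gaussian_eda_step(X, F, n_new=5).shape)         # (5, 4)
```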

An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach

In the precursor paper, a many-objective optimization method (NSGA-III), based on the NSGA-II framework, was suggested and applied to a number of unconstrained test and practical problems with box constraints alone. In this paper, we extend NSGA-III to solve generic constrained many-objective optimization problems. In the process, we also suggest three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer. A previously suggested MOEA/D algorithm is also extended to solve constrained problems. Results using constrained NSGA-III and constrained MOEA/D show an edge of the former, particularly in solving problems with a large number of objectives. Furthermore, the NSGA-III algorithm is made adaptive so that it updates and includes new reference points on the fly. The resulting adaptive NSGA-III is shown to provide a denser representation of the Pareto-optimal front than the original NSGA-III with an identical computational effort. This paper and the original NSGA-III paper together suggest and amply test a viable evolutionary many-objective optimization algorithm for handling constrained and unconstrained problems. These studies should encourage researchers to use and pay further attention to evolutionary many-objective optimization.
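
A central ingredient of constrained NSGA-III-style methods is deciding when one solution should be preferred to another in the presence of constraints. The sketch below implements the widely used constraint-domination rule (a feasible solution beats an infeasible one, less total violation beats more, and ordinary Pareto dominance applies among feasible solutions); the niching and adaptive reference-point updates of the paper are not reproduced here, and the exact rules used there may differ in detail.

```python
def constraint_dominates(f_a, cv_a, f_b, cv_b):
    """Constraint-domination for minimization problems.
    f_*: objective vectors; cv_*: total constraint violation (0 if feasible)."""
    if cv_a == 0 and cv_b > 0:          # feasible beats infeasible
        return True
    if cv_a > 0 and cv_b > 0:           # both infeasible: smaller violation wins
        return cv_a < cv_b
    if cv_a > 0 and cv_b == 0:
        return False
    # Both feasible: ordinary Pareto dominance.
    not_worse = all(x <= y for x, y in zip(f_a, f_b))
    strictly_better = any(x < y for x, y in zip(f_a, f_b))
    return not_worse and strictly_better

# Example: an infeasible point never constraint-dominates a feasible one.
print(constraint_dominates([1.0, 2.0], 0.0, [0.5, 0.5], 0.3))  # True
print(constraint_dominates([0.5, 0.5], 0.3, [1.0, 2.0], 0.0))  # False
```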

Ant Colony Optimization for Mixed-Variable Optimization Problems

In this paper, we introduce ${\rm ACO}_{\bf MV}$: an ant colony optimization (ACO) algorithm that extends the ${\rm ACO}_{\mathbb{R}}$ algorithm for continuous optimization to tackle mixed-variable optimization problems. In ${\rm ACO}_{\bf MV}$, the decision variables of an optimization problem can be explicitly declared as continuous, ordinal, or categorical, which allows the algorithm to treat them adequately. ${\rm ACO}_{\bf MV}$ includes three solution generation mechanisms: a continuous optimization mechanism (${\rm ACO}_{\mathbb{R}}$), a continuous relaxation mechanism (${\rm ACO}_{\bf MV}$-o) for ordinal variables, and a categorical optimization mechanism (${\rm ACO}_{\bf MV}$-c) for categorical variables. Together, these mechanisms allow ${\rm ACO}_{\bf MV}$ to tackle mixed-variable optimization problems. We also define a novel procedure to generate artificial, mixed-variable benchmark functions, and we use it to automatically tune ${\rm ACO}_{\bf MV}$’s parameters. The tuned ${\rm ACO}_{\bf MV}$ is tested on various real-world continuous and mixed-variable engineering optimization problems. Comparisons with results from the literature demonstrate the effectiveness and robustness of ${\rm ACO}_{\bf MV}$ on mixed-variable optimization problems.
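
The sketch below illustrates the three kinds of variable handling described above with one simplified sampling routine per variable type: a Gaussian kernel around an archived value for continuous variables, a continuous relaxation of the level index (then rounding) for ordinal variables, and a frequency-weighted choice for categorical variables. The real ${\rm ACO}_{\bf MV}$ samples from a rank-weighted solution archive; the helper names, weights, and the toy archive here are assumptions for illustration only.

```python
import random

def sample_continuous(archive_values, sigma=0.1):
    """Continuous variable: Gaussian kernel around a value from the archive."""
    return random.gauss(random.choice(archive_values), sigma)

def sample_ordinal(levels, archive_values, sigma=0.7):
    """Ordinal variable: relax the level index to a continuous value, then round."""
    guide = levels.index(random.choice(archive_values))
    idx = round(random.gauss(guide, sigma))
    return levels[max(0, min(len(levels) - 1, idx))]

def sample_categorical(categories, archive_values):
    """Categorical variable: choice biased toward categories seen in the archive."""
    weights = [1 + archive_values.count(c) for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

# Toy archive of "good" solutions and one newly sampled mixed-variable candidate.
archive = {"thickness": [1.2, 1.4, 1.3],
           "gauge": ["M6", "M8", "M8"],
           "material": ["steel", "steel", "alu"]}
candidate = {
    "thickness": sample_continuous(archive["thickness"]),
    "gauge": sample_ordinal(["M4", "M6", "M8", "M10"], archive["gauge"]),
    "material": sample_categorical(["steel", "alu", "carbon"], archive["material"]),
}
print(candidate)
```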

Evolving Classifiers to Recognize the Movement Characteristics of Parkinson’s Disease Patients

Parkinson’s disease is a debilitating neurological condition that affects approximately 1 in 500 people and often leads to severe disability. To improve clinical care, better assessment tools are needed that increase the accuracy of differential diagnosis and disease monitoring. In this paper, we report how we have used evolutionary algorithms to induce classifiers capable of recognizing the movement characteristics of Parkinson’s disease patients. These diagnostically relevant patterns of movement are known to occur over multiple time scales. To capture this, we used two different classifier architectures: sliding-window genetic programming classifiers, which model over-represented local patterns that occur within time series data, and artificial biochemical networks, computational dynamical systems that respond to dynamical patterns occurring over longer time scales. Classifiers were trained and validated using movement recordings of 49 patients and 41 age-matched controls collected during a recent clinical study. By combining classifiers with diverse behaviors, we were able to construct classifier ensembles with diagnostic accuracies in the region of 95%, comparable to the accuracies achieved by expert clinicians. Further analysis indicated a number of features of diagnostic relevance, including the differential effect of handedness and the over-representation of certain patterns of acceleration.
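
To indicate how a sliding-window ensemble of this kind can be organized, the sketch below cuts a movement recording into overlapping windows and combines several classifiers by majority vote. The stand-in threshold classifiers, window sizes, and voting rule are assumptions; the paper's classifiers are evolved genetic programming models and artificial biochemical networks rather than the simple thresholds used here.

```python
import numpy as np

def sliding_windows(series, width, step):
    """Yield overlapping windows of a 1-D movement recording."""
    for start in range(0, len(series) - width + 1, step):
        yield series[start:start + width]

def majority_vote(classifiers, window):
    """Combine diverse classifiers by simple majority vote (an assumed combination rule)."""
    votes = [clf(window) for clf in classifiers]
    return int(sum(votes) > len(votes) / 2)

# Toy run on a synthetic acceleration trace with stand-in threshold classifiers.
rng = np.random.default_rng(1)
acceleration = rng.normal(size=1000)
clf_local = lambda w: int(np.std(w) > 1.0)          # local-pattern stand-in
clf_mean = lambda w: int(np.abs(w).mean() > 0.8)
clf_range = lambda w: int(np.ptp(w) > 4.0)          # longer-range stand-in
flags = [majority_vote([clf_local, clf_mean, clf_range], w)
         for w in sliding_windows(acceleration, width=128, step=64)]
print(f"windows flagged: {sum(flags)}/{len(flags)}")
```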

Quick Hypervolume

In this paper, we present a new algorithm for calculating exact hypervolumes. Given a set of $d$-dimensional points, it computes the hypervolume of the dominated space. Determining this value is an important subroutine of multiobjective evolutionary algorithms. We analyze the quick hypervolume (QHV) algorithm theoretically and experimentally. The theoretical results are a significant contribution to the current state of the art. Moreover, the experimental performance is also very competitive, compared with existing exact hypervolume algorithms.
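
For readers unfamiliar with the quantity, the following sketch computes the exact hypervolume for the two-dimensional minimization case with a simple sweep; QHV itself handles arbitrary dimension via a divide-and-conquer decomposition that is not shown here, so the function below is only meant to make the dominated-space measure concrete.

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume of the space dominated by a set of 2-D points,
    bounded by the reference point `ref` (minimization)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):        # sweep in increasing first objective
        if f2 < prev_f2:                 # dominated points contribute nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))   # 6.0
```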

IEEE Computational Intelligence Society Information

Reusing Building Blocks of Extracted Knowledge to Solve Complex, Large-Scale Boolean Problems

Evolutionary computation techniques have had limited capabilities in solving large-scale problems, due to the large search space demanding large memory and much longer training times. In the work presented here, a genetic programming-like rich encoding scheme has been constructed to identify building blocks of knowledge in a learning classifier system. The fitter building blocks from the learning system trained against smaller problems have been utilized in a higher-complexity problem in the domain to achieve scalable learning. The proposed system has been examined and evaluated on four different Boolean problem domains: 1) multiplexer, 2) majority-on, 3) carry, and 4) even-parity problems. The major contribution of this paper is to successfully extract useful building blocks from smaller problems and reuse them to learn more complex, large-scale problems in the domain; e.g., the 135-bit multiplexer problem, where the number of possible instances is $2^{135} \approx 4 \times 10^{40}$, is solved by reusing the extracted knowledge from the learned lower-level solutions in the domain. Autonomous scaling is, for the first time, shown to be possible in learning classifier systems. It improves effectiveness and reduces the number of training instances required in large problems, but requires more time due to its sequential build-up of knowledge.
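
For reference, the Boolean multiplexer used as a benchmark works as follows: the first $k$ bits form an address that selects one of the $2^k$ data bits, and the output is the value of the selected data bit, so the 135-bit problem has $k = 7$ address bits and 128 data bits. A minimal sketch (the function name is illustrative):

```python
def multiplexer(bits):
    """Boolean multiplexer: the first k bits form an address that selects one
    of the 2**k data bits; the output is that data bit's value."""
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "length must equal k + 2**k"
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

# 6-bit example (k = 2): address '10' selects data bit 2 (zero-indexed).
print(multiplexer([1, 0, 0, 0, 1, 0]))   # -> 1
```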
