Optimal-Margin Evolutionary Classifier

We introduce a novel approach for discriminative classification using evolutionary algorithms. We first propose an algorithm to optimize the total loss value using a modified 0–1 loss function in a 1-D space for classification. We then extend this algorithm to multidimensional classification using an evolutionary algorithm. The proposed evolutionary algorithm aims to find a hyperplane that best classifies instances while minimizing the classification risk. We test particle swarm optimization, evolution strategy (ES), and covariance matrix adaptation ES for optimization purposes. After parameter selection, we compare our results with well-established and state-of-the-art classification algorithms, for both binary and multiclass classification, on 23 benchmark classification problems, with and without noise and outliers. We also compare these methods on a seizure detection task for 12 epileptic patients. Results show that the performance of the proposed algorithm is significantly better (Wilcoxon test) than that of all other methods on almost all problems tested. We also show that the proposed algorithm is significantly more robust to noise and outliers than the other methods. The running time of the algorithm is within a reasonable range for solving real-world classification problems.

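
The 1-D building block can be illustrated with a brute-force sketch: scan candidate thresholds and keep the rule with the lowest total 0–1 loss. This is a minimal illustration of the idea, not the paper's algorithm; the function names and the midpoint candidate rule are assumptions.

```python
import numpy as np

def zero_one_loss_1d(x, y, threshold, sign):
    """Total 0-1 loss of the rule: predict +1 when sign * (x - threshold) > 0."""
    pred = np.where(sign * (x - threshold) > 0, 1, -1)
    return int(np.sum(pred != y))

def best_threshold_1d(x, y):
    """Scan midpoints between sorted points and keep the lowest-loss rule."""
    xs = np.sort(x)
    candidates = (xs[:-1] + xs[1:]) / 2.0
    best = (np.inf, None, None)
    for t in candidates:
        for s in (1, -1):          # try both orientations of the rule
            loss = zero_one_loss_1d(x, y, t, s)
            if loss < best[0]:
                best = (loss, t, s)
    return best  # (loss, threshold, sign)
```

In higher dimensions this exhaustive scan is no longer feasible, which is where the evolutionary search over hyperplane parameters comes in.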

A New Two-Stage Evolutionary Algorithm for Many-Objective Optimization

Convergence and diversity are handled interdependently during the evolutionary process by most existing many-objective evolutionary algorithms (MaOEAs). In such a design, degraded performance in one deteriorates the other, and only solutions with both are able to improve the performance of MaOEAs. Unfortunately, it is not easy to constantly maintain a population of solutions with both convergence and diversity. In this paper, an MaOEA based on two independent stages is proposed for effectively solving many-objective optimization problems (MaOPs), where convergence and diversity are addressed in two independent and sequential stages. To achieve this, we first propose a nondominated dynamic weight aggregation method using a genetic algorithm, which is capable of finding the Pareto-optimal solutions for MaOPs with concave, convex, linear, and even mixed Pareto front shapes; these solutions are then employed to learn the Pareto-optimal subspace for convergence. Afterward, diversity is addressed by solving a set of single-objective optimization problems with reference lines within the learned Pareto-optimal subspace. To evaluate the performance of the proposed algorithm, a series of experiments is conducted against six state-of-the-art MaOEAs on benchmark test problems. The results show the significantly improved performance of the proposed algorithm over its peer competitors. In addition, the proposed algorithm can focus directly on a chosen part of the objective space if the preference area is known beforehand. Furthermore, the proposed algorithm can also be used to effectively find the nadir points.
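
As a rough illustration of the ingredients of the first stage, the sketch below combines weight-aggregation scalarization with time-varying weights and a nondominance filter over an archive of visited points, on a toy bi-objective problem. Everything here (the test problem, the sinusoidal weight schedule, the crude truncation-plus-mutation search) is an assumption made for illustration; the paper's nondominated dynamic weight aggregation uses a genetic algorithm and goes on to learn a Pareto-optimal subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Toy bi-objective problem (minimize both): f1 = x^2, f2 = (x - 2)^2."""
    return np.array([x ** 2, (x - 2.0) ** 2])

def nondominated(points):
    """Boolean mask of the nondominated rows (minimization)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if keep[i]:
            dom = np.all(pts[i] <= pts, axis=1) & np.any(pts[i] < pts, axis=1)
            keep[dom] = False
    return keep

def dynamic_weight_aggregation(generations=120, pop=20):
    """Archive every visited point while the scalarizing weight sweeps over
    time, then return the nondominated members of the archive."""
    x = rng.uniform(-1.0, 3.0, pop)                # 1-D decision variables
    archive = []
    for t in range(generations):
        w1 = abs(np.sin(2.0 * np.pi * t / 100.0))  # weight changes each generation
        objs = [objectives(xi) for xi in x]
        archive.extend(objs)
        fitness = np.array([w1 * f[0] + (1.0 - w1) * f[1] for f in objs])
        parents = x[np.argsort(fitness)[: pop // 2]]   # keep the best half
        x = np.concatenate([parents, parents + rng.normal(0.0, 0.1, len(parents))])
    arch = np.array(archive)
    return arch[nondominated(arch)]
```

Because the weight sweeps back and forth, the scalarized optimum traverses the front, and the archive accumulates an approximation of it even though each generation optimizes a single weighted sum.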

Evolutionary Multitasking Sparse Reconstruction: Framework and Case Study

Real-world applications typically have multiple sparse reconstruction tasks to be optimized. In order to exploit the similar sparsity patterns between different tasks, this paper establishes an evolutionary multitasking framework to simultaneously optimize multiple sparse reconstruction tasks using a single population. In the proposed method, the evolutionary algorithm searches for the locations of the nonzero components or rows instead of searching for the sparse vector or matrix directly. Then, within-task and between-task genetic transfer operators are employed to reinforce the exchange of genetic material belonging to the same or different tasks. The proposed method can solve multiple measurement vector problems efficiently because the length of the decision vector is independent of the number of measurement vectors. Finally, a case study on hyperspectral image unmixing is investigated in an evolutionary multitasking setting, where it is natural to treat the sparse unmixing problem in each homogeneous region as a task. Experiments on signal reconstruction and hyperspectral image unmixing demonstrate the effectiveness of the proposed multitasking framework for sparse reconstruction.
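
The support-based encoding can be sketched as follows: a candidate solution lists the locations of the nonzero entries, and the actual coefficient values are recovered by least squares on the selected columns of the measurement matrix. The function names and the least-squares decoding step are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def decode(support, A, y):
    """Turn a list of nonzero locations into a full sparse vector by
    least-squares fitting on the selected columns of A."""
    sub = A[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def residual(support, A, y):
    """Fitness of a candidate support: reconstruction error ||A x - y||."""
    return np.linalg.norm(A @ decode(support, A, y) - y)
```

Note that the search space is the set of index lists, so its size depends only on the signal dimension and sparsity level, which is consistent with the decision-vector length being independent of the number of measurement vectors.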

Evolutionary Multitasking With Dynamic Resource Allocating Strategy

Evolutionary multitasking is a recently proposed paradigm for simultaneously solving multiple tasks using a single population. Most existing evolutionary multitasking algorithms treat all tasks equally and assign the same amount of resources to each task. However, when resources are limited, it is difficult for some tasks to converge to acceptable solutions. This paper investigates resource allocation in the multitasking environment in order to utilize the limited resources efficiently. We design a novel multitask evolutionary algorithm with an online dynamic resource allocation strategy. Specifically, the proposed strategy allocates resources to each task adaptively according to the task's requirements. We also design an adaptive method to control the resources invested in cross-domain search. The proposed algorithm is thus able to allocate computational resources dynamically according to the computational complexities of the tasks. The experimental results demonstrate the superiority of the proposed method in comparison with state-of-the-art algorithms on multitask optimization benchmark problems.
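
A minimal sketch of one way such adaptive allocation can work, assuming resources are counted in function evaluations and each task reports its recent fitness improvement; the proportional rule, the floor, and the leftover handling are illustrative choices, not the paper's strategy.

```python
import numpy as np

def allocate(improvements, budget, floor=1):
    """Split a budget of function evaluations across tasks in proportion to
    each task's recent improvement; a small floor keeps stagnant tasks alive."""
    imp = np.maximum(np.asarray(improvements, dtype=float), 0.0)
    if imp.sum() == 0.0:
        share = np.full(len(imp), 1.0 / len(imp))  # no signal: split evenly
    else:
        share = imp / imp.sum()
    alloc = np.maximum(np.floor(share * budget).astype(int), floor)
    # give (or take) the rounding leftover to/from the most promising task
    alloc[np.argmax(share)] += budget - alloc.sum()
    return alloc
```

Calling this once per generation lets fast-improving tasks absorb more of the budget while near-converged tasks fall back to the floor.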

Distance-Based Subset Selection for Benchmarking in Evolutionary Multi/Many-Objective Optimization

A number of real-world problems involve the extremization of multiple conflicting objectives, referred to as multiobjective optimization problems. Multiobjective evolutionary algorithms (MOEAs) have been widely adopted to obtain Pareto front (PF) approximations for such problems. An indispensable step in the development and evaluation of MOEAs is benchmarking, which involves comparisons with peer algorithms using performance metrics such as hypervolume (HV) and inverted generational distance (IGD). However, the de facto practice is to evaluate these metrics on an algorithm's final population, even though a better PF approximation may exist within the archive of all evaluated solutions. In a recent study, a distance-based subset selection (DSS) method was discussed for selecting a prespecified number of solutions from an archive for benchmarking. This letter contributes in this direction in two ways. The first is to develop a theoretical understanding of DSS and reveal some of its interesting and desirable properties, including conditional equivalence to optimal HV/IGD subset selection, inclusion of PF extremities, and invariance to the convexity/concavity and orientation of the PF. The second is to present numerical experiments on problems with regular and irregular PFs with up to ten objectives, comparing the approach with other selection techniques to demonstrate its potential benefits. The results clearly indicate the importance of considering the archive of solutions and appropriate selection mechanisms, in particular for problems with irregular PFs, to avoid misjudgments about relative performance. With increasing emphasis on tackling such problems in the field, we believe that this observation and analysis are timely and significant, not only for benchmarking but also for subsequent improvements in decision-making and algorithm design.
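
Greedy max-min (farthest-point) selection is a common way to realize distance-based subset selection: start from an extreme point and repeatedly add the archive member farthest from the current subset. The starting rule and the Euclidean metric below are assumptions; the DSS method analyzed in the letter may differ in detail.

```python
import numpy as np

def distance_based_subset(points, k):
    """Greedily pick k points: start from an extreme point, then repeatedly
    add the point whose minimum distance to the chosen set is largest."""
    pts = np.asarray(points, dtype=float)
    chosen = [int(np.argmax(pts[:, 0]))]          # extreme start (assumption)
    dist = np.linalg.norm(pts - pts[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))                # farthest from chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen
```

On points spread along a line, for example, the procedure picks the two endpoints and then the midpoint, which matches the intuition that the selected subset should cover the extremities and spread evenly in between.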

GPEM 20(3) is now available

The third issue of Volume 20 of Genetic Programming and Evolvable Machines is now available for download.

It contains:

A genetic programming framework in the automatic design of combination models for salient object detection
by Marco A. Contreras-Cruz, Diana E. Martinez-Rodriguez, Uriel H. Hernandez-Belmonte & Victor Ayala-Ramirez

Stochastic synthesis of recursive functions made easy with bananas, lenses, envelopes and barbed wire
by Jerry Swan, Krzysztof Krawiec & Zoltan A Kocsis

Local search in speciation-based bloat control for genetic programming
by Perla Juárez-Smith, Leonardo Trujillo, Mario García-Valdez, Francisco Fernández de Vega & Francisco Chávez

GP-based methods for domain adaptation: using brain decoding across subjects as a test-case
by Roberto Santana, Luis Marti & Mengjie Zhang

Evolving autoencoding structures through genetic programming
by Lino Rodriguez-Coayahuitl, Alicia Morales-Reyes & Hugo Jair Escalante
