Indicator-Based Evolutionary Algorithm for Solving Constrained Multiobjective Optimization Problems

To prevent the population from getting stuck in local areas and thereby missing fragments of the constrained Pareto front when dealing with constrained multiobjective optimization problems (CMOPs), it is important to guide the population to evenly explore the promising areas, i.e., the areas that are not dominated by any examined feasible solution. To this end, we first introduce a cost value-based distance in the objective space, and then use this distance together with the constraints to define an indicator that evaluates the contribution of each individual to exploring the promising areas. Theoretical studies show that the proposed indicator can effectively guide the population to focus on exploring the promising areas without crowding into local areas. Accordingly, we propose a new constraint handling technique (CHT) based on this indicator. To further improve the diversity of the population in the promising areas, the indicator-based CHT divides the promising areas into multiple subregions and then gives priority to removing the individuals with the worst fitness values in the densest subregions. We embed the indicator-based CHT in an evolutionary algorithm and propose an indicator-based constrained multiobjective algorithm for solving CMOPs. Numerical experiments on several benchmark suites show the effectiveness of the proposed algorithm. Compared with six state-of-the-art constrained evolutionary multiobjective optimization algorithms, the proposed algorithm performs better on different types of CMOPs, especially on problems where individuals easily become trapped in local infeasible areas that dominate fragments of the constrained Pareto front.
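The abstract does not give the exact form of the cost value-based distance, so the following sketch only illustrates the general selection scheme it describes: an indicator combining objective-space distance with constraint violation, followed by truncation that removes the worst-indicator individual from the densest subregion. The indicator formula, the angular subregion partition (shown for the two-objective case), and all function names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def constraint_violation(G):
    """Overall violation per individual; G[i, j] = g_j(x_i), with g_j <= 0 feasible."""
    return np.maximum(G, 0.0).sum(axis=1)

def indicator(F, CV):
    """Illustrative indicator (NOT the paper's formula): objective-space distance
    to the nearest neighbor, discounted by constraint violation, so that isolated,
    nearly feasible individuals contribute most to exploring the promising areas."""
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1) / (1.0 + CV)

def indicator_based_truncation(F, G, mu, n_subregions=10):
    """Keep mu individuals: partition the (2-D) objective space into angular
    subregions, then repeatedly delete the worst-indicator individual from the
    densest subregion, preserving diversity across the promising areas."""
    fit = indicator(F, constraint_violation(G))
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)   # normalize objectives
    angle = np.arctan2(Fn[:, 1], Fn[:, 0])                   # two-objective case
    region = np.minimum((angle / (np.pi / 2) * n_subregions).astype(int),
                        n_subregions - 1)
    alive = list(range(len(F)))
    while len(alive) > mu:
        counts = {}
        for i in alive:                                      # density per subregion
            counts[region[i]] = counts.get(region[i], 0) + 1
        densest = max(counts, key=counts.get)
        worst = min((i for i in alive if region[i] == densest), key=lambda i: fit[i])
        alive.remove(worst)                                  # drop worst in densest region
    return alive
```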

Evolutionary Machine Learning With Minions: A Case Study in Feature Selection

Many decisions in a machine learning (ML) pipeline involve nondifferentiable and discontinuous objectives and search spaces. Examples include feature selection, model selection, and hyperparameter tuning, where candidate solutions in an outer optimization loop must be evaluated via a learning subsystem. Evolutionary algorithms (EAs) are prominent gradient-free methods for such tasks. However, EAs pose steep computational challenges, especially when dealing with large-instance datasets. In contrast to prior works that often fall back on parallel computing hardware to resolve this big-data problem of EAs, in this article we propose a novel algorithm-centric solution based on evolutionary multitasking. Our approach creates a band of minions, i.e., small-data proxies of the main target task, constructed by subsampling a fraction of the large dataset. We then combine the minions with the main task in a single multitask optimization framework, boosting evolutionary search by using small data to quickly optimize for the large dataset. Our key algorithmic contribution in this setting is to allocate computational resources to each of the tasks in a principled manner. The article considers wrapper-based feature selection as an illustrative case study of the broader idea of using multitasking to speed up outer-loop evolutionary configuration of any ML subsystem. The experiments reveal that multitasking can indeed speed up baseline EAs, by more than 40% on some datasets.
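As a rough illustration of the minion idea, the sketch below builds small-data proxy tasks by subsampling instances, shares one population of feature masks across tasks, and evaluates the expensive full-data task only occasionally, a crude stand-in for the paper's principled resource allocation. The KNN wrapper fitness, the one-in-three evaluation schedule, and all names are assumptions for illustration only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Wrapper-based fitness: CV accuracy of a KNN classifier on selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5),
                           X[:, mask], y, cv=3).mean()

def make_minions(X, y, n_minions=3, frac=0.1):
    """Minions: small-data proxies built by subsampling a fraction of the instances."""
    tasks = [(X, y)]                                   # task 0 is the main (full) task
    for _ in range(n_minions):
        idx = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
        tasks.append((X[idx], y[idx]))
    return tasks

def multitask_fs(X, y, gens=20, pop=20):
    tasks = make_minions(X, y)
    d = X.shape[1]
    P = rng.random((pop, d)) < 0.5                     # shared population of feature masks
    for g in range(gens):
        # Illustrative resource allocation: cheap minions are searched every
        # generation, the expensive main task only every third generation.
        active = range(len(tasks)) if g % 3 == 0 else range(1, len(tasks))
        for t in active:
            Xt, yt = tasks[t]
            scores = np.array([fitness(m, Xt, yt) for m in P])
            parents = P[scores.argsort()[-pop // 2:]]  # truncation selection
            children = parents.copy()
            children ^= rng.random(children.shape) < 1.0 / d   # bit-flip mutation
            P = np.vstack([parents, children])         # masks migrate across tasks implicitly
    return max(P, key=lambda m: fitness(m, X, y))      # best mask on the full task

# Example (illustrative): from sklearn.datasets import load_digits
# X, y = load_digits(return_X_y=True); mask = multitask_fs(X, y)
```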

Transfer Learning-Based Parallel Evolutionary Algorithm Framework for Bilevel Optimization

Evolutionary algorithms (EAs) have been recognized as a promising approach for bilevel optimization. However, the population-based nature of EAs largely limits their efficiency and effectiveness because of the nested structure of the two levels of optimization problems. In this article, we propose a transfer learning-based parallel EA (TLEA) framework for bilevel optimization. In this framework, the task of optimizing the set of lower-level problems parameterized by the upper-level variables is carried out in parallel. Meanwhile, a transfer learning strategy is developed to improve the effectiveness of each lower-level search (LLS) process. In practice, we implement two versions of the TLEA: the first uses the covariance matrix adaptation evolution strategy (CMA-ES) and the second uses differential evolution (DE) as the evolutionary operator in lower-level optimization. Experimental studies on two sets of widely used bilevel optimization benchmark problems are conducted, and the performance of the two TLEA implementations is compared with that of four well-established evolutionary bilevel optimization algorithms to verify the effectiveness and efficiency of the proposed framework.
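The following sketch illustrates the two ingredients the abstract names: solving the lower-level problems for a population of upper-level candidates in parallel, and transferring knowledge between LLS processes. Here the transfer is a simple nearest-neighbor warm start from an archive of previously solved problems, a stand-in for the paper's learned transfer strategy; the toy bilevel objective, the bounds, and the use of SciPy's differential_evolution (whose x0 warm-start argument requires SciPy >= 1.7) are all assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import differential_evolution

# Toy bilevel problem (not from the paper): the lower level minimizes
# f_lower(xl; xu) for a fixed upper-level vector xu.
def f_lower(xl, xu):
    return np.sum((xl - xu) ** 2)

def f_upper(xu, xl):
    return np.sum(xu ** 2) + np.sum(xl ** 2)

def solve_lower(xu, x0=None):
    """One lower-level search (LLS), optionally warm-started from a
    transferred solution x0 of a similar, previously solved xu."""
    bounds = [(-5.0, 5.0)] * len(xu)
    res = differential_evolution(f_lower, bounds, args=(xu,), x0=x0,
                                 maxiter=50, tol=1e-6, seed=0)
    return res.x

def evaluate_upper_population(XU, archive):
    """Solve all lower-level problems in parallel; transfer the archived
    lower-level optimum of the nearest previously seen xu as a warm start."""
    starts = []
    for xu in XU:
        if archive:
            keys = np.array([k for k, _ in archive])
            nearest = np.argmin(np.linalg.norm(keys - xu, axis=1))
            starts.append(archive[nearest][1])        # transferred solution
        else:
            starts.append(None)                       # cold start
    with ProcessPoolExecutor() as ex:                 # parallel LLS processes
        XL = list(ex.map(solve_lower, XU, starts))
    archive.extend(zip(XU, XL))
    return np.array([f_upper(xu, xl) for xu, xl in zip(XU, XL)]), XL

if __name__ == "__main__":  # guard required for ProcessPoolExecutor on some platforms
    XU = np.random.default_rng(0).uniform(-5, 5, size=(8, 2))
    fu, XL = evaluate_upper_population(XU, archive=[])
    print(fu)
```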