Hypervolume-Optimal μ-Distributions on Line/Plane-Based Pareto Fronts in Three Dimensions

Hypervolume is widely used in the evolutionary multiobjective optimization (EMO) field to evaluate the quality of a solution set. For a solution set with $\mu$ solutions on a Pareto front, a larger hypervolume means a better solution set. Investigating the distribution of the solution set with the largest hypervolume, the so-called hypervolume-optimal $\mu$-distribution, is an important topic in EMO. Theoretical results have shown that the $\mu$ solutions are uniformly distributed on a linear Pareto front in two dimensions. However, the $\mu$ solutions are not always uniformly distributed on a single-line Pareto front in three dimensions; they are uniform only when the single-line Pareto front has one constant objective. In this article, we further investigate the hypervolume-optimal $\mu$-distribution in three dimensions. We consider line-based and plane-based Pareto fronts. For the line-based Pareto fronts, we extend the single-line Pareto front to two-line and three-line Pareto fronts, where each line has one constant objective. For the plane-based Pareto fronts, we consider linear triangular and inverted triangular Pareto fronts. First, we show that the $\mu$ solutions are not always uniformly distributed on the line-based Pareto fronts; the uniformity depends on how the lines are combined. Then, we show that a uniform solution set on the plane-based Pareto front is not always optimal for hypervolume maximization; it is locally optimal with respect to a $(\mu+1)$-selection scheme. Our results can help researchers in the community to better understand and utilize the hypervolume indicator.
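For readers less familiar with the indicator: under minimization, the hypervolume of a solution set is the volume of the objective-space region dominated by the set and bounded by a reference point. As a concrete illustration (our own sketch, not code from the article), the following Python functions compute the exact 3-D hypervolume by sweeping slabs along the third objective; the minimization convention and the user-supplied reference point are assumptions of this sketch.

```python
def nondominated_2d(points):
    """Keep the mutually nondominated points of a 2-D set (minimization)."""
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(points)):        # ascending f1, ties broken by f2
        if f2 < best_f2:                      # strictly improves f2 => nondominated
            front.append((f1, f2))
            best_f2 = f2
    return front

def hv2d(front, ref):
    """Exact 2-D hypervolume of a nondominated front (sorted ascending in f1)."""
    hv = 0.0
    for i, (f1, f2) in enumerate(front):
        next_f1 = front[i + 1][0] if i + 1 < len(front) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)  # rectangle owned by this point
    return hv

def hv3d(points, ref):
    """Exact 3-D hypervolume by slicing along f3 (minimization)."""
    pts = sorted(points, key=lambda p: p[2])  # sweep in ascending f3
    hv = 0.0
    for i, (_, _, f3) in enumerate(pts):
        depth = (pts[i + 1][2] if i + 1 < len(pts) else ref[2]) - f3
        # 2-D hypervolume of everything attained at or below this f3 level
        slab = nondominated_2d([(p[0], p[1]) for p in pts[:i + 1]])
        hv += depth * hv2d(slab, ref[:2])
    return hv

# Two mutually nondominated points; the union of their dominated boxes is 5.
print(hv3d([(1, 2, 2), (2, 1, 1)], ref=(3, 3, 3)))   # 5.0
```

Maximizing `hv3d` over the placement of a fixed number $\mu$ of points on a given front is exactly the hypervolume-optimal $\mu$-distribution problem studied in the article.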

Indicator-Based Evolutionary Algorithm for Solving Constrained Multiobjective Optimization Problems

To prevent the population from getting stuck in local areas and thereby missing fragments of the constrained Pareto front when dealing with constrained multiobjective optimization problems (CMOPs), it is important to guide the population to evenly explore the promising areas that are not dominated by any examined feasible solution. To this end, we first introduce a cost value-based distance into the objective space, and then use this distance together with the constraints to define an indicator that evaluates the contribution of each individual to exploring the promising areas. Theoretical studies show that the proposed indicator can effectively guide the population to focus on exploring the promising areas without crowding in local areas. Accordingly, we propose a new constraint handling technique (CHT) based on this indicator. To further improve the diversity of the population in the promising areas, the proposed indicator-based CHT divides the promising areas into multiple subregions and then gives priority to removing the individuals with the worst fitness values in the densest subregions. We embed the indicator-based CHT in an evolutionary algorithm and propose an indicator-based constrained multiobjective algorithm for solving CMOPs. Numerical experiments on several benchmark suites show the effectiveness of the proposed algorithm. Compared with six state-of-the-art constrained evolutionary multiobjective optimization algorithms, the proposed algorithm performs better on different types of CMOPs, especially on problems where individuals easily become trapped in local infeasible areas that dominate fragments of the constrained Pareto front.
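The subregion-based truncation described above lends itself to a compact sketch. The Python snippet below is a hedged illustration, not the authors' procedure: `fitness` and `region_of` are hypothetical stand-ins for the article's indicator and subregion assignment, and larger indicator values are assumed to be better.

```python
from collections import defaultdict

def truncate(pop, fitness, region_of, mu):
    """Remove individuals until mu remain: at each step, find the densest
    subregion and drop its worst-fitness member (larger fitness = better).
    fitness and region_of are hypothetical stand-ins for the article's
    indicator and subregion assignment."""
    pop = list(pop)
    while len(pop) > mu:
        buckets = defaultdict(list)
        for ind in pop:
            buckets[region_of(ind)].append(ind)
        densest = max(buckets.values(), key=len)   # most crowded subregion
        pop.remove(min(densest, key=fitness))      # worst indicator value there
    return pop

# Toy usage: individuals are 2-D objective tuples, subregions are f1 bands.
pop = [(0.1, 0.9), (0.15, 0.85), (0.5, 0.5), (0.9, 0.1)]
survivors = truncate(pop, fitness=lambda p: -sum(p),
                     region_of=lambda p: round(p[0], 1), mu=3)
```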

Table of Contents

Presents the table of contents for this issue of this publication.

Evolutionary Machine Learning With Minions: A Case Study in Feature Selection

Many decisions in a machine learning (ML) pipeline involve nondifferentiable and discontinuous objectives and search spaces. Examples include feature selection, model selection, and hyperparameter tuning, where candidate solutions in an outer optimization loop must be evaluated via a learning subsystem. Evolutionary algorithms (EAs) are prominent gradient-free methods for handling such tasks. However, EAs are known to pose steep computational challenges, especially when dealing with large-instance datasets. As opposed to prior works that often fall back on parallel computing hardware to resolve this big-data problem of EAs, in this article we propose a novel algorithm-centric solution based on evolutionary multitasking. Our approach involves the creation of a band of minions, i.e., small data proxies of the main target task, constructed by subsampling a fraction of the large dataset. We then combine the minions with the main task in a single multitask optimization framework, boosting evolutionary search by using the small data to quickly optimize for the large dataset. Our key algorithmic contribution in this setting is to allocate computational resources to each of the tasks in a principled manner. The article considers wrapper-based feature selection as an illustrative case study of the broader idea of using multitasking to speed up outer-loop evolutionary configuration of any ML subsystem. The experiments reveal that multitasking can indeed speed up baseline EAs, by more than 40% on some datasets.
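The minion idea is easy to prototype: subsample rows of the full dataset to build cheap proxy tasks, and use a wrapper fitness on each. The Python sketch below (using scikit-learn) shows only this construction and a plain wrapper fitness; the article's multitask optimizer and its resource-allocation rule are not reproduced, and all names and parameters here are our own illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def make_minions(X, y, n_minions=4, fraction=0.1, seed=0):
    """Build 'minions': small data proxies of the full task, obtained by
    subsampling a fraction of the rows (illustrative parameter choices)."""
    rng = np.random.default_rng(seed)
    size = max(1, int(fraction * len(X)))
    minions = []
    for _ in range(n_minions):
        idx = rng.choice(len(X), size=size, replace=False)
        minions.append((X[idx], y[idx]))
    return minions

def wrapper_fitness(mask, X, y):
    """Wrapper-based feature-selection fitness: cross-validated accuracy
    of a simple classifier trained on the selected feature columns."""
    if not mask.any():                       # empty selections score zero
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
minions = make_minions(X, y)
mask = np.zeros(30, dtype=bool)
mask[:10] = True                             # an example candidate feature subset
print(wrapper_fitness(mask, *minions[0]))    # cheap proxy evaluation
print(wrapper_fitness(mask, X, y))           # expensive full-data evaluation
```

In a multitask setup along the lines the abstract describes, most candidate evaluations would be routed to the minions, with the full-data task receiving a principled share of the evaluation budget.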

IEEE Access

Prospective authors are requested to submit new, unpublished manuscripts for inclusion in the upcoming event described in this call for papers.
