Evolutionary Search With Multiview Prediction for Dynamic Multiobjective Optimization

Dynamic multiobjective optimization problem (DMOP) denotes a multiobjective optimization problem that varies over time. As the changes in a DMOP may follow patterns that are predictable, a number of research efforts have been made to develop evolutionary search with prediction approaches that estimate how the problem changes. A common practice of existing prediction approaches is to predict the change of Pareto-optimal solutions (POS) based on the historical solutions obtained in the decision space. However, the change of a DMOP may occur in both the decision and objective spaces, so prediction in the decision space alone may fail to properly estimate the change. Taking this cue, in this article we propose an evolutionary search with multiview prediction for solving DMOPs. In contrast to existing prediction methods, the proposed approach conducts prediction from the views of both the decision and objective spaces. To estimate dynamic changes in a DMOP, a kernelized autoencoding model is derived to perform the multiview prediction in a reproducing kernel Hilbert space (RKHS), where it admits a closed-form solution. To examine the performance of the proposed method, comprehensive empirical studies on commonly used DMOP benchmarks, as well as a real-world case study on a movie recommendation problem, are presented. The experimental results verify the efficacy of the proposed method for solving both benchmark and real-world DMOPs.
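
The decision-space view of such a prediction can be illustrated with a minimal linear sketch: given solutions recorded before and after a change, a ridge-regularized closed-form map estimates where they move next. This is an illustration only; the function name and the drift pattern below are hypothetical, and the article's kernelized autoencoding model operates in an RKHS rather than in this plain linear space.

```python
import numpy as np

def fit_prediction_map(X_prev, X_next, lam=1e-3):
    """Closed-form ridge solution M = X_next X_prev^T (X_prev X_prev^T + lam*I)^{-1}.
    Columns of X_prev / X_next are solutions before / after a change."""
    d = X_prev.shape[0]
    A = X_prev @ X_prev.T + lam * np.eye(d)
    return X_next @ X_prev.T @ np.linalg.inv(A)

rng = np.random.default_rng(0)
X_prev = rng.random((2, 50))        # historical solutions (toy data)
X_next = 1.1 * X_prev               # hypothetical linear drift pattern
M = fit_prediction_map(X_prev, X_next)
X_pred = M @ X_prev                 # predicted positions after the change
```

A kernelized variant would replace the inner products above with kernel evaluations, which is what makes a closed-form solution in an RKHS possible.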

Memristor Parallel Computing for a Matrix-Friendly Genetic Algorithm

Matrix operations are easy to parallelize in hardware, and a memristor network can realize a parallel matrix computing model with in-memory computing. This article proposes a matrix-friendly genetic algorithm (MGA), in which the population is represented by a matrix and the evolution of the population is realized by matrix operations. When maximizing a binary function, MGA converges better and faster than a baseline genetic algorithm (GA). In addition, MGA is more efficient because of the parallelism of its matrix operations: it runs 2.5 times faster than the baseline GA when using the NumPy library. Considering the advantages of the memristor in matrix operations, memristor circuits are designed for the deployment of MGA. This deployment realizes the parallelization and in-memory computing of MGA, with the memristor serving as both memory and computing unit. To verify the effectiveness of this deployment, a feature selection experiment with logistic regression (LR) on the Sonar dataset is conducted: LR with MGA-based feature selection uses 46 fewer features and achieves 11.9% higher accuracy.
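
The core idea of representing the population as a matrix and realizing one generation entirely through matrix operations can be sketched in NumPy. The operator choices below (tournament selection, uniform crossover, XOR mutation) are our illustrative assumptions, not the article's exact MGA design.

```python
import numpy as np

rng = np.random.default_rng(1)

def mga_step(P, fitness, pm=0.01):
    """One generation in which selection, crossover, and mutation are all
    expressed as matrix operations on the population matrix P (n x L)."""
    n, L = P.shape
    f = fitness(P)
    # tournament selection as index arithmetic over random pairs
    a, b = rng.integers(0, n, n), rng.integers(0, n, n)
    parents = np.where((f[a] >= f[b])[:, None], P[a], P[b])
    # uniform crossover with a random 0/1 mask matrix
    mask = rng.integers(0, 2, (n, L))
    mates = parents[rng.permutation(n)]
    children = mask * parents + (1 - mask) * mates
    # mutation as XOR with a sparse random bit matrix
    children ^= (rng.random((n, L)) < pm).astype(children.dtype)
    return children

fitness = lambda P: P.sum(axis=1)   # onemax: count the ones in each row
P = rng.integers(0, 2, (30, 20))
best = 0
for _ in range(60):
    P = mga_step(P, fitness)
    best = max(best, int(fitness(P).max()))
```

Because every step is a dense matrix expression, the same generation maps naturally onto hardware that multiplies matrices in place, which is the property the memristor deployment exploits.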

IEEE Computational Intelligence Society Information

Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization—Part I

Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, or combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird's-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part I of the series covers two major algorithmic approaches to large-scale global optimization: 1) problem decomposition and 2) memetic algorithms. Part II of the series covers a range of other algorithmic approaches to large-scale global optimization, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research, identifying several potential areas for future research.

A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization—Part II

This article is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely, decomposition methods and hybridization methods such as memetic algorithms and local search. In this part, we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas in relation to large-scale global optimization, such as multiobjective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The article also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.

Multisource Heterogeneous User-Generated Contents-Driven Interactive Estimation of Distribution Algorithms for Personalized Search

Personalized search is essentially a complex qualitative optimization problem, and evolutionary algorithms (EAs) have been extended into interactive EAs to solve it. However, the multisource user-generated contents (UGCs) in personalized services have not been considered in this adaptation. Accordingly, we present an enhanced restricted Boltzmann machine (RBM)-driven interactive estimation of distribution algorithm (IEDA) with multisource heterogeneous data, which effectively extracts users' preferences and requirements from UGCs to strengthen the performance of IEDA for personalized search. The multisource heterogeneous UGCs, including users' ratings and reviews, items' category tags, social networks, and other available information, are collected and represented to construct an RBM-based model that extracts users' comprehensive preferences. With this RBM, a probability model for the reproduction operator of estimation of distribution algorithms (EDAs) and a surrogate for quantitatively evaluating the fitness of an individual (item) are further developed to enhance the EDA-based personalized search. The UGCs-driven IEDA is applied to various publicly released Amazon datasets, e.g., recommendation of Digital Music, Apps for Android, and Movies and TV, to experimentally demonstrate that it improves IEDA-based personalized search with fewer interactions and higher satisfaction.
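
The reproduction operator of an EDA can be sketched in a few lines: estimate a probability model from the selected individuals, then sample offspring from it. The UMDA-style Bernoulli marginals below are a deliberately simple stand-in of our own; the article instead drives this probability model with an RBM trained on multisource UGCs.

```python
import numpy as np

rng = np.random.default_rng(2)

def eda_reproduce(selected, n_offspring, eps=0.05):
    """Estimate per-variable Bernoulli marginals from the selected
    individuals (clipped away from 0/1 to preserve diversity) and
    sample a new binary population from them."""
    p = selected.mean(axis=0).clip(eps, 1 - eps)
    return (rng.random((n_offspring, selected.shape[1])) < p).astype(int)

# toy selected set: three binary individuals over three items
selected = np.array([[1, 0, 1],
                     [1, 1, 1],
                     [1, 0, 0]])
offspring = eda_reproduce(selected, 100)
```

Replacing the independent marginals with an RBM lets the sampled offspring reflect dependencies among items, which is what allows user preferences extracted from UGCs to shape the search distribution.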

Investigating the Correlation Amongst the Objective and Constraints in Gaussian Process-Assisted Highly Constrained Expensive Optimization

Expensive constrained optimization refers to problems where the calculation of the objective and/or constraint functions is computationally intensive due to the involvement of complex physical experiments or numerical simulations. Such expensive problems can be addressed by Gaussian process-assisted evolutionary algorithms. In many problems, the (single) objective and constraints are correlated to some extent. Unfortunately, existing works based on the Gaussian process for expensive constrained optimization treat the objective and multiple constraints as statistically independent, typically for ease of computation. To fill this gap, this article investigates the correlation among the objective and constraints. To be specific, we model this correlation using a multitask Gaussian process prior and then mathematically derive a constrained expected improvement acquisition function that accounts for it, so that the correlation between the objective and constraints can be captured and leveraged during the optimization process. The performance of the proposed method is examined on a set of benchmark problems and a real-world antenna design problem. On problems with high correlation among the objective and constraints, the experimental results show that leveraging the correlation yields improvements in both optimization speed and constraint-handling ability compared with a method that assumes the objective and constraints are statistically independent.
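
Under the independence assumption that the article sets out to relax, the constrained expected improvement factorizes into the unconstrained EI times the probability of feasibility of each constraint. The sketch below shows that baseline form (the function names are ours; the multitask-GP acquisition derived in the article does not factorize this way).

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def constrained_ei(mu_f, sd_f, best_f, mu_g, sd_g):
    """Baseline constrained EI for minimization with constraints g_i(x) <= 0:
    EI(x) * prod_i P(g_i(x) <= 0), valid only if objective and
    constraints are modeled as statistically independent GPs."""
    z = (best_f - mu_f) / sd_f
    ei = (best_f - mu_f) * Phi(z) + sd_f * phi(z)
    pof = 1.0
    for m, s in zip(mu_g, sd_g):
        pof *= Phi(-m / s)   # probability of feasibility per constraint
    return ei * pof

# point with promising objective and a clearly feasible constraint
v_loose = constrained_ei(0.0, 1.0, 1.0, [-10.0], [1.0])
# same objective prediction, but a borderline constraint halves the value
v_tight = constrained_ei(0.0, 1.0, 1.0, [0.0], [1.0])
```

When objective and constraints are correlated, this product form discards information; the article's multitask formulation keeps the cross-covariance instead.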

An Enhanced Competitive Swarm Optimizer With Strongly Convex Sparse Operator for Large-Scale Multiobjective Optimization

Sparse multiobjective optimization problems (MOPs) have become increasingly important in many applications in recent years, e.g., the search for lightweight deep neural networks and high-dimensional feature selection. However, little attention has been paid to sparse large-scale MOPs, whose Pareto-optimal sets are sparse, i.e., with many decision variables equal to zero. To address this issue, this article proposes an enhanced competitive swarm optimization algorithm assisted by a strongly convex sparse operator (SCSparse). A tricompetition mechanism is introduced into competitive swarm optimization, aiming to strike a better balance between exploration and exploitation. In addition, the SCSparse is embedded in the position update of the particles to generate sparse solutions. Our simulation results show that the proposed algorithm outperforms state-of-the-art methods on both sparse test problems and application examples.
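
To illustrate how a sparsity-inducing operator in the position update produces exact zeros, consider the classical soft-thresholding proximal operator. This is a simplified stand-in: the article derives its SCSparse operator from a strongly convex regularizer, not from the plain l1 norm used here.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrinks every entry toward
    zero by lam and sets entries with |x_i| <= lam exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# a particle position with several near-zero variables
pos = np.array([0.8, -0.03, 0.0, 0.4, -0.9, 0.02])
sparse_pos = soft_threshold(pos, 0.05)
```

Applying such an operator after each velocity/position update is what lets the swarm represent sparse Pareto-optimal solutions exactly, rather than hovering at small nonzero values.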

An Evolutionary Multiobjective Knee-Based Lower Upper Bound Estimation Method for Wind Speed Interval Forecast

Due to the high variability and uncertainty of wind speed, an interval forecast can provide more information for decision makers to achieve better energy management than the traditional point forecast. In this article, a knee-based lower upper bound estimation method (K-LUBE) is proposed to construct wind speed prediction intervals (PIs). First, we analyze an underlying limitation of traditional direct interval forecast methods: their PIs often fail to achieve a good balance between the interval width and the coverage probability. K-LUBE resolves this difficulty through a multiobjective optimization framework in conjunction with a knee selection criterion. Specifically, a PI-NSGA-II multiobjective optimization algorithm is designed to obtain a set of Pareto-optimal solutions. Parameter transfer and sample training strategies are developed to significantly improve the convergence speed of the optimization procedure. The knee selection criterion is then introduced to select the best tradeoff solution among the obtained solutions. In comparison with traditional methods, this method can always provide a reliable PI for decision makers. The procedure is automatic and requires no parameters to be specified in advance, making it more practical to use. The effectiveness of the proposed K-LUBE method is demonstrated through extensive comparisons with four traditional direct interval forecast methods and four classical benchmark models.
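
The two competing objectives behind such interval forecasts are commonly quantified by the PI coverage probability (PICP) and a normalized measure of average interval width (PINAW). A minimal sketch of both metrics, on toy data of our own:

```python
import numpy as np

def picp(y, lower, upper):
    """PI coverage probability: fraction of targets inside their interval."""
    return float(np.mean((y >= lower) & (y <= upper)))

def pinaw(y, lower, upper):
    """PI normalized average width: mean width divided by the target range."""
    return float(np.mean(upper - lower) / (y.max() - y.min()))

# toy wind-speed targets with symmetric +/- 1.0 intervals
y = np.array([4.0, 5.0, 6.0, 7.0])
lo, hi = y - 1.0, y + 1.0
coverage = picp(y, lo, hi)
width = pinaw(y, lo, hi)
```

Widening the intervals raises PICP but also raises PINAW, which is exactly the tradeoff the Pareto front captures and the knee criterion resolves.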