An Estimation of Distribution Algorithm Based on Variational Bayesian for Point-Set Registration

Point-set registration is widely used in computer vision and pattern recognition. However, it remains a challenging problem because current registration algorithms struggle with the complexities of point-set distributions. To address this, we propose a robust registration algorithm based on the estimation of distribution algorithm (EDA), which handles complex distributions through a global search mechanism. We propose an EDA probability model based on the asymmetric generalized Gaussian mixture model, which describes the solution space as comprehensively as possible and models complex point distributions, especially those with missing points and outliers. We further propose a transformation strategy and a Gaussian evolution strategy within the selection mechanism of the EDA to handle the deformation, rotation, and denoising of the selected dominant individuals. Given the complexity of the model, we optimize it via variational Bayesian inference and introduce a prior probability distribution through local variation to strengthen the convergence of the algorithm on complex point sets. In addition, a local search mechanism based on simulated annealing is added to realize coarse-to-fine registration. Experimental results show that our method achieves the best robustness compared with state-of-the-art registration algorithms.
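For readers unfamiliar with the EDA machinery this work builds on, the basic loop — sample a population from a probability model, select the dominant individuals, re-estimate the model from them — can be sketched in a few lines. This is a minimal sketch using a plain axis-aligned Gaussian model rather than the paper's asymmetric generalized Gaussian mixture, and the names (`eda_minimize`, `elite_frac`) are illustrative, not from the paper.

```python
import numpy as np

def eda_minimize(f, dim, pop_size=100, elite_frac=0.3, iters=50, seed=0):
    """Minimal continuous EDA: sample, select elites, refit a Gaussian."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        fitness = np.apply_along_axis(f, 1, pop)
        # Select the dominant individuals (lowest objective values).
        elite = pop[np.argsort(fitness)[: int(elite_frac * pop_size)]]
        # Re-estimate the probability model from the elites.
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy usage: minimize the sphere function in 3 dimensions.
best = eda_minimize(lambda x: np.sum(x**2), dim=3)
```

The paper replaces the single Gaussian with a mixture tailored to missing points and outliers, and wraps the loop with variational Bayesian inference and simulated-annealing local search.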

Distributed Co-Evolutionary Memetic Algorithm for Distributed Hybrid Differentiation Flowshop Scheduling Problem

This article deals with a practical distributed hybrid differentiation flowshop scheduling problem (DHDFSP) for the first time, in which manufacturing a product to minimize the makespan criterion goes through three consecutive stages: 1) job fabrication in first-stage distributed flowshop factories; 2) job-to-product assembly, based on a specified assembly plan, on a second-stage single machine; and 3) product differentiation according to customization on one of the third-stage dedicated machines. Given the multistage character and diversified processing technologies of the problem, a new and powerful evolutionary algorithm (EA) for DHDFSP is called for. To this end, we propose a general EA framework called the distributed co-evolutionary memetic algorithm (DCMA). It comprises four basic modules: 1) dual population (POP)-based global exploration; 2) elite archive (EAR)-oriented local exploitation; 3) elite knowledge transfer (EKT) among POPs and the EAR; and 4) adaptive POP restart. EKT is a general model for information fusion among search agents owing to its problem independence. During execution, the four modules cooperate with one another and the search agents co-evolve in a distributed way. The DCMA evolutionary framework thus offers guidance for algorithm construction on different optimization problems. Furthermore, we design each module based on problem knowledge and, following the DCMA framework, propose a specific DCMA metaheuristic for DHDFSP. Computational experiments validate the effectiveness of the DCMA evolutionary framework and its special designs, and show that the proposed DCMA metaheuristic outperforms the compared algorithms.
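The interplay of the four modules can be illustrated with a framework-level loop skeleton. This is a generic stand-in under stated assumptions: the paper's operators are scheduling-specific heuristics, whereas this sketch takes user-supplied `init`/`perturb`/`evaluate` callables, omits the adaptive restart module, and all names are hypothetical.

```python
import random

def dcma(evaluate, init, perturb, gens=100, pop_size=20, seed=0):
    """Skeleton of the DCMA loop: dual populations explore globally, an
    elite archive is exploited locally, and elites are transferred back."""
    rng = random.Random(seed)
    pops = [[init(rng) for _ in range(pop_size)] for _ in range(2)]  # dual POPs
    archive = []                                                     # elite archive (EAR)
    for _ in range(gens):
        for pop in pops:                                   # 1) global exploration
            offspring = [perturb(s, rng) for s in pop]
            pop[:] = sorted(pop + offspring, key=evaluate)[:pop_size]
        archive = sorted(archive + [pop[0] for pop in pops],
                         key=evaluate)[:5]
        archive = [min([e] + [perturb(e, rng) for _ in range(3)],
                       key=evaluate) for e in archive]     # 2) local exploitation
        for pop in pops:                                   # 3) elite knowledge transfer
            pop[-1] = archive[0]
        # 4) adaptive POP restart omitted in this sketch
    return archive[0]

# Toy usage: minimize (x - 3)^2 over the reals.
best = dcma(evaluate=lambda x: (x - 3.0) ** 2,
            init=lambda r: r.uniform(-10.0, 10.0),
            perturb=lambda x, r: x + r.gauss(0.0, 0.5))
```

For DHDFSP, `perturb` would be replaced by problem-knowledge-based neighborhood moves over job/factory assignments rather than the numeric perturbation used here.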

Evolutionary Search With Multiview Prediction for Dynamic Multiobjective Optimization

The dynamic multiobjective optimization problem (DMOP) denotes a multiobjective optimization problem that varies over time. Since changes in a DMOP may exhibit predictable patterns, a number of research efforts have been made to develop evolutionary search with prediction approaches that estimate how the problem changes. A common practice of existing prediction approaches is to predict the change of the Pareto-optimal solutions (POS) based on the historical solutions obtained in the decision space. However, the change of a DMOP may occur in both the decision and objective spaces, so prediction in the decision space alone may not give a proper estimate of the problem change. Taking this cue, in this article, we propose an evolutionary search with multiview prediction for solving DMOPs. In contrast to existing prediction methods, the proposed approach conducts prediction from the views of both the decision and objective spaces. To estimate dynamic changes in a DMOP, a kernelized autoencoding model is derived to perform the multiview prediction in a reproducing kernel Hilbert space (RKHS), which admits a closed-form solution. To examine the performance of the proposed method, comprehensive empirical studies on commonly used DMOP benchmarks, as well as a real-world case study on the movie recommendation problem, are presented. The experimental results verify the efficacy of the proposed method on both benchmark and real-world DMOPs.
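The prediction idea can be illustrated with its simplest decision-space-only variant: linearly extrapolate the POS center from its history and seed the next population around the predicted point after a change. The paper's actual predictor is a kernelized autoencoder operating on both decision and objective spaces; this sketch and its names (`predict_next_center`, `reseed_population`) are illustrative only.

```python
import numpy as np

def predict_next_center(history):
    """Linear extrapolation of the POS center: c_{t+1} = 2*c_t - c_{t-1}."""
    return 2.0 * history[-1] - history[-2]

def reseed_population(history, pop_size, noise=0.1, seed=0):
    """After a detected change, seed the population around the prediction."""
    rng = np.random.default_rng(seed)
    center = predict_next_center(history)
    return center + rng.normal(0.0, noise, size=(pop_size, center.size))

# Toy usage: POS centers drifting along a line in a 2-D decision space.
centers = [np.array([0.0, 0.0]), np.array([0.1, 0.2]), np.array([0.2, 0.4])]
pop = reseed_population(centers, pop_size=20)
```

The multiview approach generalizes this by also tracking the motion of the Pareto front in objective space, which a decision-space extrapolation like the one above cannot capture.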

Memristor Parallel Computing for a Matrix-Friendly Genetic Algorithm

Matrix operations are easy to parallelize in hardware, and a memristor network can realize a parallel matrix computing model with in-memory computing. This article proposes a matrix-friendly genetic algorithm (MGA), in which the population is represented by a matrix and the evolution of the population is realized by matrix operations. Compared with a baseline genetic algorithm (GA) on finding the maximum value of a binary function, the MGA converges better and faster. In addition, the MGA is more efficient because of the parallelism of its matrix operations: it runs 2.5 times faster than the baseline GA when using the NumPy library. Given the advantages of the memristor in matrix operations, memristor circuits are designed for the deployment of the MGA. This deployment realizes both the parallelization and the in-memory computing (the memristor serves as both memory and computing unit) of the MGA. To verify the effectiveness of this deployment, a feature selection experiment with logistic regression (LR) on the Sonar dataset is completed. LR with MGA-based feature selection uses 46 fewer features and achieves 11.9% higher accuracy.
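The population-as-matrix idea can be sketched in NumPy: selection, uniform crossover, and mutation all become whole-matrix operations with no per-individual loops, which is what makes the algorithm amenable to parallel (and memristor) hardware. The rates, the OneMax toy task, and the function names here are illustrative assumptions, not the paper's benchmark or exact operators.

```python
import numpy as np

def mga_step(pop, fitness_fn, rng, mut_rate=0.02):
    """One generation of a matrix-friendly GA on a binary population matrix."""
    fit = fitness_fn(pop)                        # vectorized fitness, shape (n,)
    probs = fit / fit.sum()                      # fitness-proportional selection
    n, length = pop.shape
    parents_a = pop[rng.choice(n, size=n, p=probs)]
    parents_b = pop[rng.choice(n, size=n, p=probs)]
    mask = rng.random((n, length)) < 0.5         # uniform crossover as a mask matrix
    children = np.where(mask, parents_a, parents_b)
    mutate = rng.random((n, length)) < mut_rate  # mutation as element-wise XOR
    return children ^ mutate

# Toy usage: maximize the number of 1-bits (OneMax) over 16-bit strings.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(50, 16))
ones_count = lambda p: p.sum(axis=1) + 1         # +1 keeps selection probs valid
for _ in range(60):
    pop = mga_step(pop, ones_count, rng)
```

Every operation above is a matrix product, mask, or element-wise op over the whole population, so the entire generation maps onto bulk array hardware.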

IEEE Computational Intelligence Society Information

Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.

A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization—Part I

Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, and combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird's-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part I of the series covers two major algorithmic approaches to large-scale global optimization: 1) problem decomposition and 2) memetic algorithms. Part II covers a range of other algorithmic approaches, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research and identifies several potential areas for future research.

A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization—Part II

This article is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely, decomposition methods and hybridization methods such as memetic algorithms and local search. In this part, we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multiobjective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The article also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.

Multisource Heterogeneous User-Generated Contents-Driven Interactive Estimation of Distribution Algorithms for Personalized Search

Personalized search is essentially a complex qualitative optimization problem, and interactive evolutionary algorithms (EAs) have been extended from standard EAs to solve it. However, the multisource user-generated contents (UGCs) available in personalized services have not been exploited in these adaptations. Accordingly, we present an enhanced restricted Boltzmann machine (RBM)-driven interactive estimation of distribution algorithm (IEDA) with multisource heterogeneous data, with the aim of effectively extracting users' preferences and requirements from UGCs to strengthen the performance of the IEDA for personalized search. The multisource heterogeneous UGCs, including users' ratings and reviews, items' category tags, social networks, and other available information, are collected and represented to construct an RBM-based model that extracts users' comprehensive preferences. With this RBM, a probability model for conducting the reproduction operator of the estimation of distribution algorithm (EDA) and a surrogate for quantitatively evaluating an individual's (item's) fitness are further developed to enhance the EDA-based personalized search. The UGC-driven IEDA is applied to various publicly released Amazon datasets, e.g., recommendation on Digital Music, Apps for Android, and Movies and TV, and the experiments demonstrate that it efficiently improves the IEDA in personalized search with fewer interactions and higher satisfaction.

Investigating the Correlation Amongst the Objective and Constraints in Gaussian Process-Assisted Highly Constrained Expensive Optimization

Expensive constrained optimization refers to problems where the calculation of the objective and/or constraint functions is computationally intensive due to the involvement of complex physical experiments or numerical simulations. Such expensive problems can be addressed by Gaussian process-assisted evolutionary algorithms. In many problems, the (single) objective and the constraints are correlated to some extent. Unfortunately, existing Gaussian process-based works for expensive constrained optimization treat the objective and the multiple constraints as statistically independent, typically for ease of computation. To fill this gap, this article investigates the correlation among the objective and constraints. Specifically, we model this correlation using a multitask Gaussian process prior and then mathematically derive a constrained expected improvement acquisition function that accounts for the correlation among the objective and constraints, so that the correlation can be captured and leveraged during the optimization process. The performance of the proposed method is examined on a set of benchmark problems and a real-world antenna design problem. On problems with high correlation among the objective and constraints, the experimental results show that leveraging the correlation improves both the optimization speed and the constraint-handling ability compared with a method that assumes the objective and constraints are statistically independent.
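The independence-based baseline this work improves on — expected improvement multiplied by the probability of feasibility of each constraint — can be written compactly given the GP posterior means and standard deviations. The helper names and the numeric toy values below are illustrative; the paper's contribution, the correlated acquisition function derived from a multitask GP prior, is not reproduced in this sketch.

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def constrained_ei(mu_f, sigma_f, best_feasible, mu_g, sigma_g):
    """Independence-based constrained EI: EI(x) * prod_j P(g_j(x) <= 0),
    given GP posterior means/std-devs for objective f and constraints g_j
    (minimization convention, constraints feasible when g_j <= 0)."""
    z = (best_feasible - mu_f) / sigma_f
    ei = sigma_f * (z * norm_cdf(z) + norm_pdf(z))   # expected improvement
    pof = 1.0
    for m, s in zip(mu_g, sigma_g):                  # probability of feasibility
        pof *= norm_cdf((0.0 - m) / s)
    return ei * pof

# Toy usage: one constraint predicted feasible with high confidence.
score = constrained_ei(mu_f=1.0, sigma_f=0.5, best_feasible=1.2,
                       mu_g=[-0.5], sigma_g=[0.25])
```

The product form is exactly where the independence assumption enters: the joint probability of feasibility factorizes, which the multitask-GP derivation in the paper avoids.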