A Novel Training Protocol for Performance Predictors of Evolutionary Neural Architecture Search Algorithms

Evolutionary neural architecture search (ENAS) can automatically design the architectures of deep neural networks (DNNs) using evolutionary computation algorithms. However, most ENAS algorithms require intensive computational resources, which are not necessarily available to interested users. Performance predictors are regression models that can assist in accomplishing the search without consuming much computational resource. Although various performance predictors have been designed, they all employ the same training protocol to build the regression model: 1) sampling a set of DNNs with their performance as the training dataset; 2) training the model with the mean squared error criterion; and 3) predicting the performance of DNNs newly generated during the ENAS. In this article, we point out, through intuitive and illustrative examples, that the three steps constituting this training protocol are not well designed. Furthermore, we propose a new training protocol to address these issues, which consists of designing a pairwise ranking indicator to construct the training target, using logistic regression to fit the training samples, and developing a differential method to build the training instances. To verify the effectiveness of the proposed training protocol, four regression models widely used in machine learning are chosen to perform comparisons on two benchmark datasets. The experimental results of all the comparisons demonstrate that the proposed training protocol significantly improves prediction accuracy compared with the traditional training protocol.
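
A minimal sketch of how such a pairwise-ranking protocol could look in practice, assuming each architecture is already encoded as a feature vector; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_pairwise_instances(features, accuracies):
    """Differential method: each training instance is the feature difference of an
    architecture pair; the target is a binary ranking indicator (which one is better)."""
    X, y = [], []
    n = len(features)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            X.append(features[i] - features[j])
            y.append(1 if accuracies[i] > accuracies[j] else 0)
    return np.asarray(X), np.asarray(y)

# Toy data: 20 sampled architectures with hypothetical 8-dimensional encodings.
rng = np.random.default_rng(0)
features = rng.random((20, 8))
accuracies = rng.random(20)

X, y = build_pairwise_instances(features, accuracies)
ranker = LogisticRegression(max_iter=1000).fit(X, y)

# Two newly generated architectures are compared through the same difference encoding.
a, b = rng.random(8), rng.random(8)
p_a_better = ranker.predict_proba((a - b).reshape(1, -1))[0, 1]
```

The predictor is only asked which architecture ranks higher, which is all the selection step of an ENAS algorithm actually needs.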

Correlation Coefficient-Based Recombinative Guidance for Genetic Programming Hyperheuristics in Dynamic Flexible Job Shop Scheduling

Dynamic flexible job shop scheduling (JSS) is a challenging combinatorial optimization problem due to its complex environment. In this problem, machine assignment and operation sequencing decisions need to be made simultaneously under dynamic environments. Genetic programming (GP), as a hyperheuristic approach, has been successfully used to evolve scheduling heuristics for dynamic flexible JSS. However, in traditional GP, recombination between parents may disrupt beneficial building blocks because the crossover points are chosen randomly. This article proposes a recombinative mechanism that guides GP toward effective and adaptive recombination of parents when producing offspring. Specifically, we define a novel measure of the importance of each subtree of an individual, and this importance information is used to decide the crossover points. The proposed recombinative guidance mechanism attempts to improve the quality of offspring by preserving the promising building blocks of one parent and incorporating good building blocks from the other. The proposed algorithm is examined on six scenarios with different configurations. The results show that it significantly outperforms state-of-the-art algorithms on most tested scenarios, in terms of both final test performance and convergence speed. In addition, the rules obtained by the proposed algorithm have good interpretability.
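
A toy sketch of the general idea (not the paper's exact procedure): score each subtree by the absolute correlation between its output and the whole tree's output over sampled situations, then replace a low-importance subtree of one parent with a high-importance subtree of the other. Trees, operators, and samples below are stand-ins for real scheduling terminals.

```python
import numpy as np

def evaluate(tree, x):
    """Evaluate a tree given as nested tuples, e.g. ('+', ('x',), ('*', ('x',), ('x',)))."""
    op = tree[0]
    if op == 'x':
        return x
    a, b = evaluate(tree[1], x), evaluate(tree[2], x)
    return a + b if op == '+' else a * b

def subtrees(tree, path=()):
    yield path, tree
    if tree[0] != 'x':
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def importance(tree, samples):
    """Correlation-based importance of every subtree against the root's output."""
    root_out = np.array([evaluate(tree, x) for x in samples])
    scores = {}
    for path, sub in subtrees(tree):
        sub_out = np.array([evaluate(sub, x) for x in samples])
        if np.std(sub_out) == 0 or np.std(root_out) == 0:
            scores[path] = 0.0
        else:
            scores[path] = abs(np.corrcoef(sub_out, root_out)[0, 1])
    return scores

def replace(tree, path, new_sub):
    if not path:
        return new_sub
    children = list(tree)
    children[path[0]] = replace(tree[path[0]], path[1:], new_sub)
    return tuple(children)

def guided_crossover(p1, p2, samples):
    s1, s2 = importance(p1, samples), importance(p2, samples)
    weak = min((p for p in s1 if p), key=s1.get)     # least important subtree of p1
    strong = max((p for p in s2 if p), key=s2.get)   # most important subtree of p2
    donor = p2
    for step in strong:
        donor = donor[step]
    return replace(p1, weak, donor)

samples = np.linspace(-1, 1, 20)
p1 = ('+', ('x',), ('*', ('x',), ('x',)))
p2 = ('*', ('+', ('x',), ('x',)), ('x',))
child = guided_crossover(p1, p2, samples)
```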

Enhanced Constraint Handling for Reliability-Constrained Multiobjective Testing Resource Allocation

The multiobjective testing resource allocation problem (MOTRAP) concerns how to efficiently allocate finite testing time to various modules so as to optimize system reliability, testing cost, and testing time simultaneously. A common approach to this problem is to use multiobjective evolutionary algorithms (MOEAs) to seek a set of tradeoff solutions among the three objectives. However, such a tradeoff set may contain a substantial proportion of solutions with a very low reliability level, which consume considerable computational resources but may be of little value to the software project manager. In this article, a MOTRAP model with a prespecified reliability is first proposed. Then, new lower bounds on the testing time invested in different modules are theoretically deduced from the necessary condition for achieving the given reliability, and an exact algorithm for determining these bounds is presented. Moreover, several enhanced constraint-handling techniques (ECHTs) derived from the new bounds are developed to be combined with MOEAs to correct and reduce constraint violation. Finally, the proposed ECHTs are evaluated against various state-of-the-art constraint-handling approaches. The comparative results demonstrate that the proposed ECHTs work well with MOEAs, focus the search on the feasible region of the prespecified reliability, and provide the software project manager with better and more diverse, satisfactory choices in test planning.
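
A minimal sketch of the repair idea behind such an enhanced constraint-handling technique, assuming module-wise lower bounds on testing time have already been derived from the required reliability level (the paper's exact bound derivation and MOEA integration are not reproduced here, and the numbers are hypothetical):

```python
import numpy as np

def repair(allocation, lower_bounds, total_budget):
    """Lift each module's testing time up to its reliability-induced lower bound,
    then shrink only the slack above the bounds if the budget is exceeded.
    Assumes lower_bounds.sum() <= total_budget, i.e. the bounds are jointly feasible."""
    x = np.maximum(allocation, lower_bounds)
    excess = x.sum() - total_budget
    if excess > 0:
        slack = x - lower_bounds
        if slack.sum() > 0:
            x -= excess * slack / slack.sum()
    return x

def violation(allocation, lower_bounds):
    """Constraint violation: total shortfall below the lower bounds."""
    return float(np.maximum(lower_bounds - allocation, 0.0).sum())

lb = np.array([2.0, 1.5, 3.0])       # hypothetical per-module lower bounds
budget = 10.0
candidate = np.array([1.0, 4.0, 2.5])
print(repair(candidate, lb, budget), violation(candidate, lb))
```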

Multisource Neighborhood Immune Detector Adaptive Model for Anomaly Detection

The artificial immune system (AIS) is one of the important branches of artificial intelligence and is widely used in many fields. The detector set is its core knowledge set, and the effectiveness of AIS applications is mainly determined by the generation, evolution, and detection abilities of the detectors. At present, the problem space (shape-space) of AIS mainly uses real-valued representation. However, real-valued detectors suffer from problems that have not been well solved, such as slow detector generation, holes in the nonself region, detector overlap and redundancy, and the curse of dimensionality, all of which lead to unsatisfactory detection results. Moreover, artificial immune anomaly detection is a dynamic adaptive model that needs to evolve with the detection environment; without better adaptive modeling, these problems become worse. In view of this, this article proposes a multisource immune detector adaptive model in neighborhood shape-space and applies it to anomaly detection. Based on random, chaotic-map, and DNA genetic algorithm (DNA-GA) candidate sources, a multisource neighborhood negative selection algorithm (MSNNSA), a multisource neighborhood immune detector generation algorithm (MS-NIDGA), and a neighborhood immune anomaly detection algorithm (NIADA) are proposed to make detector generation and detection more efficient. By introducing immune adaptation and feedback mechanisms, a multisource neighborhood immune detector adaptive model (MS-NIDAM) is built, so that the detectors can evolve adaptively in a more targeted search domain and maintain a good distribution over the nonself region in real time, thereby addressing the aforementioned problems of the real-valued shape-space under dynamic environments and improving overall detection performance. The experimental results show that MS-NIDAM improves detector generation/evolution efficiency and keeps an up-to-date understanding of the changing environment, thereby achieving better overall detection performance and stability than the comparative methods.
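
For orientation, a minimal sketch of real-valued negative selection, the baseline that the multisource neighborhood model builds on (the paper's chaotic-map/DNA-GA candidate sources and neighborhood matching rule are not reproduced here; radii and data are illustrative):

```python
import numpy as np

def generate_detectors(self_set, n_detectors, self_radius, dim, rng):
    """Keep random candidates that do not match any self sample (negative selection)."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(dim)
        dists = np.linalg.norm(self_set - candidate, axis=1)
        if np.all(dists > self_radius):          # candidate lies in the nonself region
            detectors.append(candidate)
    return np.array(detectors)

def detect(sample, detectors, detector_radius):
    """A sample is flagged as anomalous if it falls inside any detector's hypersphere."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < detector_radius))

rng = np.random.default_rng(1)
self_set = rng.normal(0.5, 0.05, size=(100, 2))          # normal-behaviour region
detectors = generate_detectors(self_set, 50, 0.15, 2, rng)
print(detect(np.array([0.9, 0.1]), detectors, 0.1))       # likely anomalous
```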

Learnable Evolutionary Search Across Heterogeneous Problems via Kernelized Autoencoding

The design of evolutionary algorithms with the capability to learn from past search experiences has attracted growing research interest in recent years. It has been demonstrated that the knowledge embedded in past search experience can greatly speed up the evolutionary process if properly harnessed. Autoencoding evolutionary search (AEES) is a recently proposed search paradigm that employs a single-layer denoising autoencoder to build a mapping between two problems by configuring the solutions of each problem as the input and output of the autoencoder, respectively. The learned mapping makes it possible to perform knowledge transfer across heterogeneous problem domains with diverse properties, and AEES has shown promising performance in learning and transferring knowledge from past search experiences to facilitate evolutionary search on a variety of optimization problems. However, despite the success enjoyed by AEES, its linear autoencoding model cannot capture the nonlinear relationship between the solution sets used in the mapping construction. Taking this cue, in this article, we devise a kernelized autoencoder that constructs the mapping in a reproducing kernel Hilbert space (RKHS), where the nonlinearity among problem solutions can be captured easily. Importantly, the proposed kernelized autoencoding method admits a closed-form solution and thus does not add much computational burden to the evolutionary search. Furthermore, a kernelized autoencoding evolutionary search (KAES) paradigm is proposed that adaptively selects between linear and kernelized autoencoding along the search process in pursuit of effective knowledge transfer across problem domains. To validate the efficacy of the proposed KAES, comprehensive empirical studies on both benchmark multiobjective optimization problems and a real-world vehicle crashworthiness design problem are presented.
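
A minimal sketch of a kernelized mapping between two solution sets with a ridge-style closed-form solution; this is a generic kernel regression mapping used to illustrate the idea and is not claimed to be the paper's exact formulation (kernel choice, gamma, and lam are assumptions):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_kernel_mapping(source, target, gamma=1.0, lam=1e-3):
    """Closed-form coefficients mapping source solutions to target solutions in RKHS."""
    K = rbf_kernel(source, source, gamma)
    return np.linalg.solve(K + lam * np.eye(len(source)), target)

def transfer(solutions, source, coef, gamma=1.0):
    """Map new source-domain solutions into the target domain for knowledge transfer."""
    return rbf_kernel(solutions, source, gamma) @ coef

rng = np.random.default_rng(0)
source = rng.random((30, 10))      # paired solutions sampled from the source problem
target = rng.random((30, 5))       # paired solutions sampled from the target problem
coef = fit_kernel_mapping(source, target)
seeds = transfer(rng.random((5, 10)), source, coef)   # candidate seeds for the target search
```

Because the fit reduces to one linear solve over the kernel Gram matrix, the mapping can be rebuilt cheaply whenever the transfer is triggered during the search.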

Few-Shots Parallel Algorithm Portfolio Construction via Co-Evolution

Generalization, i.e., the ability to solve problem instances that are not available during the system design and development phase, is a critical goal for intelligent systems. A typical way to achieve good generalization is to learn a model from vast data. In the context of heuristic search, such a paradigm can be implemented as configuring the parameters of a parallel algorithm portfolio (PAP) based on a set of “training” problem instances, which is often referred to as PAP construction. However, compared to traditional machine learning, PAP construction often suffers from a lack of training instances, and the obtained PAPs may fail to generalize well. This article proposes a novel competitive co-evolution scheme, named co-evolution of parameterized search (CEPS), as a remedy to this challenge. By co-evolving a configuration population and an instance population, CEPS is capable of obtaining generalizable PAPs with few training instances. The advantage of CEPS in improving generalization is shown analytically. Two concrete algorithms, namely, CEPS-TSP and CEPS-VRPSPDTW, are presented for the traveling salesman problem (TSP) and the vehicle routing problem with simultaneous pickup–delivery and time windows (VRPSPDTW), respectively. The experimental results show that CEPS leads to better generalization and even finds new best-known solutions for some instances.
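
A toy-scale sketch of the competitive co-evolution loop that such a scheme implies: configurations are rewarded for solving the current instances well, while instances are rewarded for being hard for the current best configuration. Configurations, instances, and the performance function below are numeric stand-ins, not the paper's TSP/VRPSPDTW components.

```python
import random

def performance(config, instance):
    # Toy score: a configuration solves an instance well if it is "close" to it.
    return -abs(config - instance)

def co_evolve(generations=50, pop=10, rng=random.Random(0)):
    configs = [rng.uniform(0, 1) for _ in range(pop)]
    instances = [rng.uniform(0, 1) for _ in range(pop)]
    for _ in range(generations):
        # Rank configurations by worst-case score over the instance set,
        # which rewards portfolios that generalize rather than specialize.
        configs.sort(key=lambda c: min(performance(c, i) for i in instances), reverse=True)
        configs = configs[: pop // 2] + [c + rng.gauss(0, 0.05) for c in configs[: pop // 2]]

        # Rank instances by how badly the best configuration handles them,
        # so the training set drifts toward cases the portfolio cannot yet solve.
        best = configs[0]
        instances.sort(key=lambda i: performance(best, i))
        instances = instances[: pop // 2] + [i + rng.gauss(0, 0.05) for i in instances[: pop // 2]]
    return configs, instances

configs, instances = co_evolve()
```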

Table of contents

Presents the table of contents for this issue of this publication.

MMES: Mixture Model-Based Evolution Strategy for Large-Scale Optimization

This work provides an efficient sampling method for the covariance matrix adaptation evolution strategy (CMA-ES) in large-scale settings. In contrast to the Gaussian sampling in CMA-ES, the proposed method generates mutation vectors from a mixture model, which facilitates exploiting the rich variable correlations of the problem landscape within a limited time budget. We analyze the probability distribution of this mixture model and show that it approximates the Gaussian distribution of CMA-ES with controllable accuracy. We use this sampling method, coupled with a novel method for mutation strength adaptation, to formulate the mixture model-based evolution strategy (MMES), a CMA-ES variant for large-scale optimization. The numerical simulations show that, while significantly reducing the time complexity of CMA-ES, MMES preserves rotational invariance, is scalable to high-dimensional problems, and is competitive with the state of the art in global optimization.
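
A minimal sketch of the mixture-sampling idea: instead of factorizing a full covariance matrix, each mutation vector is assembled from a few randomly chosen stored search directions plus isotropic Gaussian noise. The mixing constant gamma, the number of mixed components, and the direction archive below are assumptions for illustration; MMES's exact update rules are not reproduced.

```python
import numpy as np

def sample_mutation(directions, n_mix, gamma, rng):
    """Draw one mutation vector in O(n_mix * dim) time from a direction archive."""
    dim = directions.shape[1]
    idx = rng.integers(0, len(directions), size=n_mix)       # random mixture components
    weights = rng.standard_normal(n_mix)
    low_rank = weights @ directions[idx] / np.sqrt(n_mix)    # correlated part of the step
    isotropic = rng.standard_normal(dim)                     # keeps full-rank exploration
    return np.sqrt(gamma) * low_rank + np.sqrt(1.0 - gamma) * isotropic

rng = np.random.default_rng(0)
dim, archive_size = 1000, 20
directions = rng.standard_normal((archive_size, dim))        # e.g., past search directions
step = sample_mutation(directions, n_mix=4, gamma=0.5, rng=rng)
```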