Over the past few decades, a plethora of computational intelligence algorithms designed to solve multiobjective problems have been proposed in the literature. Unfortunately, it has been shown that a large majority of these optimizers experience performance degradation when tasked with solving problems possessing more than three objectives, referred to as many-objective problems (MaOPs). The downfall of these optimizers is that simultaneously maintaining a uniformly-spread set of solutions along with appropriate selection pressure to converge toward the Pareto-optimal front becomes significantly difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs with a large number of decision variables. In this paper, insight is given into the current state of many-objective research by investigating scalability of state-of-the-art algorithms using 3–15 objectives and 30–1000 decision variables. Results indicate that evolutionary optimizers are generally the best performers when the number of decision variables is low, but are outperformed by the swarm intelligence optimizers in several large-scale MaOP instances. However, a recently proposed evolutionary algorithm which combines dominance and subregion-based decomposition is shown to be promising for handling the immense search spaces encountered in large-scale MaOPs.
Studying the search behavior of evolutionary many-objective optimization is an important but challenging issue. Existing studies rely mainly on performance indicators which, however, not only encounter increasing difficulties as the number of objectives grows, but also fail to provide visual information about the evolutionary search. In this paper, we propose a class of scalable test problems, called multiline distance minimization problems (ML-DMPs), which are used to visually examine the behavior of many-objective search. Two key characteristics of the ML-DMP are: 1) its Pareto-optimal solutions lie in a regular polygon in the 2-D decision space and 2) these solutions are similar (in the sense of Euclidean geometry) to their images in the high-dimensional objective space. This allows a straightforward understanding of the distribution of the objective vector set (e.g., its uniformity and coverage over the Pareto front) by observing the solution set in the 2-D decision space. Fifteen well-established algorithms have been investigated on ten instances of three types of ML-DMP. Weaknesses have been revealed across classic multiobjective algorithms (such as Pareto-based, decomposition-based, and indicator-based algorithms) and even state-of-the-art algorithms designed especially for many-objective optimization. This, together with some interesting observations from the experimental studies, suggests that the proposed ML-DMP may also be used as a benchmark function to challenge the search ability of optimization algorithms.
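To make the distance-minimization idea concrete, the sketch below computes an m-objective vector for a 2-D point as its Euclidean distances to the edge lines of a regular m-gon. This is an illustrative simplification, not the paper's exact ML-DMP formulation; the functions `polygon_vertices`, `point_line_distance`, and `mldmp_objectives` are hypothetical names introduced here.

```python
import math

def polygon_vertices(m, radius=1.0):
    """Vertices of a regular m-gon centered at the origin."""
    return [(radius * math.cos(2 * math.pi * k / m),
             radius * math.sin(2 * math.pi * k / m)) for k in range(m)]

def point_line_distance(p, a, b):
    """Euclidean distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def mldmp_objectives(p, m):
    """Illustrative m-objective vector: distances from the 2-D decision
    vector p to each polygon edge line (not the paper's exact problem)."""
    v = polygon_vertices(m)
    return [point_line_distance(p, v[k], v[(k + 1) % m]) for k in range(m)]
```

Because the decision space is 2-D regardless of m, a plotted population directly visualizes how well a search distributes solutions, which is the point of the problem class.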
Divide-and-conquer (DC) is conceptually well suited to high-dimensional optimization problems: the original problem is decomposed into multiple low-dimensional subproblems, which are then tackled separately. Nevertheless, the dimensionality mismatch between the original problem and its subproblems makes it nontrivial to precisely assess the quality of a candidate solution to a subproblem, which has been a major hurdle for applying DC to nonseparable high-dimensional optimization problems. In this paper, we suggest that searching for a good solution to a subproblem can itself be viewed as a computationally expensive problem and can be addressed with the aid of meta-models. As a result, a novel approach, namely self-evaluation evolution (SEE), is proposed. Empirical studies on the CEC2010 large-scale global optimization benchmark show that the advantage of SEE over four representative algorithms increases with problem size. The weaknesses of SEE are also analyzed in the empirical studies.
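The meta-model idea can be illustrated with a toy surrogate: estimate a candidate sub-solution's quality from previously evaluated (solution, fitness) pairs instead of calling the expensive full objective. The distance-weighted k-NN model below is only one possible surrogate and is not SEE's actual meta-model; `knn_surrogate` is a hypothetical name.

```python
def knn_surrogate(archive, candidate, k=3):
    """Toy distance-weighted k-NN surrogate: estimate a candidate's
    fitness from an archive of (solution, fitness) pairs.  Illustrates
    evaluating a subproblem solution without the full objective; SEE's
    actual meta-model may differ."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(archive, key=lambda sf: dist(sf[0], candidate))[:k]
    weights = [1.0 / (dist(s, candidate) + 1e-12) for s, _ in nearest]
    return sum(w * f for w, (_, f) in zip(weights, nearest)) / sum(weights)
```

In a DC setting, each subproblem could maintain its own archive, so a sub-solution is scored in its own low-dimensional space rather than by re-evaluating the full high-dimensional problem.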
Interval many-objective optimization problems (IMaOPs), which involve more than three objectives with at least one subject to interval uncertainty, are ubiquitous in real-world applications. However, there have been very few effective methods for solving these problems. In this paper, we propose a set-based genetic algorithm to solve them effectively. The original optimization problem is first transformed into a deterministic bi-objective problem whose new objectives are hypervolume and imprecision. A set-based Pareto dominance relation is then defined to modify the fast nondominated sorting approach of NSGA-II. Additionally, set-based evolutionary schemes are suggested. Finally, our method is empirically evaluated on 39 benchmark IMaOPs as well as a car cab design problem, and compared with two typical methods. The numerical results demonstrate the superiority of our method and indicate that it can produce an approximate front trading off convergence against uncertainty.
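For reference, the unmodified building block the abstract mentions can be sketched as plain Pareto dominance plus NSGA-II-style fast nondominated sorting (the paper replaces the dominance relation with a set-based one, which is not reproduced here):

```python
def dominates(a, b):
    """Standard Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Plain fast nondominated sorting (NSGA-II style); returns fronts
    as lists of indices into `points`."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices that i dominates
    count = [0] * n                         # how many points dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts = [[i for i in range(n) if count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```

The set-based variant compares whole solution sets rather than single objective vectors, but plugs into the same sorting skeleton.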
Virtual machine placement (VMP) and energy efficiency are significant topics in cloud computing research. In this paper, evolutionary computing is applied to VMP to minimize the number of active physical servers, so as to schedule underutilized servers to save energy. Inspired by the promising performance of the ant colony system (ACS) algorithm on combinatorial problems, an ACS-based approach is developed to achieve the VMP goal. Coupled with order exchange and migration (OEM) local search techniques, the resultant algorithm is termed OEMACS. It effectively minimizes the number of active servers used for the assignment of virtual machines (VMs) from a global optimization perspective through a novel pheromone deposition strategy that guides the artificial ants toward promising solutions grouping candidate VMs together. OEMACS is applied to a variety of VMP problems with differing VM sizes in cloud environments of homogeneous and heterogeneous servers. The results show that OEMACS generally outperforms conventional heuristics and other evolutionary approaches, especially on VMP instances with bottleneck resource characteristics, and offers significant energy savings and more efficient use of different resources.
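To see what "minimize the number of active servers" means as an optimization target, the sketch below implements a first-fit-decreasing heuristic for a single-resource simplification of VMP, i.e., the kind of conventional heuristic baseline that OEMACS is compared against (this is not the OEMACS algorithm; `first_fit_decreasing` is a name introduced here).

```python
def first_fit_decreasing(vm_demands, capacity):
    """First-fit-decreasing baseline for VM placement with a single
    resource dimension: place each VM (sorted by decreasing demand) on
    the first active server with room, opening a new server when none
    fits.  Returns (number of active servers, vm -> server mapping)."""
    servers = []      # remaining capacity of each active server
    placement = {}    # vm index -> server index
    for vm, demand in sorted(enumerate(vm_demands), key=lambda t: -t[1]):
        for s, free in enumerate(servers):
            if demand <= free:
                servers[s] -= demand
                placement[vm] = s
                break
        else:
            servers.append(capacity - demand)  # open a new server
            placement[vm] = len(servers) - 1
    return len(servers), placement
```

Real VMP is multidimensional (CPU, memory, bandwidth), which is where such greedy heuristics struggle on bottleneck-resource instances and a global search such as ACS can pay off.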
Particle Swarm Optimization With a Balanceable Fitness Estimation for Many-Objective Optimization Problems
Recently, it was found that most multiobjective particle swarm optimizers (MOPSOs) perform poorly when tackling many-objective optimization problems (MaOPs). This is mainly because of the loss of selection pressure that occurs when updating the swarm: the number of nondominated individuals increases substantially, and the diversity maintenance mechanisms in MOPSOs always guide the particles toward sparse regions of the search space. This behavior results in final solutions that are loosely distributed in objective space but far from the true Pareto-optimal front. To avoid this scenario, this paper presents a balanceable fitness estimation method and a novel velocity update equation, which are combined into a novel MOPSO (NMPSO) shown to be more effective at tackling MaOPs. Moreover, an evolutionary search is further run on the external archive in order to provide another search pattern for evolution. The DTLZ and WFG test suites with 4–10 objectives are used to assess the performance of NMPSO. Our experiments indicate that NMPSO outperforms four current MOPSOs, as well as four competitive multiobjective evolutionary algorithms (SPEA2-SDE, NSGA-III, MOEA/DD, and SRA), on most of the adopted test problems.
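For context, the velocity update that NMPSO modifies can be sketched in its canonical MOPSO form, where the social leader is drawn from an external archive. NMPSO's actual equation adds further terms and its balanceable fitness estimation governs leader selection; neither is reproduced here, and `pso_velocity_update` is a name introduced for illustration.

```python
import random

def pso_velocity_update(v, x, pbest, leader, w=0.4, c1=2.0, c2=2.0):
    """Canonical per-dimension PSO velocity update: inertia term plus
    cognitive pull toward the personal best and social pull toward a
    leader taken from an external archive (as in many MOPSOs)."""
    return [w * vi
            + c1 * random.random() * (pb - xi)
            + c2 * random.random() * (ld - xi)
            for vi, xi, pb, ld in zip(v, x, pbest, leader)]
```

Under many objectives, almost every archive member is nondominated, so which leader this update pulls toward is exactly where the loss of selection pressure described above bites.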
On the Performance Degradation of Dominance-Based Evolutionary Algorithms in Many-Objective Optimization
In the last decade, it has become apparent that the performance of Pareto-dominance-based evolutionary multiobjective optimization algorithms degrades as the number of objective functions of the problem, given by