Minimax mean-variance models for fuzzy portfolio selection

Abstract  

This paper discusses the fuzzy portfolio selection problem in the situation where each security return belongs to a certain class
of fuzzy variables but the exact fuzzy variable cannot be given. Two credibility-based minimax mean-variance models are proposed.
The crisp equivalents of the models, in the form of linear programs, are given for three special cases. In addition, a general solution
algorithm is also provided. To help understand the modeling idea and to illustrate the effectiveness of the proposed algorithm,
an example is presented.

  • Content Type Journal Article
  • Pages 251-260
  • DOI 10.1007/s00500-010-0654-3
  • Authors
    • Xiaoxia Huang, School of Economics and Management, University of Science and Technology Beijing, Beijing, 100083 China

Memetic algorithms based on local search chains for large scale continuous optimisation problems: MA-SSW-Chains

Abstract  

Nowadays, large scale optimisation problems constitute a very interesting field of research, because they appear in many real-world
problems (bio-computing, data mining, etc.). Thus, scalability becomes an essential requirement for modern optimisation algorithms.
In a previous work, we presented memetic algorithms based on local search chains. A local search chain embodies the idea that,
at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration
reached by that invocation. Using this technique, we presented a memetic algorithm, MA-CMA-Chains, which uses the CMA-ES algorithm
as its local search component. This proposal obtained very good results for continuous optimisation problems, in particular
medium-sized ones (up to dimension 50). Unfortunately, the scalability of CMA-ES is restricted by several costly operations,
so MA-CMA-Chains could not be successfully applied to large scale problems. In this article we study the scalability of
memetic algorithms based on local search chains, creating memetic algorithms with different local search methods and comparing
them, considering both the error values and the processing cost. We also propose a variation of the Solis-Wets method, which we call the Subgrouping Solis-Wets algorithm. This local search method explores, at each step of the algorithm, only a random subset of the variables. This
subset changes after a certain number of evaluations. Finally, we propose a new memetic algorithm based on local search chains
for high dimensionality, MA-SSW-Chains, which uses the Subgrouping Solis-Wets algorithm as its local search method. This algorithm is compared with MA-CMA-Chains and several reference algorithms, and
it is shown that the proposal is fairly scalable and statistically very competitive for high-dimensional problems.
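
The subgrouping step described in the abstract can be sketched as follows. This is a minimal illustration, assuming a classic Solis-Wets randomized hill climber (Gaussian perturbation, opposite-direction retry, step-size adaptation) restricted to a periodically re-sampled subgroup of variables; the parameter values (subset fraction, regrouping period, adaptation thresholds) are illustrative assumptions, not the authors' implementation:

```python
import random

def subgrouping_solis_wets(f, x, lower, upper, max_evals=2000,
                           subset_frac=0.2, regroup_every=200, rho=0.4):
    """Solis-Wets hill climber that perturbs only a random subgroup of
    variables; the subgroup is re-sampled every `regroup_every` evaluations."""
    n = len(x)
    best, fbest = list(x), f(x)
    succ = fail = 0
    subset = random.sample(range(n), max(1, int(subset_frac * n)))
    evals = 1
    while evals < max_evals:
        if evals % regroup_every == 0:      # change the explored subgroup
            subset = random.sample(range(n), max(1, int(subset_frac * n)))
        step = {i: random.gauss(0.0, rho) for i in subset}
        cand = list(best)
        for i, d in step.items():           # move only subgroup coordinates
            cand[i] = min(upper, max(lower, best[i] + d))
        fc = f(cand)
        evals += 1
        if fc < fbest:
            best, fbest = cand, fc
            succ, fail = succ + 1, 0
        else:                               # retry in the opposite direction
            cand = list(best)
            for i, d in step.items():
                cand[i] = min(upper, max(lower, best[i] - d))
            fc = f(cand)
            evals += 1
            if fc < fbest:
                best, fbest = cand, fc
                succ, fail = succ + 1, 0
            else:
                succ, fail = 0, fail + 1
        if succ > 5:
            rho, succ = 2.0 * rho, 0        # expand step on repeated success
        if fail > 3:
            rho, fail = 0.5 * rho, 0        # contract step on repeated failure
    return best, fbest
```

Because each step touches only a small subset of coordinates, the per-step cost stays low even at dimension 1,000, which is the scalability motivation given in the abstract.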

  • Content Type Journal Article
  • Pages 1-20
  • DOI 10.1007/s00500-010-0647-2
  • Authors
    • Daniel Molina, Department of Computer Languages and Systems, University of Cádiz, Cádiz, Spain
    • Manuel Lozano, Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
    • Ana M. Sánchez, Department of Software Engineering, University of Granada, Granada, Spain
    • Francisco Herrera, Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain

Path relinking for large-scale global optimization

Abstract  

In this paper we consider the problem of finding a global optimum of a multimodal function by applying path relinking. In particular,
we target unconstrained large-scale problems and compare two variants of this methodology: the static and the evolutionary
path relinking (EvoPR). Both are based on the strategy of creating trajectories of moves passing through high-quality solutions
in order to incorporate their attributes into the explored solutions. Computational comparisons are performed on a test-bed
of 19 previously reported global optimization functions with dimensions ranging from 50 to 1,000, totaling 95 instances.
Our results show that the EvoPR procedure is competitive with the state-of-the-art methods in terms of the average optimality
gap achieved. Statistical analysis is applied to draw significant conclusions.
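
The trajectory-building strategy can be illustrated with a minimal sketch. The attribute-at-a-time relinking below (copying one coordinate of the guiding solution per step and keeping the best intermediate point) is an assumed simplification for continuous variables, not the authors' exact procedure:

```python
def path_relinking(f, initiating, guiding):
    """Walk from an initiating solution toward a guiding solution by copying
    guiding attributes (coordinates) one at a time, evaluating each
    intermediate solution and keeping the best one found on the path."""
    current = list(initiating)
    best, fbest = list(current), f(current)
    for i in range(len(current)):      # incorporate one guiding attribute per step
        current[i] = guiding[i]
        fc = f(current)
        if fc < fbest:
            best, fbest = list(current), fc
    return best, fbest
```

The intermediate points along the path may combine attributes of both endpoints, so the best solution on the trajectory can improve on both, which is the rationale for relinking high-quality solutions.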

  • Content Type Journal Article
  • Pages 1-17
  • DOI 10.1007/s00500-010-0650-7
  • Authors
    • Abraham Duarte, Departamento de Ciencias de la Computación, Universidad Rey Juan Carlos, Madrid, Spain
    • Rafael Martí, Departamento de Estadística e Investigación Operativa, Universidad de Valencia, Valencia, Spain
    • Francisco Gortazar, Departamento de Ciencias de la Computación, Universidad Rey Juan Carlos, Madrid, Spain

An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms

Abstract  

The development cycle of high-performance optimization algorithms requires the algorithm designer to make several design decisions.
These decisions range from implementation details to the setting of parameter values for testing intermediate designs. Proper
parameter setting can be crucial for the effective assessment of algorithmic components because a bad parameter setting can
make a good algorithmic component perform poorly. This situation may lead the designer to discard promising components that
just happened to be tested with bad parameter settings. Automatic parameter tuning techniques are being used by practitioners
to obtain peak performance from already designed algorithms. However, automatic parameter tuning also plays a crucial role
during the development cycle of optimization algorithms. In this paper, we present a case study of a tuning-in-the-loop approach
for redesigning a particle swarm-based optimization algorithm for tackling large-scale continuous optimization problems. Rather
than just presenting the final algorithm, we describe the whole redesign process. Finally, we study the scalability behavior
of the final algorithm in the context of this special issue.

  • Content Type Journal Article
  • Pages 1-23
  • DOI 10.1007/s00500-010-0649-0
  • Authors
    • Marco A. Montes de Oca, IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium
    • Doğan Aydın, Department of Computer Engineering, Ege University, Izmir, Turkey
    • Thomas Stützle, IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium

Self-adaptive differential evolution algorithm using population size reduction and three strategies

Abstract  

Many real-world optimization problems are large-scale in nature. In order to solve these problems, an optimization algorithm
is required that is able to apply a global search regardless of the problems’ particularities. This paper proposes a self-adaptive
differential evolution algorithm, called jDElscop, for solving large-scale optimization problems with continuous variables.
The proposed algorithm employs three strategies and a population size reduction mechanism. The performance of the jDElscop
algorithm is evaluated on a set of benchmark problems provided for the Special Issue on the Scalability of Evolutionary Algorithms
and other Metaheuristics for Large Scale Continuous Optimization Problems. Non-parametric statistical procedures were performed
for multiple comparisons between the proposed algorithm and three well-known algorithms from the literature. The results show
that the jDElscop algorithm can deal with large-scale continuous optimization effectively. In most cases, it also behaves
significantly better than the other three algorithms used in the comparison.

  • Content Type Journal Article
  • Pages 1-18
  • DOI 10.1007/s00500-010-0644-5
  • Authors
    • Janez Brest, Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia
    • Mirjam Sepesy Maučec, Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova ulica 17, 2000 Maribor, Slovenia

Restart particle swarm optimization with velocity modulation: a scalability test

Abstract  

Large scale continuous optimization problems are more relevant in current benchmarks since they are more representative of
real-world problems (bioinformatics, data mining, etc.). Unfortunately, the performance of most of the available optimization
algorithms deteriorates rapidly as the dimensionality of the search space increases. In particular, particle swarm optimization
is a very simple and effective method for continuous optimization. Nevertheless, this algorithm usually performs poorly
on high-dimensional problems. In this work, we incorporate two new mechanisms into particle swarm optimization
with the aim of enhancing its scalability. First, a velocity modulation method is applied to the movement of particles in order to guide them within the region of interest. Second, a restarting mechanism avoids early convergence and redirects the particles to promising areas of the search space. Experiments are
carried out within the scope of this Special Issue to test scalability. The results obtained show that our proposal is scalable
on all functions of the benchmark used, as well as numerically very competitive with regard to the other optimizers compared.
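
A minimal sketch of one swarm update, assuming the standard inertia-weight PSO equations; the velocity modulation is approximated here by clamping each velocity component to a fraction of the search range, which is an assumption about the mechanism, not the authors' exact method:

```python
import random

def pso_step(positions, velocities, pbest, gbest, bounds,
             w=0.72, c1=1.49, c2=1.49):
    """One inertia-weight PSO update; velocities are clamped so particles
    cannot jump across the whole region of interest in a single move."""
    lo, hi = bounds
    vmax = 0.5 * (hi - lo)                 # modulation limit (assumed fraction)
    for p in range(len(positions)):
        for d in range(len(positions[p])):
            r1, r2 = random.random(), random.random()
            v = (w * velocities[p][d]
                 + c1 * r1 * (pbest[p][d] - positions[p][d])
                 + c2 * r2 * (gbest[d] - positions[p][d]))
            v = max(-vmax, min(vmax, v))   # keep the move inside the region
            velocities[p][d] = v
            positions[p][d] = max(lo, min(hi, positions[p][d] + v))
    return positions, velocities
```

A restart mechanism of the kind described would, in addition, reinitialize positions and velocities when the swarm stagnates; that part is omitted here for brevity.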

  • Content Type Journal Article
  • Pages 1-12
  • DOI 10.1007/s00500-010-0648-1
  • Authors
    • José García-Nieto, Departamento Lenguajes y Ciencias de la Computación, University of Málaga, Campus de Teatinos, E.T.S.I. Informática, 29071 Málaga, Spain
    • Enrique Alba, Departamento Lenguajes y Ciencias de la Computación, University of Málaga, Campus de Teatinos, E.T.S.I. Informática, 29071 Málaga, Spain

Self-adaptive differential evolution with multi-trajectory search for large-scale optimization

Abstract  

In this paper, self-adaptive differential evolution (DE) is enhanced by incorporating the JADE mutation strategy and hybridized
with a modified multi-trajectory search (MMTS) algorithm, yielding SaDE-MMTS, to solve large-scale continuous optimization problems.
The JADE mutation strategy, "DE/current-to-pbest", a variation of the classic "DE/current-to-best", is used for generating mutant vectors. After the mutation phase, binomial (uniform) crossover, exponential crossover,
and a no-crossover option are used to generate each pair of target and trial vectors. By utilizing the self-adaptation
in SaDE, both the trial vector generation strategies and their associated control parameter values are gradually self-adapted
by learning from their previous experience in generating promising solutions. Consequently, a suitable offspring generation
strategy along with its associated parameter settings is determined adaptively to match different phases of the search process.
MMTS is applied frequently to refine several diversely distributed solutions at different search stages, satisfying both
global and local search requirements. The initialization of step sizes is also defined by self-adaptation during every
MMTS step. The success rates of both SaDE and MMTS are determined and compared; consequently, future function evaluations
for both search algorithms are assigned proportionally to their recent performance. The proposed SaDE-MMTS is employed
to solve the 19 numerical optimization problems of the Soft Computing Special Issue on the scalability of evolutionary algorithms
for large-scale continuous optimization problems, and competitive results are presented.
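
The "DE/current-to-pbest" mutation has a standard form in the JADE literature: v_i = x_i + F·(x_pbest − x_i) + F·(x_r1 − x_r2), where x_pbest is drawn at random from the best 100p% individuals. A minimal sketch (the values of F and p are illustrative):

```python
import random

def current_to_pbest_mutation(pop, fitness, i, F=0.5, p=0.1):
    """JADE "DE/current-to-pbest" mutation for minimization:
    v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    with x_pbest drawn at random from the best 100p% of the population."""
    n = len(pop)
    top = sorted(range(n), key=lambda j: fitness[j])[:max(1, int(p * n))]
    pbest = pop[random.choice(top)]
    r1, r2 = random.sample([j for j in range(n) if j != i], 2)
    return [pop[i][d] + F * (pbest[d] - pop[i][d])
            + F * (pop[r1][d] - pop[r2][d])
            for d in range(len(pop[i]))]
```

Using a random member of the top fraction, rather than the single best vector, reduces the greediness of "DE/current-to-best" and helps preserve diversity in large populations.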

  • Content Type Journal Article
  • Pages 1-11
  • DOI 10.1007/s00500-010-0645-4
  • Authors
    • Shi-Zheng Zhao, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave., Singapore, 639798 Singapore
    • Ponnuthurai Nagaratnam Suganthan, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave., Singapore, 639798 Singapore
    • Swagatam Das, Department of Electronics and Telecommunication, Jadavpur University, Kolkata, India

Role differentiation and malleable mating for differential evolution: an analysis on large-scale optimisation

Abstract  

Differential Evolution is a simple yet powerful algorithm for continuous optimisation problems. Traditionally, its operators combine the information
of randomly chosen vectors of the population. However, four different roles are clearly identified from their formulations:
receiving, placing, leading, and correcting vectors. In this work, we propose two mechanisms that emphasise the proper selection of vectors for each role in the crossover
and mutation operations: (1) the role differentiation mechanism defines the attributes by which vectors are selected for each role; (2) malleable mating allows placing vectors to adapt their mating trends to maintain some similarity relations with the leading and correcting vectors.
In addition, we propose a new differential evolution approach that combines these two mechanisms. We have performed experiments
on a testbed composed of 19 benchmark functions at five dimensionalities, ranging from 50 to 1,000 variables. Results show that
both mechanisms allow differential evolution to improve its results statistically, and that our proposal becomes competitive
with regard to representative methods for continuous optimisation.
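
As a rough illustration (an assumed mapping, not the authors' formulation), the four roles can be read off the classic DE/rand/1 scheme: the mutant is built from a placing vector plus a scaled difference of a leading and a correcting vector, and the resulting trial competes with the receiving vector. The uniform random selection below is precisely what role differentiation would replace with role-specific criteria:

```python
import random

def rand1_with_roles(pop, F=0.5):
    """DE/rand/1 mutation written in role vocabulary:
    mutant = x_place + F*(x_lead - x_correct); the trial derived from it
    competes with the receiving vector pop[recv]. All four roles are
    filled uniformly at random here."""
    recv, place, lead, correct = random.sample(range(len(pop)), 4)
    mutant = [pop[place][d] + F * (pop[lead][d] - pop[correct][d])
              for d in range(len(pop[0]))]
    return recv, mutant
```

Role differentiation would, for instance, bias the leading role toward fitter vectors while leaving the receiving role uniform; malleable mating would further constrain the placing vector's choice by similarity to the others.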

  • Content Type Journal Article
  • Pages 1-18
  • DOI 10.1007/s00500-010-0641-8
  • Authors
    • Carlos García-Martínez, Department of Computing and Numerical Analysis, University of Córdoba, 14071 Córdoba, Spain
    • Francisco J. Rodríguez, Department of Computer Science and Artificial Intelligence (CITIC-UGR), University of Granada, 18071 Granada, Spain
    • Manuel Lozano, Department of Computer Science and Artificial Intelligence (CITIC-UGR), University of Granada, 18071 Granada, Spain

Shuffle or update parallel differential evolution for large-scale optimization

Abstract  

This paper proposes a novel algorithm for large-scale optimization problems. The proposed algorithm, namely shuffle or update
parallel differential evolution (SOUPDE), is a structured population algorithm characterized by sub-populations employing a
differential evolution logic. The sub-populations quickly exploit some areas of the decision space, thus drastically and quickly
reducing the fitness value in the highly multi-variate fitness landscape. New search logics are introduced into the sub-population
functioning in order to avoid a loss of diversity and thus premature convergence. Two simple mechanisms have been integrated
in order to pursue this aim. The first, namely shuffling, consists of randomly rearranging the individuals over the sub-populations.
The second consists of updating all the scale factors of the sub-populations. The proposed algorithm has been run on a set
of various test problems at five levels of dimensionality and then compared with three popular meta-heuristics. Rigorous
statistical and scalability analyses are reported in this article. Numerical results show that the proposed approach significantly
outperforms the meta-heuristics considered in the benchmark and performs well despite the high dimensionality of
the problems. The proposed algorithm balances exploitation and exploration well, achieving good performance across the
various dimensionality values and test problems in the benchmark. In addition, the scalability analysis shows that, compared
with a standard differential evolution, the proposed SOUPDE algorithm improves its relative performance as the dimensionality grows.
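
The shuffling mechanism can be sketched in a few lines. The pooling-and-redistribution below is an assumed reading of "randomly rearranging the individuals over the sub-populations"; sub-population sizes are preserved:

```python
import random

def shuffle_subpopulations(subpops):
    """Pool every individual from all sub-populations, permute the pool
    at random, and redistribute into sub-populations of the original sizes."""
    sizes = [len(sp) for sp in subpops]
    pool = [ind for sp in subpops for ind in sp]
    random.shuffle(pool)
    out, k = [], 0
    for s in sizes:
        out.append(pool[k:k + s])
        k += s
    return out
```

Mixing individuals across sub-populations in this way injects diversity into each island without introducing any new solutions, which matches the anti-premature-convergence intent described in the abstract.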

  • Content Type Journal Article
  • Pages 1-19
  • DOI 10.1007/s00500-010-0640-9
  • Authors
    • Matthieu Weber, Department of Mathematical Information Technology, University of Jyväskylä, P.O. Box 35, 40014 Agora, Finland
    • Ferrante Neri, Department of Mathematical Information Technology, University of Jyväskylä, P.O. Box 35, 40014 Agora, Finland
    • Ville Tirronen, Department of Mathematical Information Technology, University of Jyväskylä, P.O. Box 35, 40014 Agora, Finland

Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems

Abstract  

This paper presents a novel algorithm based on generalized opposition-based learning (GOBL) to improve the performance of
differential evolution (DE) in solving high-dimensional optimization problems efficiently. The proposed approach, namely GODE,
employs schemes similar to those of opposition-based DE (ODE) for opposition-based population initialization and generation jumping,
but with GOBL. Experiments are conducted to verify the performance of GODE on 19 high-dimensional problems with D = 50, 100, 200, 500, and 1,000. The results confirm that GODE outperforms classical DE, real-coded CHC (cross-generational elitist
selection, heterogeneous recombination, and cataclysmic mutation) and G-CMA-ES (restart covariance matrix adaptation evolution strategy)
on the majority of test problems.
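
Basic opposition-based learning maps a point x in [a, b] to its opposite a + b − x; the generalized variant, as assumed in the sketch below, scales the reflection by a random factor k, and out-of-range opposites are resampled inside the interval. The exact GOBL transformation and boundary handling used in GODE may differ:

```python
import random

def gobl_opposite(x, a, b, k=None):
    """Generalized opposition of vector x over interval [a, b]:
    x* = k*(a + b) - x (basic OBL is the special case k = 1).
    Components falling outside [a, b] are resampled uniformly."""
    if k is None:
        k = random.random()
    opp = [k * (a + b) - xi for xi in x]
    return [o if a <= o <= b else random.uniform(a, b) for o in opp]
```

In opposition-based initialization, the population and its opposites are evaluated together and the fitter half is kept; generation jumping applies the same idea periodically during the run.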

  • Content Type Journal Article
  • Pages 1-14
  • DOI 10.1007/s00500-010-0642-7
  • Authors
    • Hui Wang, State Key Laboratory of Software Engineering, Wuhan University, Wuhan, 430072 China
    • Zhijian Wu, State Key Laboratory of Software Engineering, Wuhan University, Wuhan, 430072 China
    • Shahryar Rahnamayan, Faculty of Engineering and Applied Science, University of Ontario Institute of Technology (UOIT), 2000 Simcoe Street North, Oshawa, ON L1H 7K4, Canada