Abstract

Efficiency enhancement techniques, such as parallelization and hybridization, are among the most important ingredients of practical applications of genetic and evolutionary algorithms, which is why this research area represents an important niche of evolutionary computation. This paper describes and analyzes
sporadic model building, which can be used to enhance the efficiency of the hierarchical Bayesian optimization algorithm (hBOA) and other estimation
of distribution algorithms (EDAs) that use complex multivariate probabilistic models. With sporadic model building, the structure
of the probabilistic model is updated once in every few iterations (generations), whereas in the remaining iterations, only
model parameters (conditional and marginal probabilities) are updated. Since the time complexity of updating model parameters
is much lower than the time complexity of learning the model structure, sporadic model building decreases the overall time
complexity of model building. The paper shows that for boundedly difficult nearly decomposable and hierarchical optimization
problems, sporadic model building leads to a significant
model-building speedup, which decreases the asymptotic time complexity of model building in hBOA by a factor of Θ(n^0.26) to Θ(n^0.65), where n is the problem size. On the other hand, sporadic model building also increases the number of evaluations until convergence;
nonetheless, if model building is the bottleneck, the evaluation slowdown is insignificant compared to the gains in the asymptotic complexity of model building. The paper also presents a dimensional
model to provide a heuristic for scaling the structure-building period, which is the only parameter of the proposed sporadic
model-building approach. The paper then tests the proposed method and the rule for setting the structure-building period on
the problem of finding ground states of 2D and 3D Ising spin glasses.
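The core loop described in the abstract can be sketched as follows. This is a hypothetical toy illustration, not the hBOA implementation: the expensive structure-learning step is replaced by a placeholder and the "parameters" are simple marginal frequencies, but the sporadic schedule itself (structure relearned only once every t_s generations, parameters refit every generation) follows the abstract's description.

```python
# Sketch of a sporadic model-building loop for an EDA on bitstrings.
# Assumed/illustrative names: learn_structure, fit_parameters, sporadic_eda
# are not from the paper; t_s denotes the structure-building period.

import random

def learn_structure(population):
    # Placeholder for expensive structure learning (in hBOA this would be
    # greedy Bayesian-network construction); returns a trivial "structure".
    return list(range(len(population[0])))

def fit_parameters(structure, population):
    # Cheap parameter update: marginal frequency of 1s at each position.
    n = len(population)
    return [sum(ind[i] for ind in population) / n for i in structure]

def sporadic_eda(n_bits=8, pop_size=20, generations=12, t_s=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    structure = None
    structure_builds = 0
    for gen in range(generations):
        # Truncation selection on a toy objective (number of 1s).
        selected = sorted(pop, key=sum, reverse=True)[:pop_size // 2]
        if gen % t_s == 0:
            # Sporadic step: rebuild the model structure only once
            # every t_s generations.
            structure = learn_structure(selected)
            structure_builds += 1
        # Parameters are refit in every generation.
        params = fit_parameters(structure, selected)
        pop = [[1 if rng.random() < p else 0 for p in params]
               for _ in range(pop_size)]
    return structure_builds

# With generations=12 and t_s=4, the structure is learned only 3 times
# instead of 12, while parameters are still updated every generation.
print(sporadic_eda())  # → 3
```

Since parameter fitting is much cheaper than structure learning, this schedule cuts the dominant model-building cost roughly by a factor of t_s, at the price of slightly less accurate models between structure rebuilds.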