Genetic programming for medical classification: a program simplification approach

Abstract  This paper describes a genetic programming (GP) approach to medical data classification problems. In this approach, the evolved
genetic programs are simplified online during the evolutionary process using algebraic simplification rules, algebraic equivalence
and prime techniques. The new simplification GP approach is examined and compared to the standard GP approach on two medical
data classification problems. The results suggest that the new simplification GP approach is not only more efficient than the basic GP system on these problems, with slightly better classification performance, but also significantly reduces the sizes
of evolved programs. Comparison with other methods including decision trees, naive Bayes, nearest neighbour, nearest centroid,
and neural networks suggests that the new GP approach achieved superior results to almost all of these methods on these problems.
The evolved genetic programs are also easier to interpret than the “hidden patterns” discovered by the other methods.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9059-9
  • Authors
    • Mengjie Zhang, Victoria University of Wellington School of Mathematics, Statistics and Computer Science P.O. Box 600 Wellington New Zealand
    • Phillip Wong, Victoria University of Wellington School of Mathematics, Statistics and Computer Science P.O. Box 600 Wellington New Zealand
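
The online simplification step can be illustrated with a small sketch. The Python fragment below is an assumption for illustration, not the authors' implementation: it applies a handful of algebraic rewrite rules (constant folding, identities such as x*1 = x and x+0 = x, and annihilation by zero) to a program tree encoded as nested tuples, which is roughly the kind of rewriting applied to evolved programs during a run.

```python
# Minimal sketch of algebraic simplification of GP program trees.
# A program is a nested tuple: ('+', left, right), ('*', left, right), a
# feature name such as 'x0', or a numeric constant. The representation and
# the rule set below are illustrative assumptions, not the paper's system.

def simplify(tree):
    """Recursively apply simple algebraic rewrite rules to a program tree."""
    if not isinstance(tree, tuple):          # leaf: constant or feature
        return tree
    op, a, b = tree
    a, b = simplify(a), simplify(b)          # simplify children first

    # Constant folding: both operands are numbers.
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        if op == '+':
            return a + b
        if op == '-':
            return a - b
        if op == '*':
            return a * b
        if op == '/':
            return a / b if b != 0 else 1.0  # protected division

    # Identity and annihilator rules.
    if op == '+' and a == 0:
        return b
    if op == '+' and b == 0:
        return a
    if op == '*' and (a == 0 or b == 0):
        return 0
    if op == '*' and a == 1:
        return b
    if op == '*' and b == 1:
        return a
    if op == '-' and b == 0:
        return a

    return (op, a, b)

# Example: (x0 * 1) + (2 * 3 - 6)  simplifies to  x0
print(simplify(('+', ('*', 'x0', 1), ('-', ('*', 2, 3), 6))))
```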

Learning classifier systems: then and now

Abstract  Broadly conceived as computational models of cognition and tools for modeling complex adaptive systems, later extended for
use in adaptive robotics, and today also applied to effective classification and data mining: what has happened to learning
classifier systems in the last decade? This paper addresses this question by examining the current state of learning classifier
system research.

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0003-3
  • Authors
    • Pier Luca Lanzi, Politecnico di Milano Dipartimento di Elettronica e Informazione P.za L. da Vinci 32 20133 Milan Italy

Editorial introduction

  • Content Type Journal Article
  • DOI 10.1007/s10710-007-9054-6
  • Authors
    • Wolfgang Banzhaf, Memorial University of Newfoundland Department of Computer Science St. John’s NL Canada A1B 3X5

Acknowledgment

  • Content Type Journal Article
  • DOI 10.1007/s10710-007-9055-5

Genetic fuzzy systems: taxonomy, current research trends and prospects

Abstract  The use of genetic algorithms to design fuzzy systems provides them with learning and adaptation capabilities; such systems are called genetic fuzzy systems (GFSs). This topic has attracted considerable attention in the Computational Intelligence community in the last few years. This paper gives an overview of the field of GFSs, organized in four parts: (a) a taxonomy proposal focused on the fuzzy system components involved in the genetic learning process; (b) a snapshot of the current status of GFSs, covering the pioneering contributions, the visibility of the field in the ISI Web of Science (including the most cited papers), and the milestones marked by books and special issues on the topic; (c) the current research lines, together with a discussion of critical considerations raised by recent developments; and (d) some potential future research directions.

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0001-5
  • Authors
    • Francisco Herrera, University of Granada Department of Computer Science and Artificial Intelligence 18071 Granada Spain
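
As a toy illustration of genetic learning of fuzzy system components (not a method taken from the survey), the sketch below uses a real-coded GA to tune the rule consequents of a small zero-order Takagi-Sugeno model with fixed triangular membership functions. The target function, operators and parameter settings are assumptions for illustration only.

```python
# Minimal sketch of a genetic fuzzy system: a real-coded GA tunes the
# consequent constants of a zero-order Takagi-Sugeno fuzzy model with three
# fixed triangular membership functions over [0, 1]. All settings and the
# target function (y = x**2) are illustrative assumptions.
import random

CENTERS = [0.0, 0.5, 1.0]                      # fixed membership function centres

def membership(x, c, width=0.5):
    """Triangular membership degree of x for a fuzzy set centred at c."""
    return max(0.0, 1.0 - abs(x - c) / width)

def predict(consequents, x):
    """Weighted-average (Takagi-Sugeno) output for input x."""
    weights = [membership(x, c) for c in CENTERS]
    return sum(w * q for w, q in zip(weights, consequents)) / sum(weights)

def fitness(consequents, samples):
    """Negative mean squared error against the target y = x**2."""
    return -sum((predict(consequents, x) - x ** 2) ** 2 for x in samples) / len(samples)

def evolve(pop_size=40, generations=100):
    samples = [i / 20 for i in range(21)]
    pop = [[random.uniform(-1, 2) for _ in CENTERS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, samples), reverse=True)
        elite = pop[: pop_size // 2]
        # Offspring: Gaussian mutation of randomly chosen elite parents.
        offspring = [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                     for _ in range(pop_size - len(elite))]
        pop = elite + offspring
    return max(pop, key=lambda ind: fitness(ind, samples))

if __name__ == "__main__":
    best = evolve()
    print("evolved rule consequents:", [round(q, 3) for q in best])
```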

Neuroevolution: from architectures to learning

Abstract  Artificial neural networks (ANNs) are applied to many real-world problems, ranging from pattern classification to robot control.
In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron
model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic
solution to these problems. New insights in both neuroscience and evolutionary biology have led to the development of increasingly
powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving
ANNs with a special focus on recent advances in the synthesis of learning architectures.

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0002-4
  • Authors
    • Dario Floreano, Ecole Polytechnique Fédérale de Lausanne Laboratory of Intelligent Systems Station 11 1015 Lausanne Switzerland
    • Peter Dürr, Ecole Polytechnique Fédérale de Lausanne Laboratory of Intelligent Systems Station 11 1015 Lausanne Switzerland
    • Claudio Mattiussi, Ecole Polytechnique Fédérale de Lausanne Laboratory of Intelligent Systems Station 11 1015 Lausanne Switzerland
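
To make the weight-evolution end of the spectrum concrete, the sketch below evolves the weights of a fixed 2-2-1 feedforward network on XOR with a simple (mu + lambda)-style loop. The topology, variation operator and settings are illustrative assumptions; the survey itself covers far richer methods that also evolve architectures and learning rules.

```python
# Minimal sketch of weight-only neuroevolution: a (mu + lambda)-style loop
# evolves the weights of a fixed 2-2-1 feedforward network on XOR. The
# topology, operators and settings are illustrative assumptions.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9   # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    """Sum of squared errors over the four XOR cases."""
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def neuroevolution(mu=10, lam=40, generations=300, sigma=0.3):
    parents = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)] for _ in range(mu)]
    for _ in range(generations):
        # Each offspring is a Gaussian perturbation of a random parent.
        offspring = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                     for _ in range(lam)]
        parents = sorted(parents + offspring, key=error)[:mu]
    return parents[0]

if __name__ == "__main__":
    best = neuroevolution()
    for x, y in XOR:
        print(x, "->", round(forward(best, x), 2), "target", y)
```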

An interdisciplinary perspective on artificial immune systems

Abstract  This review paper attempts to position the area of Artificial Immune Systems (AIS) in a broader context of interdisciplinary
research. We review AIS based on an established conceptual framework that encapsulates mathematical and computational modelling
of immunology, abstraction and then development of engineered systems. We argue that AIS are much more than engineered systems
inspired by the immune system and that there is a great deal for both immunology and engineering to learn from each other
through working in an interdisciplinary manner.

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0004-2
  • Authors
    • J. Timmis, University of York Department of Computer Science and Department of Electronics Heslington, York YO10 5DD UK
    • P. Andrews, University of York Department of Computer Science Heslington, York YO10 5DD UK
    • N. Owens, University of York Department of Electronics Heslington, York YO10 5DD UK
    • E. Clark, University of York Department of Computer Science Heslington, York YO10 5DD UK

Dedication: Dr. Lawrence J. Fogel (1928–2007)

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0006-0
  • Authors
    • Larry Bull, University of the West of England Frenchay Bristol UK

Foreword

  • Content Type Journal Article
  • DOI 10.1007/s12065-007-0005-1
  • Authors
    • Larry Bull, University of the West of England Frenchay Bristol UK

Sporadic model building for efficiency enhancement of the hierarchical BOA

Abstract  Efficiency enhancement techniques—such as parallelization and hybridization—are among the most important ingredients of practical
applications of genetic and evolutionary algorithms and that is why this research area represents an important niche of evolutionary
computation. This paper describes and analyzes sporadic model building, which can be used to enhance the efficiency of the hierarchical Bayesian optimization algorithm (hBOA) and other estimation
of distribution algorithms (EDAs) that use complex multivariate probabilistic models. With sporadic model building, the structure
of the probabilistic model is updated once in every few iterations (generations), whereas in the remaining iterations, only
model parameters (conditional and marginal probabilities) are updated. Since the time complexity of updating model parameters
is much lower than the time complexity of learning the model structure, sporadic model building decreases the overall time
complexity of model building. The paper shows that for boundedly difficult nearly decomposable and hierarchical optimization
problems, sporadic model building leads to a significant model-building speedup, which decreases the asymptotic time complexity of model building in hBOA by a factor of $\Theta(n^{0.26})$ to $\Theta(n^{0.5})$, where n is the problem size. On the other hand, sporadic model building also increases the number of evaluations until convergence;
nonetheless, if model building is the bottleneck, the evaluation slowdown is insignificant compared to the gains in the asymptotic complexity of model building. The paper also presents a dimensional
model to provide a heuristic for scaling the structure-building period, which is the only parameter of the proposed sporadic
model-building approach. The paper then tests the proposed method and the rule for setting the structure-building period on
the problem of finding ground states of 2D and 3D Ising spin glasses.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-007-9052-8
  • Authors
    • Martin Pelikan, University of Missouri in St. Louis Missouri Estimation of Distribution Algorithms Laboratory, 321 CCB, Department of Mathematics and Computer Science One University Blvd. St. Louis MO 63121 USA
    • Kumara Sastry, University of Illinois at Urbana-Champaign Illinois Genetic Algorithms Laboratory, 117 TB, Department of Industrial and Enterprise Systems Engineering 104 S. Mathews Ave. Urbana IL 61801 USA
    • David E. Goldberg, University of Illinois at Urbana-Champaign Illinois Genetic Algorithms Laboratory, 117 TB, Department of Industrial and Enterprise Systems Engineering 104 S. Mathews Ave. Urbana IL 61801 USA
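
The sporadic schedule itself is easy to state in code. The toy sketch below applies it to a deliberately simplified pairwise EDA rather than to hBOA: the model structure (here, a greedy pairing of variables by mutual information) is relearned only every T_SB generations, while the probability tables are refitted every generation. The problem, the model class and all parameter values are illustrative assumptions, not the paper's setup.

```python
# Toy sketch of sporadic model building in a simplified pairwise EDA. Only
# the schedule (structure every T_SB generations, parameters every
# generation) reflects the paper; everything else is an illustrative
# assumption, not hBOA itself.
import math
import random
from collections import Counter

N_BITS = 20      # problem size: 10 concatenated 2-bit trap subproblems
POP = 200        # population size
T_SB = 5         # structure-building period (generations between structure updates)

def trap2(bits):
    """Deceptive 2-bit trap: optimum at 11, deceptive attractor at 00."""
    total = 0
    for i in range(0, N_BITS, 2):
        u = bits[i] + bits[i + 1]
        total += 2 if u == 2 else 1 - u
    return total

def mutual_information(selected, i, j):
    """Empirical mutual information between bit positions i and j."""
    n = len(selected)
    joint = Counter((ind[i], ind[j]) for ind in selected)
    pi = Counter(ind[i] for ind in selected)
    pj = Counter(ind[j] for ind in selected)
    return sum((c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in joint.items())

def learn_structure(selected):
    """Expensive step: greedily pair up positions with high mutual information."""
    unpaired, pairs = list(range(N_BITS)), []
    while unpaired:
        i = unpaired.pop(0)
        j = max(unpaired, key=lambda k: mutual_information(selected, i, k))
        unpaired.remove(j)
        pairs.append((i, j))
    return pairs

def estimate_parameters(selected, pairs):
    """Cheap step: refit a smoothed joint frequency table for each pair."""
    n = len(selected)
    return [{ab: (sum(1 for ind in selected if (ind[i], ind[j]) == ab) + 1) / (n + 4)
             for ab in [(0, 0), (0, 1), (1, 0), (1, 1)]}
            for i, j in pairs]

def sample(pairs, tables):
    """Draw one new individual from the current pairwise model."""
    ind = [0] * N_BITS
    for (i, j), table in zip(pairs, tables):
        ind[i], ind[j] = random.choices(list(table), weights=list(table.values()))[0]
    return ind

def sporadic_eda(generations=30):
    population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
    pairs = None
    for gen in range(generations):
        selected = sorted(population, key=trap2, reverse=True)[:POP // 2]
        if gen % T_SB == 0:                    # sporadic model building: relearn the
            pairs = learn_structure(selected)  # structure only every T_SB generations
        tables = estimate_parameters(selected, pairs)  # parameters every generation
        population = [sample(pairs, tables) for _ in range(POP)]
    best = max(population, key=trap2)
    return best, trap2(best)

if __name__ == "__main__":
    print("best fitness found (optimum is 20):", sporadic_eda()[1])
```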
