New issue of SIGEVOlution Newsletter is online

SIGEVOlution Volume 3 Issue 1 is now available online at http://www.sigevolution.org. SIGEVOlution is the newsletter of ACM SIGEVO, the ACM Special Interest Group for Genetic and Evolutionary Computation. See the announcement at IlliGAL Blogging.

Evolutionary Computation in Conceptual Clustering and Tagging

Abstract: Web 2.0 technologies provide users with collaborative workspaces over the Internet. For example, Wikipedia is an open encyclopedia whose articles anyone can edit, YouTube provides spaces where users can share videos and annotate them, and Flickr lets users post images and collaborate by categorizing them with tags. These contents are created by users' voluntary activity, which is one of the defining features of Web 2.0. Some Web 2.0 services carry well-organized text-based content: Wikipedia's articles, for instance, are well structured because volunteers keep editing each article to refine it. Other services, such as YouTube and Flickr, only carry short sentences or a handful of words as annotations. Moreover, these annotations usually differ from user to user because the participants are not collaborating, so the annotations convey little meaning about the videos and pictures they describe. A system that converts such annotations into meaningful text is therefore useful, since writing text by hand requires resources while large numbers of annotations are freely available. The purpose of this thesis is the development of a text builder based on Web 2.0 technology, genetic algorithms, and natural language processing. A human-interaction system is developed for automatically building meaningful tags from annotations. The system consists of two main parts: a conceptual clustering component based on natural language processing and a sentence-creating component based on genetic algorithms. The conceptual clustering component decomposes annotations into phrases carrying their main ideas; the sentence-creating component then builds several candidate tags from these phrases; finally, the created tags are evaluated by users to find better descriptions of the videos and pictures. Participants are expected to collaborate through these evaluations to create more organized and meaningful tags. The developed system succeeds in creating tags from unstructured annotations through user-machine interaction, and it is applicable to other systems whose only input is participants' comments. Because the tags created by this system are meaningful but short, a system that builds longer text as full sentences is left as future work.
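
The GA side of such an interactive tagging loop can be pictured with a small sketch. The snippet below is not the thesis system; it assumes a hypothetical list of phrases produced by the clustering step and uses a stand-in scoring function where the real system would collect user ratings.

    # Minimal sketch of the GA half of such a system (not the thesis code):
    # candidate "tags" are short combinations of phrases extracted by the
    # clustering step, and the fitness would come from user ratings; here a
    # stand-in scoring function is used so the loop runs end to end.
    import random

    PHRASES = ["sunset over harbour", "time lapse", "amateur footage", "winter storm"]  # hypothetical clustering output

    def random_tag():
        k = random.randint(1, 3)
        return tuple(random.sample(PHRASES, k))

    def fitness(tag):
        # Placeholder for user evaluation: prefer tags that cover distinct phrases but stay short.
        return len(set(tag)) / (1 + 0.3 * len(" ".join(tag)))

    def crossover(a, b):
        pool = list(dict.fromkeys(a + b))          # merge parents, drop duplicates
        k = random.randint(1, min(3, len(pool)))
        return tuple(random.sample(pool, k))

    def mutate(tag, rate=0.2):
        return random_tag() if random.random() < rate else tag

    pop = [random_tag() for _ in range(20)]
    for gen in range(30):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        pop = parents + children

    print("best tag:", " / ".join(max(pop, key=fitness)))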

An Analysis of Matching in Learning Classifier Systems

Abstract: We investigate rule matching in learning classifier systems for problems involving binary and real inputs. We consider three rule encodings: the widely used character-based encoding, a specificity-based encoding, and a binary encoding used in Alecsys. We compare the performance of the three algorithms both on matching alone and on typical test problems. The results on matching alone show that population generality influences the string-based matching algorithms in opposite ways: character-based encoding becomes slower and slower as generality increases, while specificity-based encoding becomes faster and faster. The results on typical test problems show that the specificity-based representation can halve the time required for matching, but also that binary encoding is about ten times faster on the most difficult problems. Moreover, we extend specificity-based encoding to real inputs and propose an algorithm that can halve the time required for matching real inputs using an interval-based representation.
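
As a rough illustration of why the encoding matters, the sketch below contrasts position-by-position matching of a ternary character condition with a two-bitmask scheme decided by integer AND operations. It is only a sketch in the spirit of the encodings discussed, not the paper's implementations or Alecsys's actual representation.

    # Character-based matching versus a bit-mask style encoding (illustrative only).
    def match_chars(condition, inputs):
        """Character-based: condition is e.g. '1#0#', inputs a bit string."""
        return all(c == '#' or c == x for c, x in zip(condition, inputs))

    def to_masks(condition):
        """Mask-style encoding: one mask of required 1s, one of required 0s."""
        ones = zeros = 0
        for i, c in enumerate(condition):
            if c == '1':
                ones |= 1 << i
            elif c == '0':
                zeros |= 1 << i
        return ones, zeros

    def match_masks(masks, x):
        """x is the input packed into an int; two AND operations decide the match."""
        ones, zeros = masks
        return (x & ones) == ones and (x & zeros) == 0

    cond = "1#0#"
    inp = "1101"
    x = int(inp[::-1], 2)                      # pack input bits so bit i = position i
    print(match_chars(cond, inp))              # True
    print(match_masks(to_masks(cond), x))      # True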

Investigating Restricted Tournament Replacement in ECGA for Non-Stationary Environments

Abstract: This paper investigates the incorporation of restricted tournament replacement (RTR) in the extended compact genetic algorithm (ECGA) for solving problems with non-stationary optima. RTR is a simple yet efficient niching method used to maintain diversity in a population of individuals. While the original version of RTR uses Hamming distance to quantify similarity between individuals, we propose an alternative substructural distance to enforce the niches. The ECGA that restarts the search after a change of environment is compared with the approach of maintaining diversity, using both versions of RTR. Results on several dynamic decomposable test problems demonstrate the usefulness of maintaining diversity throughout the run over the approach of restarting the search from scratch at each change. Furthermore, by maintaining diversity no additional mechanisms are required to detect the change of environment, which is typically a problem-dependent and non-trivial task.
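
For readers unfamiliar with RTR, the sketch below shows the basic mechanism with Hamming distance, as in the original RTR; the paper's substructural distance is not reproduced, and the window size and OneMax-style test problem are arbitrary choices for the demonstration.

    # Minimal sketch of restricted tournament replacement (RTR) with Hamming distance:
    # each offspring is compared against the most similar member of a small random
    # window and replaces it only if the offspring's fitness is higher.
    import random

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def rtr_insert(population, fitness, offspring, window_size=8):
        """Replace the offspring's nearest neighbour in a random window, if it is worse."""
        window = random.sample(range(len(population)), window_size)
        nearest = min(window, key=lambda i: hamming(population[i], offspring))
        if fitness(offspring) > fitness(population[nearest]):
            population[nearest] = offspring

    # Tiny demonstration on OneMax-style bit strings.
    random.seed(0)
    fitness = sum
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(32)]
    for _ in range(500):
        child = [random.randint(0, 1) for _ in range(20)]
        rtr_insert(pop, fitness, child)
    print("best fitness:", max(map(fitness, pop)))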

Self-Adaptive Mutation in XCSF

Abstract: Recent advances in XCS technology have shown that self-adaptive mutation can be highly useful to speed up the evolutionary progress in XCS. Moreover, recent publications have shown that XCS can also be successfully applied to challenging real-valued domains including data mining, function approximation, and clustering. In this paper, we combine these two advances and investigate self-adaptive mutation in the XCS system for function approximation with hyperellipsoidal condition structures, referred to as XCSF in this paper. It has been shown that XCSF solves function approximation problems with an accuracy, noise robustness, and generalization capability comparable to other statistical machine learning techniques, and that XCSF outperforms simple clustering techniques to which linear approximations are added. This paper shows that the right type of self-adaptive mutation can further improve XCSF's performance, solving problems in a more parameter-independent and more reliable way. We analyze various types of self-adaptive mutation and show that XCSF with self-adaptive mutation ranges, differentiated for the separate classifier condition values, yields the most robust performance results. Future work may further investigate the properties of the self-adaptive values and may integrate advanced self-adaptation techniques.
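
XCSF's actual operators are not given in the abstract, so the sketch below only illustrates generic ES-style log-normal self-adaptation, where each individual carries and mutates its own mutation width; the dimensionality and learning rate are arbitrary assumptions.

    # Generic sketch of self-adaptive mutation in the ES style (not XCSF itself):
    # each individual carries its own mutation width sigma, which is mutated
    # log-normally before being applied to the object parameters.
    import math
    import random

    TAU = 1.0 / math.sqrt(10)   # learning rate for an assumed 10-dimensional genome

    def self_adaptive_mutate(genome, sigma):
        sigma = sigma * math.exp(TAU * random.gauss(0.0, 1.0))   # adapt the step size first
        sigma = max(sigma, 1e-6)
        genome = [g + sigma * random.gauss(0.0, 1.0) for g in genome]
        return genome, sigma

    random.seed(1)
    genome, sigma = [random.uniform(-1, 1) for _ in range(10)], 0.5
    for _ in range(5):
        genome, sigma = self_adaptive_mutate(genome, sigma)
        print(f"sigma = {sigma:.3f}")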

An eigen analysis of the GP community

Abstract  The coauthorship and coeditorship relations as recorded in the genetic programming bibliography provide a quantitative view of the GP community. Eigen analysis is used to find the principal components of the community. It shows that the major eigenvalues and eigenvectors are responsible for 70% of the connection graph. Top eigen authors are given.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9060-3
  • Authors
    • W. B. Langdon, University of Essex, Departments of Biological and Mathematical Sciences, Colchester, UK
    • R. Poli, University of Essex, Department of Computing and Electronic Systems, Colchester, UK
    • W. Banzhaf, Memorial University of Newfoundland, Department of Computer Science, St. John's, Canada
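
The kind of analysis described can be reproduced in a few lines on toy data (the GP bibliography itself is not used here): build a symmetric coauthorship matrix, take its eigendecomposition, and read off the dominant eigenvalues and the authors loading most heavily on the leading eigenvector. The "fraction of the spectrum" printed below is only a rough analogue of the 70% figure in the abstract.

    # Toy eigen analysis of a coauthorship graph (invented data, not the GP bibliography).
    import numpy as np

    papers = [("A", "B"), ("A", "C"), ("B", "C", "D"), ("D", "E"), ("A", "B", "C")]
    authors = sorted({a for p in papers for a in p})
    index = {a: i for i, a in enumerate(authors)}

    M = np.zeros((len(authors), len(authors)))
    for paper in papers:
        for a in paper:
            for b in paper:
                if a != b:
                    M[index[a], index[b]] += 1      # count shared papers

    eigenvalues, eigenvectors = np.linalg.eigh(M)   # symmetric matrix, so eigh
    order = np.argsort(np.abs(eigenvalues))[::-1]
    explained = np.abs(eigenvalues[order[:2]]).sum() / np.abs(eigenvalues).sum()
    print(f"top two eigenvalues account for {explained:.0%} of the spectrum")
    leading = eigenvectors[:, order[0]]
    print("leading 'eigen authors':",
          [authors[i] for i in np.argsort(np.abs(leading))[::-1][:3]])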

Cyclic metamorphic memory for cellular bio-inspired electronic systems

Abstract  In nature, the DNA contained in each cell of an organism is in fact a memory map that, through the transcription of its genes, describes the unique characteristics of the individual. Similarly, in an artificial embryonic cell used to construct electronic systems, a memory map and its relevant gene likewise describe and determine the functionality of each cell and, collectively, of the entire system. This paper proposes a new variable-size memory map based on a novel and efficient gene selection algorithm that no longer uses the hitherto common address-decoding approach to access a fixed memory space. Instead, it applies the principle of cyclic metamorphic gene selection over the artificial DNA memory. A further benefit of the approach is that the functionality of the system can easily be altered through genetic operators or by varying the memory-space environment for enhanced behaviour. It is therefore suitable for the implementation of GA processes.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9056-z
  • Authors
    • M. Samie, Shiraz University, Department of EE Engineering, Shiraz, Iran
    • E. Farjah, Shiraz University, Department of EE Engineering, Shiraz, Iran
    • G. Dragffy, University of the West of England, Bristol, UK
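
The contrast between address decoding and cyclic gene selection can be caricatured in software, even though the paper describes a hardware design; the gene names and array size below are invented purely for the illustration.

    # Loose software illustration of the idea (not the paper's hardware design):
    # instead of decoding a cell's coordinates into a fixed memory address,
    # each cell selects its gene by cyclically stepping through a variable-size
    # gene memory, so the memory can grow or shrink freely.
    def address_decoded_gene(genome, x, y, width):
        """Classical approach: fixed-size map, coordinates decode to an address."""
        return genome[y * width + x]

    def cyclic_gene(genome, x, y, width):
        """Cyclic selection: the pointer simply wraps around the current genome."""
        return genome[(y * width + x) % len(genome)]

    genome = ["AND", "OR", "XOR", "NOT", "ADD"]          # hypothetical 5-gene memory
    width = 4                                            # hypothetical 4 x 3 cellular array
    for y in range(3):
        print([cyclic_gene(genome, x, y, width) for x in range(width)])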

Analysis of mass spectrometry data of cerebral stroke samples: an evolutionary computation approach to resolve and quantify peptide peaks

Abstract  A preliminary investigation of cerebral stroke samples injected into a mass spectrometer is performed from an evolutionary computation perspective. The detection and resolution of peptide peaks is pursued for the purpose of automatically and accurately determining unlabeled peptide quantities. A theoretical peptide peak model is proposed, and a series of experiments is then pursued (most within a distributed computing environment) along with a data preprocessing strategy that includes (i) a deisotoping step, followed by (ii) a peak-picking procedure, followed by (iii) a series of evolutionary computation experiments investigating their capability to achieve the aforementioned goal. Results from four different genetic algorithms (GA) and one differential evolution (DE) algorithm are reported with respect to their ability to find solutions that fit within the framework of the presented theoretical peptide peak model. Both unconstrained and constrained (as determined by a coarse-grained preprocessing stage) solution-space experiments are performed for both types of evolutionary algorithms. Good preliminary results are obtained.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9057-y
  • Authors
    • Julio J. Valdés, National Research Council Canada, Institute for Information Technology, Bldg. M-50, 1200 Montreal Rd., Ottawa, ON, Canada K1A 0R6
    • Alan J. Barton, National Research Council Canada, Institute for Information Technology, Bldg. M-50, 1200 Montreal Rd., Ottawa, ON, Canada K1A 0R6
    • Arsalan S. Haqqani, National Research Council Canada, Institute for Biological Sciences, 100 Sussex Dr., Ottawa, ON, Canada K1A 0R6
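
The optimisation step can be sketched with a plain differential evolution loop fitting a simple Gaussian, which stands in for the paper's theoretical peptide peak model (not given in the abstract); the synthetic data, bounds, and DE parameters are arbitrary assumptions.

    # Differential evolution fitting a stand-in Gaussian peak model to synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    mz = np.linspace(0, 10, 200)
    def peak(p):                                  # p = (amplitude, centre, width)
        return p[0] * np.exp(-0.5 * ((mz - p[1]) / p[2]) ** 2)
    observed = peak((3.0, 5.0, 0.4)) + rng.normal(0, 0.05, mz.size)

    def error(p):
        return np.sum((peak(p) - observed) ** 2)

    # Basic rand/1/bin differential evolution with F = 0.7, CR = 0.9.
    lo, hi = np.array([0.1, 0.0, 0.05]), np.array([10.0, 10.0, 2.0])
    pop = rng.uniform(lo, hi, size=(30, 3))
    for _ in range(200):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.where(rng.random(3) < 0.9, a + 0.7 * (b - c), pop[i])
            trial = np.clip(trial, lo, hi)
            if error(trial) < error(pop[i]):
                pop[i] = trial

    best = min(pop, key=error)
    print("fitted (amplitude, centre, width):", np.round(best, 2))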

Introduction to Evolvable Hardware: A Practical Guide for Designing Self-Adaptive Systems

  • Content Type Journal Article
  • Category Book Review
  • DOI 10.1007/s10710-008-9058-x
  • Authors
    • Antonio Mesquita, Programa de Engenharia Eletrica, COPPE/UFRJ, CP. 68504, Centro de Tecnologia, Ilha do Fundao, CEP 21945-970, Rio de Janeiro, RJ, Brazil

Genetic programming for medical classification: a program simplification approach

Abstract  This paper describes a genetic programming (GP) approach to medical data classification problems. In this approach, the evolved
genetic programs are simplified online during the evolutionary process using algebraic simplification rules, algebraic equivalence
and prime techniques. The new simplification GP approach is examined and compared to the standard GP approach on two medical
data classification problems. The results suggest that the new simplification GP approach can not only be more efficient with
slightly better classification performance than the basic GP system on these problems, but also significantly reduce the sizes
of evolved programs. Comparison with other methods including decision trees, naive Bayes, nearest neighbour, nearest centroid,
and neural networks suggests that the new GP approach achieved superior results to almost all of these methods on these problems.
The evolved genetic programs are also easier to interpret than the “hidden patterns” discovered by the other methods.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9059-9
  • Authors
    • Mengjie Zhang, Victoria University of Wellington, School of Mathematics, Statistics and Computer Science, P.O. Box 600, Wellington, New Zealand
    • Phillip Wong, Victoria University of Wellington, School of Mathematics, Statistics and Computer Science, P.O. Box 600, Wellington, New Zealand
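
A flavour of online algebraic simplification can be given in a few lines. The sketch below applies a handful of rewrite rules and constant folding to programs represented as nested tuples; it does not reproduce the paper's full rule set or its algebraic equivalence and prime techniques.

    # Bottom-up algebraic simplification of a toy GP program (illustrative sketch).
    def simplify(node):
        if not isinstance(node, tuple):
            return node
        op, a, b = node[0], simplify(node[1]), simplify(node[2])
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return {'+': a + b, '-': a - b, '*': a * b}[op]      # constant folding
        if op == '*':
            if a == 0 or b == 0:
                return 0
            if a == 1:
                return b
            if b == 1:
                return a
        if op == '+':
            if a == 0:
                return b
            if b == 0:
                return a
        if op == '-' and b == 0:
            return a
        return (op, a, b)

    program = ('+', ('*', 'x', ('-', 2, 1)), ('*', 'y', 0))
    print(simplify(program))      # prints 'x'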
