IWLCS 2009 review

By Will Browne, Jan Drugowitsch and Jaume Bacardit

The 12th International Workshop on Learning Classifier Systems (LCS) successfully took place on July 9th, 2009 in Montreal, Canada as part of GECCO 2009. Its ‘success’ was measured in terms of the number of attendees (several times the number of presenters), the quality of papers, the diversity of topics, the originality of ideas, active discussions and a convivial atmosphere.

This year’s workshop was deliberately more of a workshop than a mini-conference, for a few reasons. A major factor was that LCS papers already have an excellent home in the Genetics Based Machine Learning (GBML) track of GECCO, with reviewers amenable to the topics and quality of the research. The workshop therefore sought to encourage discussion around the subject of its four sessions, both to introduce attendees to the field and to further in-depth understanding. Efficiency emerged as a very hot topic in both the workshop and the GBML track, and the related discussion continued long past the scheduled time.

Other topics of great interest included cooperation within sub-populations, coevolution, application areas, platforms for LCS (CUDA, robotics), advancements/understanding (e.g. XCSF) and model adaptation. The program, including the titles of the talks, can be found at LCS & GBML Central (http://www.lcs-gbml.ncsa.uiuc.edu/), which is becoming the central home for LCS on the Web. Researchers were (and are) encouraged to post their bios, code, benchmark problems, benchmark results, technical reports, publishable papers and thoughts/queries on the field. Importantly, LCS & GBML Central acts as an aggregator, so the latest work on academic home pages can be piped in.

The discussion topics were:

  1. XCSF Current Capabilities and Challenges
  2. Efficiency
  3. LCS’ suitability for Cognitive Robotics

Pier Luca Lanzi started off the workshop by presenting work on extending Martin Butz’s theory of the different XCS genetic pressures from ternary representations to the interval-based real-valued representations used in XCSF. Most of these pressures could not be derived in closed form, but the approximations used were shown to match empirical observations well. Beyond the insight into how these pressures depend on the settings of various system parameters, the point Pier Luca particularly highlighted was that interval-based representations require some idea of the distribution of specificity/generality of the rules in the population. This stands in contrast to Martin Butz’s work, where the average specificity usually determined the algorithm’s behaviour.

Afterwards, Martin Butz led an insightful discussion on the theory and application of XCSF. The issues identified included the schema challenge (too general an initial population), the coverage challenge (too specific an initial population), identification of manifolds and sub-manifolds to map the problem space to the solution space, context-dependent mappings, fitness gradients, and setting the r0 value. ‘Black art’ (empirically based) guidelines, such as the population size being 10 times the number of anticipated niches, were complemented by theoretical limits and bounds. Reassurance was given that parameter setting should not be an obstacle to practical application, with robust ranges available, including high learning rates when the recursive least squares (RLS) algorithm is employed for rule prediction learning.

The second session was mostly dedicated to efficiency issues. Matching is the main CPU bottleneck in LCS, and three improvements were presented. Pier Luca Lanzi discussed the use of GPUs (graphics processing units, via the CUDA architecture) for hardware speedup, but noted that an understanding of the match routine’s function was necessary to achieve the best performance and provide a fair comparison. Drew Mellor outlined a tree-based approach to avoid redundant matching operations, which was further illuminated in his track presentation. Similarly, Tim Kovacs outlined how knowledge of match certainty (don’t-care symbols provide less match certainty than specific bits) can direct the efficiency of the matching process. Jaume Bacardit presented a summary of recently proposed alternatives for boosting matching efficiency, as well as a series of open questions about these methods.
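To illustrate why matching dominates LCS runtime, a naive ternary matcher must compare every rule condition in the population against every input, as in the minimal sketch below (the function names and the example population are illustrative, not from any of the presented systems):

```python
# Naive ternary matching: the per-input cost is O(population size x condition
# length), which is why matching is the CPU bottleneck the talks addressed.
# '#' is the usual don't-care symbol; it matches either bit value.

def matches(condition: str, state: str) -> bool:
    """Return True if a ternary condition matches a binary state string."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def match_set(population: list[str], state: str) -> list[str]:
    """Scan the whole population -- the linear pass the speedups avoid."""
    return [cond for cond in population if matches(cond, state)]

population = ['01#1', '#1#1', '0000', '##00']
print(match_set(population, '0101'))  # -> ['01#1', '#1#1']
```

The tree-based and match-certainty approaches mentioned above both exploit structure in the conditions (shared prefixes, specific bits) so that most of these per-rule comparisons can be skipped.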

The third session started with a presentation from Xavier Llora and Jose Garcia Moreno-Torres, which introduced a useful twist on the model-building capabilities of LCS. Commonly, an LCS induces an input-output model from training data that is hypothesised to be appropriate for predicting previously unknown outputs from completely new input data. However, when considering the case of two independent testing laboratories that follow supposedly identical testing procedures, any inherent differences in these procedures are likely to be highlighted by a drop in prediction performance from the reference data to the new data. For such cases, they proposed evolving a ‘pre’ model that transforms the new data’s inputs such that the predictive performance of the first LCS model is restored. Additionally, the evolved transformation may give insight into the procedural differences between the two data sets. Afterwards, Richard Preen’s talk showed how LCS had been applied to the popular and difficult task of financial forecasting, with promising results.
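A minimal sketch of the ‘pre’-model idea: instead of retraining the reference model, fit a transformation that maps the new laboratory’s inputs back into the reference input space. The simple least-squares fit below stands in for the evolved transformation, and the threshold rule stands in for the trained LCS; both are assumptions for illustration, not the method presented in the talk:

```python
# 'Pre'-model sketch: lab B's instrument reads on a different scale from
# lab A's, so a model trained on lab A data mispredicts on raw lab B inputs.
# A fitted transformation restores the reference model's performance.

def fit_linear(xs, ys):
    """1-D least-squares fit ys ~ a*xs + b, standing in for the evolved 'pre' model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def reference_model(x):
    """Stand-in for the LCS trained on lab A data: a simple threshold rule."""
    return 1 if x > 5.0 else 0

lab_a = [2.0, 4.0, 6.0, 8.0]                     # reference measurements
lab_b = [x * 0.5 + 1.0 for x in lab_a]           # same samples, shifted scale

a, b = fit_linear(lab_b, lab_a)                  # map lab B back to lab A space
corrected = [a * x + b for x in lab_b]
print([reference_model(x) for x in corrected])   # -> [0, 0, 1, 1]
```

As noted above, the fitted coefficients themselves (here a scale and an offset) are interpretable, which is where the insight into procedural differences between the two laboratories would come from.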

Afterwards, Will Browne posed the question of why the application domain of Cognitive Robotics, which is inherently suited to LCS, had not been further explored by the LCS community. New software and hardware platforms that are cheap, robust and flexible, with a fast learning curve, were reviewed. The experimental setup presented showed an LCS controlling software and hardware platforms synchronously through the same services. Furthermore, coupled asynchronous control was presented to show the capabilities of modern platforms for evolutionary cognitive robotics.

In the fourth session, Alex Scheidler presented what was possibly the richest talk of the workshop, as it explored, in a novel and demonstrably workable way, a thread that has run through LCS research: how to get sub-groups of classifiers to form, communicate beneficially and evolve gracefully. Previous work on corporate classifiers and speciation has shown promise, but additional benefit was demonstrated by allowing the action of selected rules within a Pittsburgh rule group to directly address other groups, and by severely limiting the number of rules that a group could maintain.

Next, Stewart Wilson introduced a potentially revolutionary concept for pattern recognition based on communication and coevolution. Rather than an ‘arms race’ between two competing agents, insight from the competition-and-cooperation philosophy of LCS was invoked: two agents evolve patterns for communicating between themselves, while an evolving ‘sniffer’ attempts to intercept the message for its own reward. As a result, the sending agent evolves patterns that are increasingly hard to intercept, and the receiving agent in turn needs to evolve increasingly powerful pattern recognisers.

Notice was given of the bi-annual international workshop LCS book, with the call for updated papers to follow within the next month. It is worth noting that any recent, relevant LCS work may be submitted, even if it was not presented at the workshop.

The workshop meal started off in a sub-optimal location with a low cost-benefit payoff, which was fortunately rectified by a random walk search of the local neighbourhood! A relaxed and friendly way to close a productive workshop.
