The Genetic and Evolutionary Computation Conference (GECCO) is one of the most prestigious double-blind peer-reviewed conferences in Evolutionary Computation. Based on its impact factor, GECCO ranks 11th among 701 international conferences in artificial intelligence, machine learning, robotics, and human-computer interaction. In 2011, GECCO will take place in the beautiful city of Dublin, Ireland, between the 12th and 16th of July.
2011 Genetic and Evolutionary Computation Conference (GECCO-2011)
July 12-16, Dublin, Ireland
The Genetics-Based Machine Learning (GBML) track encompasses advancements and new developments in any system that addresses machine learning problems with evolutionary computation methods. Combinations of machine learning with evolutionary computation techniques are particularly welcome.
Machine Learning (ML) offers an array of paradigms (unsupervised, semi-supervised, supervised, and reinforcement learning) that frame a wide range of clustering, classification, regression, prediction, and control tasks. Combining the global search capabilities of Evolutionary Computation with the learning abilities of ML underlies these problem-solving tools.
The field of Learning Classifier Systems (LCS), introduced by John Holland in the 1970s, is one of the most active and best-developed forms of GBML, and we welcome all work on LCS. Artificial Immune Systems (AIS), another family of techniques included in this track, take inspiration from immunological mechanisms in vertebrates to solve computational problems. Neuroevolution techniques, which combine neural networks with evolutionary computation, are also welcome, as is any other related technique or approach. See the list of suggested (but not limited to) topics at:
GECCO is sponsored by the Association for Computing Machinery Special Interest Group on Genetic and Evolutionary Computation (SIGEVO). SIG Services: 2 Penn Plaza, Suite 701, New York, NY, 10121, USA, 1-800-342-6626 (USA and Canada) or +1-212-626-0500 (Global).
Abstract
Reinforcement learning (RL) consists of methods that automatically adjust behaviour based on numerical rewards and penalties. While use of the attribute-value framework is widespread in RL, it has limited expressive power. Logic languages, such as first-order logic, provide a more expressive framework, and their use in RL has led to the field of relational RL. This thesis develops a system for relational RL based on learning classifier systems (LCS). In brief, the system generates, evolves, and evaluates a population of condition-action rules, which take the form of definite clauses over first-order logic. Adopting the LCS approach allows the resulting system to integrate several desirable qualities: model-free and “tabula rasa” learning; a Markov Decision Process problem model; and importantly, support for variables as a principal mechanism for generalisation. The utility of variables is demonstrated by the system’s ability to learn genuinely scalable behaviour: behaviour learnt in small environments that translates to arbitrarily large versions of the environment without the need for retraining.
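The generate-match-reinforce loop the abstract describes can be sketched in a toy form. Everything below is an illustrative assumption rather than the thesis system: the `Rule` class, the two-component states, and the Widrow-Hoff-style strength update are minimal stand-ins, and a `None` wildcard in conditions stands in (very loosely) for the first-order variables the thesis uses for generalisation.

```python
import random

random.seed(0)

class Rule:
    """A toy condition-action rule; None in a condition slot matches anything."""
    def __init__(self, condition, action, strength=10.0):
        self.condition = condition  # tuple over state components
        self.action = action
        self.strength = strength    # reward estimate, updated from experience

    def matches(self, state):
        return all(c is None or c == s for c, s in zip(self.condition, state))

def step(population, state, reward_fn, beta=0.2):
    """One match-act-update cycle with a Widrow-Hoff-style strength update."""
    match_set = [r for r in population if r.matches(state)]
    if not match_set:
        return None
    best = max(match_set, key=lambda r: r.strength)
    reward = reward_fn(state, best.action)
    best.strength += beta * (reward - best.strength)
    return best.action

# Toy environment: reward 100 for acting "go" whenever the first state
# component is 1, and 0 otherwise.
def reward_fn(state, action):
    return 100.0 if state[0] == 1 and action == "go" else 0.0

population = [
    Rule((1, None), "go"),   # generalises over the second state component
    Rule((1, 0), "stop"),
    Rule((0, None), "stop"),
]

for _ in range(50):
    step(population, (1, random.choice([0, 1])), reward_fn)

print(round(population[0].strength, 1))  # the general "go" rule converges to ~100
```

The point of the sketch is the role of generalisation: the single wildcarded rule covers every state with first component 1, so its strength converges toward the true reward without a separate rule per state, which is the (much weaker, propositional) analogue of how first-order variables let behaviour learnt in small environments carry over to larger ones.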
I finally finished transcoding the videos from NIGEL 2006 and started uploading them to Vimeo. Every week I will upload two of them, following the NIGEL 2006 agenda. I will also embed the slides already on SlideShare, when available for the talk. Enjoy this first release, Wilson vs. Goldberg. I have also included the meeting introduction, just for nostalgia.
Introduction
Video
[vimeo clip_id=4479633 width="432" height="320"]
Data mining and knowledge discovery are crucial techniques across many scientific disciplines. Recent developments such as the Genome Project (and its successors) and the construction of the Large Hadron Collider have provided the scientific community with vast amounts of data. Metaheuristics, including evolutionary algorithms, have been successfully applied to a large variety of data mining tasks. Competitive metaheuristic approaches can deal with rule, tree, and prototype induction, neural network synthesis, fuzzy logic learning, and kernel machines, to mention but a few. Moreover, the inherently parallel nature of some metaheuristics (e.g. evolutionary approaches, particle swarms, ant colonies) makes them ideal candidates for very large-scale data mining problems.
Although a number of recent techniques have applied these methods to complex data mining domains, we are still far from a deep and principled understanding of how to scale them to datasets of terascale, petascale, or even larger size. In order to achieve and maintain a relevant role in large-scale data mining, metaheuristics need, among other features, the capacity to process vast amounts of data in a reasonable time frame, to efficiently exploit the unprecedented computing power now available thanks to advances in high-performance computing, and to produce, when possible, human-understandable outputs.
Several research topics bear on the applicability of metaheuristics to data mining: (1) proper scalable learning paradigms and knowledge representations, (2) better understanding of the relationship between the learning paradigms/representations and the nature of the problems to be solved, (3) efficiency enhancement techniques, and (4) visualization tools that expose as much insight as possible from the learned knowledge to domain experts.
We would like to invite researchers to submit contributions in the area of large-scale data mining using metaheuristics. Potential research themes include:
Learning paradigms based on metaheuristics: evolutionary algorithms, learning classifier systems, particle swarms, ant colonies, tabu search, simulated annealing, etc.
Hybridization with other kinds of machine learning techniques including exact and approximation algorithms
Knowledge representations for large-scale data mining
Advanced techniques for enhanced prediction (classification, regression/function approximation, clustering, etc.) when dealing with large data sets
Papers should be submitted following the Memetic Computing journal guidelines. When submitting the paper please select this special issue as the article type.
Important dates
Manuscript submission: May 31st, 2009
Notification of acceptance: July 31st, 2009
Submission of camera-ready version: Sep 30th, 2009
Guest editors:
Jaume Bacardit
School of Computer Science and School of Biosciences
University of Nottingham jaume.bacardit@nottingham.ac.uk
Xavier Llorà
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign xllora@illinois.edu