Special Issue on the ‘30th Anniversary of XCS’

Submission open until: October 31, 2024

Guest Editors
Anthony Stein, Tenure Track Professor of Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Germany
Ryan Urbanowicz, Assistant Professor of Computational Biomedicine, Cedars-Sinai, Los Angeles, CA
Will Browne, Professor and Chair of Manufacturing Robotics, Queensland University of Technology, Brisbane, Australia

Learning Classifier Systems (LCSs) are one of the first, if not the first, Evolutionary Computation algorithms to adopt machine learning methods. Thus, they belong to the class of evolutionary machine learning algorithms. With a rule-based model representation at their core, they possess unique and valuable properties, such as inherent interpretability of learned solutions and the ability to model extremely complex and heterogeneous relationships. LCSs were conceived in the mid-1970s by evolutionary computation pioneer John Holland. At that time, these systems were designed to model adaptive agents in his pursuit to understand complex adaptive systems.

Subsequently, LCSs have proven themselves to be a very effective, flexible, and broadly applicable approach to predictive modeling and sequential problem-solving tasks. They have been successful not only in well-recognized benchmark tasks, e.g., exceeding previous limits in solving multiplexer problems, but, equally importantly, these systems often excel at solving complex classification and regression problems in real-world domains such as biomedicine and intelligent system control.
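For readers unfamiliar with it, the multiplexer benchmark mentioned above can be stated compactly. The following Python sketch is illustrative only (the function name and variable names are ours): for a bit string of length k + 2^k, the first k bits form an address that selects which of the remaining 2^k data bits is the correct output, giving the familiar 6-, 11-, 20-, 37-, 70- and 135-bit problem sizes.

def multiplexer(bits):
    # The first k bits are address bits; the remaining 2**k bits are data bits,
    # so a valid instance has length k + 2**k (6, 11, 20, 37, 70, 135, ...).
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "invalid multiplexer length"
    address = int("".join(str(b) for b in bits[:k]), 2)  # decode the address bits
    return bits[k + address]                             # return the selected data bit

# Example: 6-bit multiplexer; the address '10' (= 2) selects the third data bit.
assert multiplexer([1, 0, 0, 1, 1, 0]) == 1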

What is XCS?
XCS is the archetypal LCS as it embodies many core principles whilst acting as a framework to address bespoke problems. It belongs to the category of Michigan-style LCSs, one of the two major families of LCS algorithms. This style is characterized by adopting an online-learning strategy and employing steady-state niche genetic algorithms to optimize the coverage of the problem spaces at hand. XCS differs from earlier Michigan-style LCSs by its accuracy-based fitness, which has been shown to drive rule discovery towards a complete and maximally compact learned problem solution. XCS is the extension of the Zeroth-level Classifier System, both proposed and made popular by Stewart Wilson in the mid-1990s [published in ECJ, hence making it an ideal home for this special issue].
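To make the accuracy-based fitness idea concrete, the following is a minimal Python sketch of the rule-update step in the spirit of Wilson’s original formulation. It is illustrative rather than a faithful implementation: the parameter names and values (beta, alpha, epsilon_0, nu) follow common defaults in the literature, and details such as the special treatment of inexperienced rules are omitted.

from dataclasses import dataclass

@dataclass
class Classifier:
    prediction: float    # estimated payoff of the rule's action
    error: float         # estimated absolute prediction error
    fitness: float       # accuracy-based fitness used by the niche GA
    numerosity: int = 1  # number of identical copies represented by this rule

# Illustrative parameter values (common defaults in the XCS literature).
BETA, ALPHA, EPSILON_0, NU = 0.2, 0.1, 10.0, 5.0

def update_action_set(action_set, reward):
    # 1) Update each rule's prediction error and payoff prediction (Widrow-Hoff style).
    for cl in action_set:
        cl.error += BETA * (abs(reward - cl.prediction) - cl.error)
        cl.prediction += BETA * (reward - cl.prediction)

    # 2) Map errors to accuracies: fully accurate below the threshold epsilon_0,
    #    then a steep power-law fall-off.
    kappa = [1.0 if cl.error < EPSILON_0 else ALPHA * (cl.error / EPSILON_0) ** (-NU)
             for cl in action_set]

    # 3) Fitness tracks accuracy *relative to the niche* (the current action set),
    #    which is what steers the population towards complete, compact payoff maps.
    total = sum(k * cl.numerosity for k, cl in zip(kappa, action_set))
    for k, cl in zip(kappa, action_set):
        cl.fitness += BETA * (k * cl.numerosity / total - cl.fitness)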

Since the inception of XCS, interest in LCSs has experienced new impetus, sparking over three decades of LCS research and leading to outstanding advances in the system in terms of algorithmic innovations, formal theoretical understanding, and a wide range of real-world applications. Even so, there remains enormous potential to expand and improve this class of evolutionary machine learning systems. For example, while the deep learning era has brought many innovations in the utilization of deep neural networks in almost all domains of artificial intelligence, the integration of deep learning with LCSs has, to date, been limited to a handful of promising works that have the potential to lead to a resurgence of interest. Currently, there is growing interest in neurosymbolic systems, where the flexible structure of LCSs provides a framework to integrate connectionist and symbolic learning.

Therefore, for this special issue we solicit papers that explore and contribute to the discussion on open questions such as:

  • How can XCS-based systems be fused with deep learning (or other cutting-edge algorithmic) concepts while maintaining the idiosyncratic advantages of both, e.g., conducting flexible, online, interpretable machine learning combined with the ability to efficiently and accurately model extremely complex problems through hierarchical feature learning?
  • What algorithmic and/or theoretical advances are still needed to overcome persisting limitations of XCS, e.g., the maintenance of long action chains in delayed reward settings within contemporary reinforcement learning tasks?
  • What are novel or potentially untapped application domains in which XCS has been found particularly advantageous over other machine learning techniques?
  • What are the latest deep insights into XCS resulting from mathematical analysis, ablation studies or rigorous method interaction analysis?

Article categories and submission instructions
We solicit manuscripts which belong to the following article types offered by ECJ:
Full-length original research articles (including surveys, typically approx. 25 pages)
Letters (short articles, typically approx. 6 pages)
ECJ accepts papers that broadly fall into three categories: Applications, Experimental Results, and Theory. Of course, many papers may fall into more than one category.

The focus of this special issue is not exclusively on original research papers; survey-type papers are equally welcome. For contributions concentrating on novel applications, it must be thoroughly explained why XCS is particularly well suited, what algorithmic adaptations facilitate its adoption, and how the presented XCS-based approach compares to alternative methods.
Please carefully follow the general submission guidelines of the Evolutionary Computation journal, which also apply to this special issue. Submissions are handled via the Evolutionary Computation Editorial Manager. Authors must select “Special Issue: 30th Anniversary XCS” as the article type when submitting.

Review and Process
All submissions will receive a minimum of two reviews: at least one reviewer will have a strong LCS background, and another will bring a broader perspective on the EC and EML field or, in the case of manuscripts focusing on XCS’s application to new domains, will be selected from the specific application domain.

Please submit your manuscripts by October 31, 2024.

Anticipated timeline:
Manuscript submission: October 31, 2024
Author notification: April 15, 2025
Revision phase until: September 2025
Finalization: October 2025

We invite prospective authors who plan to contribute a survey-type paper to inform the guest editor team in advance in order to prevent potential duplication of effort. In case of any questions, don’t hesitate to write an email to: anthony.stein@uni-hohenheim.de

CFP: IEEE CEC 2017 Special Session: Genetics-Based Machine Learning to Evolutionary Machine Learning

Dear Colleagues,

 

We would like to invite you to submit a paper to the Special Session on Genetics-Based Machine Learning to Evolutionary Machine Learning at the 2017 IEEE Congress on Evolutionary Computation (CEC 2017), which will be held in Donostia – San Sebastián, Spain, June 5-8, 2017. If you are interested in our special session and plan to submit a paper, please let us know beforehand, as we would like to compile a list of tentative papers. Of course, you may also submit without replying to this message.

 

Special Session:  Genetics-Based Machine Learning to Evolutionary Machine Learning

Organizers: Masaya Nakata, Yusuke Nojima, Will Browne, Keiki Takadama, Tim Kovacs

 

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes to a local search while evolutionary computation contributes to a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce system performance that cannot be achieved by either approach alone.

 

Historically, this research area was called genetics-based machine learning (GBML) and it was concerned with learning classifier systems (LCS) and their numerous implementations such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider range of machine learning adapted methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML.

 

From this viewpoint, the aim of this special session is to explore potential EML technologies and to clarify new directions for EML and its prospects. This special session is the third edition, following our previous special sessions at CEC 2015 and CEC 2016. The continued exploration of this field through special sessions at CEC is indispensable to establishing the discipline of EML. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)

– Evolutionary fuzzy systems

– Evolutionary data mining

– Evolutionary reinforcement learning

– Evolutionary neural networks

– Evolutionary adaptive systems

– Artificial immune systems

– Genetic programming applied to machine learning

– Evolutionary feature selection and construction for machine learning

– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains

– Accuracy-Interpretability trade-off in EML

– Applications and theory of EML

 

Important dates are as follows:

– Paper Submission Deadline: January 16, 2017

– Paper Acceptance Notification: February 26, 2017

– Final Paper Submission Deadline: TBD

– Conference Dates: June 5-8, 2017

 

Further information about the special session and the conference can be found at:

– 2017 IEEE Congress on Evolutionary Computation

http://www.cec2017.org/#special_session_sessions

– Special Session on Genetics-Based Machine Learning to Evolutionary Machine Learning

https://sites.google.com/site/cec2017ssndeml/home

 

Best regards,

Masaya, Yusuke, Will, Keiki, Tim

FINAL submission deadline for IEEE WCCI 2016

The IEEE WCCI 2016 paper submission deadline has been extended to

31st January 2016, 24:00 EST

Special Session on New Directions in Evolutionary Machine Learning

2016 IEEE Congress on Evolutionary Computation (WCCI2016/CEC2016)

Vancouver, Canada, 25-29 July, 2016

[See previous post below for details of the call for papers for the special session most suited to Genetics-based Machine Learning and Learning Classifier Systems]

Please select the special session under the main research topic (otherwise the paper will be treated as a general paper and may be reviewed by researchers outside of this field):

7be New Directions in Evolutionary Machine Learning

Special Session on New Directions in Evolutionary Machine Learning at WCCI/CEC 2016

Dear LCS, GBML, RBML and EML Researcher,

Apologies for the multiple postings as WCCI/CEC has now approved the Special Sessions.

Please forward this CFP to your colleagues, students, and those who may be interested. Thank you.

Call for Papers

Special Session on New Directions in Evolutionary Machine Learning

2016 IEEE Congress on Evolutionary Computation (WCCI2016/CEC2016)

Vancouver, Canada, 25-29 July, 2016

Aim and scope:

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes to a local search while evolutionary computation contributes to a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce system performance that cannot be achieved by either approach alone.

Historically, this research area was called GBML (genetics-based machine learning) and it was concerned with learning classifier systems (LCS) and their numerous implementations such as fuzzy learning classifier systems (Fuzzy LCS).

Recently, EML has emerged as a more general field than GBML; EML covers a wider range of machine learning adapted methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects.

This special session follows the first successful special session (the largest of the special sessions) held at CEC 2015. The continued exploration of this field through special sessions at CEC is indispensable to establishing the discipline of EML. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)

– Evolutionary fuzzy systems

– Evolutionary reinforcement learning

– Evolutionary neural networks

– Evolutionary adaptive systems

– Artificial immune systems

– Genetic programming applied to machine learning

– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains

– Accuracy-Interpretability tradeoff in EML

– Applications and theory of EML

 

Organisers:

Will Browne (*1), Keiki Takadama (*2), Yusuke Nojima (*3), Masaya Nakata (*4), Tim Kovacs (*5)

E-mail:

(*1) will.browne@vuw.ac.nz, (*2) keiki@inf.uec.ac.jp, (*3) nojima@cs.osakafu-u.ac.jp,

(*4) m.nakata@cas.hc.uec.ac.jp (*5) tim.kovacs@bristol.ac.uk

Affiliations:

(*1) Victoria University of Wellington, New Zealand

(*2) The University of Electro-Communications, Japan

(*3) Osaka Prefecture University, Japan

(*4) The University of Electro-Communications, Japan

(*5) University of Bristol, UK

 

Associated Website:

https://sites.google.com/site/wcci2016sseml/

 

John H. Holland

Sad news that John Holland passed away on the weekend. A warm obituary can be found here:

http://www.santafe.edu/news/item/in-memoriam-john-holland/

Many people’s lives and research have been touched by his ideas and enthusiasm.  This site definitely would not exist without them.

Curiously, his passing may not have been major mainstream news, but his ideas are. It was interesting to note that fields with his ideas were name checked in the latest Google announcement: https://abc.xyz.

History will undoubtedly recognise John as a pioneer in the computer age.

 

 

 

CEC Deadline extension

Important Dates

Paper submission deadline: January 16, 2015 (extended from December 19, 2014)

Paper acceptance notification: February 20, 2015

Final paper submission deadline: March 13, 2015

Conference dates: May 25-28, 2015

Special Session on

New Directions in Evolutionary Machine Learning

Motivation

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes to a local search while evolutionary computation contributes to a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce system performance that cannot be achieved by either approach alone. Historically, this research area was called GBML (genetics-based machine learning) and it was concerned with learning classifier systems (LCS) and their numerous implementations such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider range of machine learning adapted methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)

– Evolutionary fuzzy systems

– Evolutionary data mining

– Evolutionary reinforcement learning

– Evolutionary neural networks

– Evolutionary adaptive systems

– Artificial immune systems

– Accuracy-Interpretability tradeoff in EML

– Applications and theory of EML

– Genetic programming applied to machine learning

– Evolutionary feature selection and construction for machine learning

– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains

Important Dates

Paper submission deadline: January 16, 2015 (extended from December 19, 2014)

Paper acceptance notification: February 20, 2015

Final paper submission deadline: March 13, 2015

Conference dates: May 25-28, 2015

Paper Submission

Special session papers are treated the same as regular papers and must be submitted via the CEC 2015 submission website. To submit your paper to this special session, you have to choose our special session (ID SS52) on the submission page.

Organizers

  • Keiki Takadama, The University of Electro-Communications, Japan (Contact: keiki@inf.uec.ac.jp)
  • Tim Kovacs, University of Bristol, UK.
  • Yusuke Nojima, Osaka Prefecture University, Japan
  • Will Browne, Victoria University of Wellington, New Zealand
  • Masaya Nakata, The University of Electro-Communications, Japan

 

Special Session URL: https://sites.google.com/site/cec2015sseml/

Conference URL: http://sites.ieee.org/cec2015/

New Directions in Evolutionary Machine Learning at 2015 IEEE Congress on Evolutionary Computation (CEC 2015)

Call to submit a paper for the special session on New Directions in Evolutionary Machine Learning at the 2015 IEEE Congress on Evolutionary Computation (CEC 2015), which will be held in Sendai, Japan, on May 25-28, 2015.
If you are interested in our special session and planning to submit a paper, please let us know beforehand, as we would like to compile a list of tentative papers. Of course, you may also submit without replying to this message. Please choose the session ID SS52 on the submission system.

Special Session: New Directions in Evolutionary Machine Learning
Organizers: Keiki Takadama, Tim Kovacs, Yusuke Nojima, Will Browne, Masaya Nakata

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes to a local search while evolutionary computation contributes to a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce system performance that cannot be achieved by either approach alone. Historically, this research area was called GBML (genetics-based machine learning) and it was concerned with learning classifier systems (LCS) and their numerous implementations such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider range of machine learning adapted methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)
– Evolutionary fuzzy systems
– Evolutionary data mining
– Evolutionary reinforcement learning
– Evolutionary neural networks
– Evolutionary adaptive systems
– Artificial immune systems
– Genetic programming applied to machine learning
– Evolutionary feature selection and construction for machine learning
– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains
– Accuracy-Interpretability tradeoff in EML
– Applications and theory of EML

Important dates are as follows:
– Paper Submission Deadline: December 19, 2014
– Paper Acceptance Notification: February 20, 2015
– Final Paper Submission Deadline: March 13, 2015
– Early Registration: March 13, 2015
– Conference Dates: May 25-28, 2015

Further information about the special session and the conference can be found at:
– 2015 IEEE Congress on Evolutionary Computation
http://sites.ieee.org/cec2015/
– Special Session on New Directions in EML
https://sites.google.com/site/cec2015sseml/

Best regards,
Keiki, Tim, Yusuke, Will, and Masaya

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Yusuke NOJIMA, Dr.

Dept. of Computer Science and Intelligent Systems, Graduate School of Engineering, Osaka Prefecture University

Gakuen-cho 1-1, Naka-ku, Sakai, Osaka 599-8531, JAPAN
Phone: +81-72-254-9198, FAX: +81-72-254-9915
Email: nojima@cs.osakafu-u.ac.jp
http://www.cs.osakafu-u.ac.jp/ci/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ExSTraCS – Extended Supervised Tracking and Classifying System

Ryan Urbanowicz is pleased to announce an advanced LCS for data mining:

 

This advanced machine learning algorithm is a Michigan-style learning classifier system (LCS) developed to specialize in classification, prediction, data mining, and knowledge discovery tasks. Michigan-style LCS algorithms constitute a unique class of algorithms that distribute learned patterns over a collaborative population of individually interpretable IF:THEN rules, allowing them to flexibly and effectively describe complex and diverse problem spaces. ExSTraCS was primarily developed to address problems in epidemiological data mining, identifying complex patterns that relate predictive attributes in noisy datasets to disease phenotypes of interest. ExSTraCS combines a number of recent advancements into a single algorithmic platform. It can flexibly handle (1) discrete or continuous attributes, (2) missing data, (3) balanced or imbalanced datasets, and (4) binary or many classes. A complete user’s guide for ExSTraCS is included. Coded in Python 2.7.
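As a purely conceptual illustration of what such an interpretable IF:THEN rule looks like (this is not ExSTraCS’s actual code or API; the attribute names and matching policy below are hypothetical), a rule only constrains the attributes it has specialised on, with everything else treated as a don’t-care:

# Hypothetical sketch of a single Michigan-style rule and its matching logic.
rule = {
    "condition": {"SNP_12": 1, "age": (40, 60)},  # IF SNP_12 == 1 AND 40 <= age <= 60
    "action": "case",                             # THEN predict the 'case' phenotype
    "fitness": 0.87,                              # accuracy-based quality estimate
    "numerosity": 14,                             # identical copies in the rule population
}

def matches(rule, instance):
    # Unspecified attributes are don't-cares; here missing values are also
    # treated as matching, which is one common way an LCS tolerates missing data.
    for attr, spec in rule["condition"].items():
        value = instance.get(attr)
        if value is None:
            continue
        if isinstance(spec, tuple):               # continuous attribute: interval test
            low, high = spec
            if not (low <= value <= high):
                return False
        elif value != spec:                       # discrete attribute: equality test
            return False
    return True

# Example usage:
print(matches(rule, {"SNP_12": 1, "age": 52, "SNP_7": 0}))  # True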

 

[http://sourceforge.net/projects/exstracs/]

Educational LCS

Ryan Urbanowicz is pleased to announce the availability of an educational LCS.

The Educational Learning Classifier System (eLCS) is a set of code demos intended to serve as an educational resource for learning the basics of a Michigan-style Learning Classifier System (modeled after the XCS and UCS algorithm architectures). This resource includes 5 separate implementation demos that sequentially add major components of the algorithm in order to highlight how these components function and how they are implemented in the algorithm code.

[http://sourceforge.net/projects/educationallcs/]