ARC prize: a call to arms for Genetic Programming by Alberto Tonda


The Abstraction and Reasoning Corpus (ARC) is a benchmark designed to be easy to solve for humans and next to impossible for machine learning techniques that rely upon massive training data sets, like Deep Learning. Google’s François Chollet, the author, presents ARC as an attempt to push for AI algorithms able to “learn like humans” [1], or in other words, able to solve tasks after seeing just a small number of training instances, exploiting innate capacities to reason about geometry and number [2]. Just a few weeks ago, Chollet announced a Kaggle challenge on ARC, with a prize of $1 million [3] and a first deadline in November 2024, although submissions are already open [4].

I am not affiliated with the prize in any way, but I think this could represent a valuable opportunity for the GP/EA community: the tasks in ARC can be solved through *program synthesis*, as stated by Chollet himself [5], a task at which GP excels; and the capacity to learn from a few samples is another strong suit of GP. Not to mention, just participating in the contest and comparing against other approaches could lead to cross-fertilization of new ideas, and even help promote GP as a robust AI alternative to the now more prominent DL approaches.
After discussing with colleagues, I decided to spread the news far and wide, in the hope that more people from our community will decide to take up the challenge. The current state-of-the-art performance on ARC is still low (less than 40% accuracy on the test set at the time of writing), so the barrier to entry should not be too high: now is a good moment for GP researchers to take on the world and test our mettle!
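To make the program-synthesis framing concrete, here is a toy sketch: given a few input/output grid pairs, search a space of grid-transformation programs for one consistent with all pairs. The primitive set below is hypothetical and tiny, and the search is brute-force enumeration; an actual GP approach would evolve compositions over a far richer primitive set.

```python
# Toy illustration of ARC as program synthesis: find a composition of
# grid primitives that maps every training input to its output.
# Grids are tuples of tuples; the primitives here are only examples.
from itertools import product

def flip_h(g):    return tuple(row[::-1] for row in g)   # mirror left-right
def flip_v(g):    return g[::-1]                          # mirror top-bottom
def transpose(g): return tuple(zip(*g))                   # swap rows/columns

PRIMITIVES = (flip_h, flip_v, transpose)

def synthesize(train_pairs, max_depth=3):
    """Return the first primitive sequence consistent with all pairs."""
    for depth in range(1, max_depth + 1):
        for prog in product(PRIMITIVES, repeat=depth):
            def run(g, prog=prog):
                for f in prog:
                    g = f(g)
                return g
            if all(run(x) == y for x, y in train_pairs):
                return prog
    return None

# One training pair: rotating 90 degrees clockwise, which this primitive
# set can express as a two-step composition.
pairs = [(((1, 2), (3, 4)), ((3, 1), (4, 2)))]
prog = synthesize(pairs)
```

Of course, real ARC tasks involve colors, objects, and counting, so this enumeration would not scale; that combinatorial explosion is exactly where an evolutionary search over programs becomes interesting.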
[1] https://arxiv.org/pdf/1911.01547 
[2] https://aiguide.substack.com/p/why-the-abstraction-and-reasoning
[3] https://arcprize.org/competition
[4] https://www.kaggle.com/competitions/arc-prize-2024/
[5] https://arcprize.org/guide

Special Issue on the ‘30th Anniversary of XCS’

Submission open until: October 31, 2024

Guest Editors
Anthony Stein, Tenure Track Professor of Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Germany
Ryan Urbanowicz, Assistant Professor of Computational Biomedicine, Cedars-Sinai, Los Angeles, CA
Will Browne, Professor and Chair of​​ Manufacturing Robotics, Queensland University of Technology, Brisbane, Australia

Learning Classifier Systems (LCSs) are one of the first, if not the first, Evolutionary Computation algorithms to adopt machine learning methods. Thus, they belong to the class of evolutionary machine learning algorithms. With a rule-based model representation at their core, they possess unique and valuable properties, such as inherent interpretability of learned solutions and the ability to model extremely complex and heterogeneous relationships. LCSs were conceived in the mid-1970s by evolutionary computation pioneer John Holland. At that time, these systems were designed to model adaptive agents in his pursuit to understand complex adaptive systems.

Subsequently, LCSs have proven themselves to be a very effective, flexible, and broadly applicable approach to predictive modeling and sequential problem solving tasks. They have been successful not only in well-recognized benchmark tasks, e.g., exceeding previous limits in solving multiplexer problems, but equally important, these systems often excel at solving complex classification and regression problems in real-world domains such as biomedicine and intelligent system control.

What is XCS?
XCS is the archetypal LCS: it embodies many core principles, while acting as a framework to address bespoke problems. It belongs to the category of Michigan-style LCSs, one of the two major families of LCS algorithms. This style is characterized by adopting an online-learning strategy and employing steady-state niche genetic algorithms to optimize the coverage of the problem space at hand. XCS differs from earlier Michigan-style LCSs in its accuracy-based fitness, which has been shown to lead rule discovery toward a complete and maximally compact learned problem solution. XCS is an extension of the Zeroth-level Classifier System; both were proposed and made popular by Stewart Wilson in the mid-1990s [published in ECJ, hence making it an ideal home for this special issue].
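The accuracy-based fitness mentioned above can be sketched as follows, using the standard parameter notation (ε₀, α, ν, β) from the XCS literature; the `Classifier` record here is a minimal stand-in, not a full XCS implementation.

```python
# Minimal sketch of XCS's accuracy-based fitness update: fitness tracks
# a classifier's accuracy RELATIVE to the other classifiers in its
# action set, rather than its raw predicted payoff.
from dataclasses import dataclass

@dataclass
class Classifier:
    error: float        # prediction error (epsilon)
    fitness: float      # accuracy-based fitness (F)
    numerosity: int = 1

def accuracy(cl, eps0=10.0, alpha=0.1, nu=5.0):
    """kappa: 1 for sub-threshold error, else a steep power-law penalty."""
    if cl.error < eps0:
        return 1.0
    return alpha * (cl.error / eps0) ** -nu

def update_fitness(action_set, beta=0.2, eps0=10.0, alpha=0.1, nu=5.0):
    """Move each fitness toward the classifier's set-relative accuracy."""
    kappas = [accuracy(cl, eps0, alpha, nu) for cl in action_set]
    total = sum(k * cl.numerosity for k, cl in zip(kappas, action_set))
    for k, cl in zip(kappas, action_set):
        relative = (k * cl.numerosity) / total
        cl.fitness += beta * (relative - cl.fitness)
```

For example, updating a set containing one low-error and one high-error classifier (both starting at fitness 0.5) pushes the accurate classifier's fitness up and the inaccurate one's down, which is what steers the niche genetic algorithm toward accurate, general rules.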

Since the inception of XCS, interest in LCSs has experienced a new impetus, sparking over three decades of LCS research and leading to outstanding advances of the system in terms of algorithmic innovations, formal theoretical understanding, and a wide range of real-world applications. Even so, there remains enormous potential to expand and improve this class of evolutionary machine learning systems. For example, while the deep learning era has brought many innovations in the utilization of deep neural networks in almost all domains of artificial intelligence, the integration of deep learning with LCSs has, to date, been limited to a handful of promising works that have the potential to lead to a resurgence of interest. Currently, there is growing interest in neurosymbolic systems, where the flexible structure of LCSs provides a framework for integrating connectionist with symbolic learning.

Therefore, for this special issue we solicit papers that explore and contribute to the discussion on open questions such as:

  • How to fuse XCS-based systems with deep learning (or other cutting-edge algorithmic) concepts while maintaining the idiosyncratic advantages of both: e.g., conducting flexible, online, interpretable machine learning combined with the ability to efficiently and accurately model extremely complex problems through hierarchical feature learning?
  • What algorithmic and/or theoretical advances are still needed to overcome persisting limitations of XCS, e.g., the maintenance of long action chains in delayed reward settings within contemporary reinforcement learning tasks?
  • What are novel or potentially untapped application domains, where XCS has been found particularly advantageous over other machine learning techniques?
  • What are the latest deep insights into XCS resulting from mathematical analysis, ablation studies or rigorous method interaction analysis?

Article categories and submission instructions
We solicit manuscripts which belong to the following article types offered by ECJ:
Full-length original research articles (including surveys, typically approx. 25 pages)
Letters (short articles, typically approx. 6 pages)
ECJ accepts papers that broadly fall into three categories: Applications, Experimental Results, and Theory. Of course, many papers may fall into more than one category.

The focus of this special issue is not exclusively on original research papers; survey-type papers are equally welcome. For contributions concentrating on novel applications, it must be thoroughly explained why XCS is particularly suited, which algorithmic adaptations facilitate its adoption, and how the presented XCS-based approach compares to alternative methods.
Please carefully follow the general submission guidelines of the Evolutionary Computation journal, which also apply to this special issue. Submissions are handled over the Evolutionary Computation Editorial Manager. Authors must select “Special Issue: 30th Anniversary XCS” as the article type when submitting.

Review and Process
All submissions will receive a minimum of two reviews: at least one reviewer will have a strong LCS background, and another a broader perspective on the EC and EML fields; for manuscripts focusing on XCS’s application to new domains, one reviewer will be selected from the specific application domain.

Please submit your manuscripts by October 31, 2024.

Anticipated timeline:
Manuscript submission: October 31, 2024
Author notification: April 15, 2025
Revision phase until: September 2025
Finalization: October 2025

We invite prospective authors who plan to contribute a survey-type paper to inform the guest editor team upfront in order to prevent potential duplications of efforts. In case of any questions, don’t hesitate to write an email to: anthony.stein@uni-hohenheim.de