Special Issue on the ‘30th Anniversary of XCS’

Submission open until: October 31, 2024

Guest Editors
Anthony Stein, Tenure Track Professor of Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Germany
Ryan Urbanowicz, Assistant Professor of Computational Biomedicine, Cedars-Sinai, Los Angeles, CA
Will Browne, Professor and Chair of​​ Manufacturing Robotics, Queensland University of Technology, Brisbane, Australia

Learning Classifier Systems (LCSs) are one of the first, if not the first, Evolutionary Computation algorithms to adopt machine learning methods. Thus, they belong to the class of evolutionary machine learning algorithms. With a rule-based model representation at their core, they possess unique and valuable properties, such as inherent interpretability of learned solutions and the ability to model extremely complex and heterogeneous relationships. LCSs were conceived in the mid-1970s by evolutionary computation pioneer John Holland. At that time, these systems were designed to model adaptive agents in his pursuit of understanding complex adaptive systems.

Subsequently, LCSs have proven themselves to be a very effective, flexible, and broadly applicable approach to predictive modeling and sequential problem-solving tasks. They have been successful not only on well-recognized benchmark tasks, e.g., exceeding previous limits in solving multiplexer problems, but, equally important, these systems often excel at solving complex classification and regression problems in real-world domains such as biomedicine and intelligent system control.
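To make the multiplexer benchmark mentioned above concrete, here is a minimal sketch of the n-multiplexer target function (an illustrative implementation, not taken from any particular LCS package): the first k bits form an address that selects one of the remaining 2**k data bits, so n = k + 2**k (e.g., the 6-, 11-, 20-, 37-multiplexer, and so on).

```python
def multiplexer(bits):
    """n-multiplexer target function: the first k address bits select
    one of the remaining 2**k data bits (valid lengths: n = k + 2**k)."""
    # infer k from the total length n = k + 2**k
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    if k + 2 ** k != len(bits):
        raise ValueError("length must equal k + 2**k for some k")
    address = int("".join(str(b) for b in bits[:k]), 2)
    return bits[k + address]

# 6-multiplexer: address bits '10' (= 2) select data bit at index 2+2
print(multiplexer([1, 0, 0, 0, 1, 0]))  # -> 1
```

The problem's epistasis (the relevance of each data bit depends on the address bits) is what makes it a long-standing LCS benchmark.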

What is XCS?
XCS is the archetypal LCS: it embodies many core principles of the approach while acting as a framework for addressing bespoke problems. It belongs to the category of Michigan-style LCSs, one of the two major families of LCS algorithms. This style is characterized by an online learning strategy and the use of steady-state niche genetic algorithms to optimize coverage of the problem space at hand. XCS differs from earlier Michigan-style LCSs in its accuracy-based fitness, which has been shown to drive rule discovery toward a complete and maximally compact learned problem solution. XCS is an extension of the Zeroth-level Classifier System (ZCS); both were proposed and popularized by Stewart Wilson in the mid-1990s [published in ECJ, hence making it an ideal home for this special issue].
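As a rough illustration of what "accuracy-based fitness" means, the following sketch implements Wilson's accuracy function, which maps a rule's prediction error to an accuracy value; the parameter defaults are typical values from the XCS literature, chosen here for illustration only.

```python
def xcs_accuracy(error, epsilon_0=10.0, alpha=0.1, nu=5):
    """Wilson's accuracy function: rules whose prediction error is below
    the threshold epsilon_0 count as fully accurate (kappa = 1);
    above the threshold, accuracy falls off as a power law."""
    if error < epsilon_0:
        return 1.0
    return alpha * (error / epsilon_0) ** (-nu)

print(xcs_accuracy(5.0))   # below threshold -> 1.0
print(xcs_accuracy(20.0))  # 0.1 * 2**-5 = 0.003125
```

In full XCS, a rule's fitness is then updated toward its accuracy relative to the other rules in the same action set, which is what biases the genetic algorithm toward accurate, general rules.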

Since the inception of XCS, interest in LCSs has experienced new impetus, sparking over three decades of LCS research and leading to outstanding advances in terms of algorithmic innovations, formal theoretical understanding, and a wide range of real-world applications. Even so, there remains enormous potential to expand and improve this class of evolutionary machine learning systems. For example, while the deep learning era has brought many innovations in the utilization of deep neural networks in almost all domains of artificial intelligence, the integration of deep learning with LCSs has, to date, been limited to a handful of promising works that have the potential to lead to a resurgence of interest. Currently, there is growing interest in neurosymbolic systems, where the flexible structure of LCSs provides a framework for integrating connectionist with symbolic learning.

Therefore, for this special issue we solicit papers that explore and contribute to the discussion on open questions such as:

  • How can XCS-based systems be fused with deep learning (or other cutting-edge) concepts while maintaining the idiosyncratic advantages of both, e.g., flexible online interpretable machine learning combined with the ability to efficiently and accurately model extremely complex problems through hierarchical feature learning?
  • What algorithmic and/or theoretical advances are still needed to overcome persisting limitations of XCS, e.g., the maintenance of long action chains in delayed reward settings within contemporary reinforcement learning tasks?
  • What are novel or potentially untapped application domains in which XCS is, or could prove, particularly advantageous over other machine learning techniques?
  • What are the latest deep insights into XCS resulting from mathematical analysis, ablation studies or rigorous method interaction analysis?

Article categories and submission instructions
We solicit manuscripts which belong to the following article types offered by ECJ:
Full-length original research articles (including surveys, typically approx. 25 pages)
Letters (short articles, typically approx. 6 pages)
ECJ accepts papers that broadly fall into three categories: Applications, Experimental Results, and Theory. Of course, many papers may fall into more than one category.

The focus of this special issue is not set exclusively on original research papers; survey-type papers are equally welcome. For contributions concentrating on novel applications, it must be thoroughly explained why XCS is particularly suited, which algorithmic adaptations facilitate its adoption, and how the presented XCS-based approach compares to alternative methods.
Please carefully follow the general submission guidelines of the Evolutionary Computation journal, which also apply to this special issue. Submissions are handled over the Evolutionary Computation Editorial Manager. Authors must select “Special Issue: 30th Anniversary XCS” as the article type when submitting.

Review and Process
All submissions will receive a minimum of two reviews: at least one reviewer will have a strong LCS background, and another will bring a broader perspective on the EC and EML field. For manuscripts focusing on applications of XCS to new domains, one reviewer will instead be selected from the specific application domain.

Please submit your manuscripts by October 31, 2024.

Anticipated timeline:
Manuscript submission: October 31, 2024
Author notification: April 15, 2025
Revision phase until: September 2025
Finalization: October 2025

We invite prospective authors who plan to contribute a survey-type paper to inform the guest editor team up front in order to prevent potential duplication of effort. In case of any questions, don’t hesitate to write an email to: anthony.stein@uni-hohenheim.de

FINAL submission deadline for IEEE WCCI 2016

The IEEE WCCI 2016 paper submission deadline has been extended to

31st January 2016, 24:00 EST

Special Session on New Directions in Evolutionary Machine Learning

2016 IEEE Congress on Evolutionary Computation (WCCI 2016 / CEC 2016)

Vancouver, Canada, 25-29 July, 2016

[See previous post below for details of the call for papers for the special session most suited to Genetics-based Machine Learning and Learning Classifier Systems]

Please select the special session under the main research topic (otherwise the paper will be treated as a general paper and may be reviewed by researchers outside of this field):

New Directions in Evolutionary Machine Learning

Special Session on New Directions in Evolutionary Machine Learning at WCCI/CEC 2016

Dear LCS, GBML, RBML and EML Researcher,

Apologies for the multiple postings as WCCI/CEC has now approved the Special Sessions.

Please forward this CFP to your colleagues, students, and those who may be interested. Thank you.

Call for Papers

Special Session on New Directions in Evolutionary Machine Learning

2016 IEEE Congress on Evolutionary Computation (WCCI 2016 / CEC 2016)

Vancouver, Canada, 25-29 July, 2016

Aim and scope:

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes a local search while evolutionary computation contributes a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce a system performance that cannot be achieved by either approach alone.

Historically, this research area was called GBML (genetics-based machine learning) and was concerned with learning classifier systems (LCS) and their numerous variants, such as fuzzy learning classifier systems (Fuzzy LCS).

Recently, EML has emerged as a more general field than GBML; EML covers a wider range of adapted machine learning methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects.

This special session follows the first, successful special session (the largest among the special sessions) held at CEC 2015. Continued exploration of this field by organizing the special session at CEC is indispensable to establishing the discipline of EML. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)

– Evolutionary fuzzy systems

– Evolutionary reinforcement learning

– Evolutionary neural networks

– Evolutionary adaptive systems

– Artificial immune systems

– Genetic programming applied to machine learning

– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains

– Accuracy-Interpretability tradeoff in EML

– Applications and theory of EML

 

Organisers:

Will Browne (*1), Keiki Takadama (*2), Yusuke Nojima (*3), Masaya Nakata (*4), Tim Kovacs (*5)

E-mail:

(*1) will.browne@vuw.ac.nz, (*2) keiki@inf.uec.ac.jp, (*3) nojima@cs.osakafu-u.ac.jp,

(*4) m.nakata@cas.hc.uec.ac.jp (*5) tim.kovacs@bristol.ac.uk

Affiliations:

(*1) Victoria University of Wellington, New Zealand

(*2) The University of Electro-Communications, Japan

(*3) Osaka Prefecture University, Japan

(*4) The University of Electro-Communications, Japan

(*5) University of Bristol, UK

 

Associated Website:

https://sites.google.com/site/wcci2016sseml/

 

John H. Holland

Sad news that John Holland passed away on the weekend. A warm obituary can be found here:

http://www.santafe.edu/news/item/in-memoriam-john-holland/

Many people’s lives and research have been touched by his ideas and enthusiasm.  This site definitely would not exist without them.

Curiously, his passing may not have been major mainstream news, but his ideas are. It was interesting to note that fields built on his ideas were name-checked in the latest Google announcement: https://abc.xyz.

History will undoubtedly recognise John as a pioneer in the computer age.

 

 

 

CEC Deadline extension

Important Dates

Paper submission deadline: January 16, 2015 (extended from December 19, 2014)

Paper acceptance notification: February 20, 2015

Final paper submission deadline: March 13, 2015

Conference dates: May 25-28, 2015

Special Session on

New Directions in Evolutionary Machine Learning

Motivation

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes a local search while evolutionary computation contributes a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce a system performance that cannot be achieved by either approach alone. Historically, this research area was called GBML (genetics-based machine learning) and was concerned with learning classifier systems (LCS) and their numerous variants, such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider range of adapted machine learning methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)

– Evolutionary fuzzy systems

– Evolutionary data mining

– Evolutionary reinforcement learning

– Evolutionary neural networks

– Evolutionary adaptive systems

– Artificial immune systems

– Accuracy-Interpretability tradeoff in EML

– Applications and theory of EML

– Genetic programming applied to machine learning

– Evolutionary feature selection and construction for machine learning

– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains


Paper Submission

Special session papers are treated the same as regular papers and must be submitted via the CEC 2015 submission website. To submit your paper to this special session, you have to choose our special session (ID SS52) on the submission page.

Organizers

  • Keiki Takadama, The University of Electro-Communications, Japan (Contact: keiki@inf.uec.ac.jp)
  • Tim Kovacs, University of Bristol, UK.
  • Yusuke Nojima, Osaka Prefecture University, Japan
  • Will Browne, Victoria University of Wellington, New Zealand
  • Masaya Nakata, The University of Electro-Communications, Japan

 

Special Session URL: https://sites.google.com/site/cec2015sseml/

Conference URL: http://sites.ieee.org/cec2015/

New Directions in Evolutionary Machine Learning at 2015 IEEE Congress on Evolutionary Computation (CEC 2015)

This is a call to submit a paper to the special session on New Directions in Evolutionary Machine Learning at the 2015 IEEE Congress on Evolutionary Computation (CEC 2015), which will be held in Sendai, Japan on May 25-28, 2015.
If you are interested in our special session and planning to submit a paper, please let us know beforehand; we would like to have a list of tentative papers. Of course, you may also submit without replying to this message. Please choose the session ID SS52 on the submission system.

Special Session: New Directions in Evolutionary Machine Learning
Organizers: Keiki Takadama, Tim Kovacs, Yusuke Nojima, Will Browne, Masaya Nakata

Evolutionary Machine Learning (EML) explores technologies that integrate machine learning with evolutionary computation for tasks including optimization, classification, regression, and clustering. Since machine learning contributes a local search while evolutionary computation contributes a global search, one of the fundamental interests in EML is the management of interactions between learning and evolution to produce a system performance that cannot be achieved by either approach alone. Historically, this research area was called GBML (genetics-based machine learning) and was concerned with learning classifier systems (LCS) and their numerous variants, such as fuzzy learning classifier systems (Fuzzy LCS). More recently, EML has emerged as a more general field than GBML; EML covers a wider range of adapted machine learning methods such as genetic programming for ML, evolving ensembles, evolving neural networks, and genetic fuzzy systems; in short, any combination of evolution and machine learning. EML is consequently a broader, more flexible and more capable paradigm than GBML. From this viewpoint, the aim of this special session is to explore potential EML technologies and clarify new directions for EML to show its prospects. For this purpose, this special session focuses on, but is not limited to, the following areas in EML:

– Evolutionary learning systems (e.g., learning classifier systems)
– Evolutionary fuzzy systems
– Evolutionary data mining
– Evolutionary reinforcement learning
– Evolutionary neural networks
– Evolutionary adaptive systems
– Artificial immune systems
– Genetic programming applied to machine learning
– Evolutionary feature selection and construction for machine learning
– Transfer learning; learning blocks of knowledge (memes, code, etc.) and evolving the sharing to related problem domains
– Accuracy-Interpretability tradeoff in EML
– Applications and theory of EML

Important dates are as follows:
– Paper Submission Deadline: December 19, 2014
– Paper Acceptance Notification: February 20, 2015
– Final Paper Submission Deadline: March 13, 2015
– Early Registration: March 13, 2015
– Conference Dates: May 25-28, 2015

Further information about the special session and the conference can be found:
– 2015 IEEE Congress on Evolutionary Computation
http://sites.ieee.org/cec2015/
– Special Session on New Directions in EML
https://sites.google.com/site/cec2015sseml/

Best regards,
Keiki, Tim, Yusuke, Will, and Masaya

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Yusuke NOJIMA, Dr.

Dept. of Computer Science and Intelligent Systems, Graduate School of Engineering, Osaka Prefecture University

Gakuen-cho 1-1, Naka-ku, Sakai, Osaka 599-8531, JAPAN
Phone: +81-72-254-9198, FAX: +81-72-254-9915
Email: nojima@cs.osakafu-u.ac.jp
http://www.cs.osakafu-u.ac.jp/ci/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ExSTraCS – Extended Supervised Tracking and Classifying System

Ryan Urbanowicz is pleased to announce an advanced LCS for data mining:

 

This advanced machine learning algorithm is a Michigan-style learning classifier system (LCS) developed to specialize in classification, prediction, data mining, and knowledge discovery tasks. Michigan-style LCS algorithms constitute a unique class of algorithms that distribute learned patterns over a collaborative population of individually interpretable IF:THEN rules, allowing them to flexibly and effectively describe complex and diverse problem spaces. ExSTraCS was primarily developed to address problems in epidemiological data mining: identifying complex patterns relating predictive attributes in noisy datasets to disease phenotypes of interest. ExSTraCS combines a number of recent advancements into a single algorithmic platform. It can flexibly handle (1) discrete or continuous attributes, (2) missing data, (3) balanced or imbalanced datasets, and (4) binary or many classes. A complete user's guide for ExSTraCS is included. Coded in Python 2.7.

 

[http://sourceforge.net/projects/exstracs/]

Call For Papers Special Session: Evolutionary Feature Reduction The 10th International Conference on Simulated Evolution And Learning (SEAL 2014)

Call For Papers
Special Session: Evolutionary Feature Reduction
The 10th International Conference on Simulated Evolution And Learning (SEAL 2014)
15-18 December 2014, Dunedin, New Zealand
http://seal2014.otago.ac.nz/

================

Large numbers of features/attributes are often problematic in machine learning and data mining, leading to a condition known as “the curse of dimensionality”. Feature reduction aims to solve this problem by selecting a small number of the original features or constructing a smaller set of new features. Feature selection and construction are challenging tasks due to the large search space and feature interaction problems. Recently, there has been increasing interest in using evolutionary computation approaches to solve these problems.

The theme of this special session is the use of evolutionary computation for feature reduction, covering ALL evolutionary computation paradigms, including evolutionary algorithms, swarm intelligence, learning classifier systems, harmony search, artificial immune systems, and cross-fertilization of evolutionary computation with other techniques such as neural networks, and fuzzy and rough sets. This special session aims to investigate both new theories and methods in different evolutionary computation paradigms for feature reduction, and applications of evolutionary computation to feature reduction. Authors are invited to submit their original and unpublished work to this special session.
Topics of interest include but are not limited to:
• Feature ranking/weighting
• Feature subset selection
• Dimensionality reduction
• Feature construction
• Filter, wrapper, and embedded feature selection
• Hybrid feature selection
• Feature reduction for both supervised and unsupervised learning
• Multi-objective feature reduction
• Feature reduction with imbalanced data
• Analysis on evolutionary feature reduction methods
• Real-world applications of evolutionary feature reduction, e.g. gene analysis, biomarker detection, etc.
================
Important Dates:
28 July 2014, deadline for submission of full papers (<=12 pages)
29 August 2014, Notification of acceptance
16 September 2014, Deadline for camera-ready copies of accepted papers
15-18 December 2014, Conference sessions (including tutorials and workshops)
================
Paper Submission:
You should follow the SEAL 2014 Submission Web Site
(http://seal2014.otago.ac.nz/submissions.aspx). In the Main Research Topic, please choose
“Evolutionary Feature Reduction”
Special session papers are treated the same as regular conference papers. All papers will be fully refereed by a minimum of two specialized referees, and all referees' comments must be addressed before final acceptance. All accepted papers that are presented at the conference will be included in the conference proceedings, to be published in Lecture Notes in Computer Science (LNCS) by Springer. Selected papers will be invited for revision and extension for possible publication, after further review, in a special issue of the SCI journal Soft Computing (Springer, Impact Factor 1.124).
================
Special Session Organizers:
Dr Bing Xue
School of Engineering and Computer Science, Victoria University of Wellington,
PO Box 600, Wellington, New Zealand.
Email: bing.xue@ecs.vuw.ac.nz
Homepage: http://ecs.victoria.ac.nz/Main/BingXue

Dr Kourosh Neshatian
Computer Science and Software Engineering, College of Engineering, University of Canterbury
Email: kourosh.neshatian@canterbury.ac.nz
Homepage: http://www.cosc.canterbury.ac.nz/kourosh.neshatian/
================
Note: Please reply to me if you want to be a Program Committee Member.

Best regards,
Bing

***************************************
Room CO 351, Cotton Building
Victoria University of Wellington
PO Box 600, Wellington 6140,
New Zealand
Mobile Phone: +64 220327481
Phone: +64-4-463 5233 ext 8874
Email: xuebingfifa@gmail.com
Bing.Xue@ecs.vuw.ac.nz
****************************************

CFP: Evolutionary Machine Learning track at GECCO-2014

** Apologies for multiple postings **

***********************************************************************
** CALL FOR PAPERS                                                                                             **
** 2014 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE   (GECCO-2014) **
** Evolutionary Machine Learning track                                                             **
** July 12-16, 2014, Vancouver, BC, Canada                                                       **
** Organized by ACM SIGEVO                                                                               **
** http://www.sigevo.org/gecco-2014                                                                  **
***********************************************************************

You are invited to plan your participation in the Genetic and
Evolutionary Computation Conference (GECCO 2014). This conference will
present the latest high-quality results in genetic and evolutionary
computation.

** Important dates **

* Abstract submission: January 15, 2014
* Submission of full papers: January 29, 2014
* Notification of paper acceptance: March 12, 2014
* Camera ready submission: April 14, 2014
* Conference: July 12-16, 2014

** Call for Papers: Evolutionary Machine Learning Track **

[New incarnation of the GBML track]

http://www.sigevo.org/gecco-2014/organizers-tracks.html#eml

The Evolutionary Machine Learning (EML) track at GECCO covers all
advances in theory and application of evolutionary computation methods
to Machine Learning (ML) problems. ML presents an array of paradigms —
unsupervised, semi-supervised, supervised, and reinforcement learning —
which frame a wide range of clustering, classification, regression,
prediction and control tasks.

The literature shows that evolutionary methods can tackle many different
tasks within the ML context:
– addressing subproblems of ML e.g. feature selection and construction
– optimising parameters of ML methods, a.k.a. hyper-parameter tuning
– as learning methods for classification, regression or control tasks
– as meta-learners which adapt base learners
* evolving the structure and weights of neural networks
* evolving the data base and rule base in genetic fuzzy systems
* evolving ensembles of base learners

The global search performed by evolutionary methods can complement the
local search of non-evolutionary methods and combinations of the two are
particularly welcome.

Some of the main EML subfields are:
– Learning Classifier Systems (LCS) are rule-based systems introduced by
John Holland in the 1970s. LCSs are one of the most active and
best-developed forms of EML and we welcome all work on them.
– Hyper-parameter tuning with evolutionary methods.
– Genetic Programming (GP) when applied to machine learning tasks (as
opposed to function optimisation).
– Evolutionary ensembles, in which evolution generates a set of learners
which jointly solve problems.
– Evolving neural networks or Neuroevolution when applied to ML tasks.

In addition we encourage submissions including but not limited to the
following:

1. Theoretical advances
– Theoretical analysis of mechanisms and systems
– Identification and modeling of learning and scalability bounds
– Connections and combinations with machine learning theory
– Analysis and robustness in stochastic, noisy, or non-stationary
environments
– Efficient algorithms

2. Modification of algorithms and new algorithms
– Evolutionary rule learning, including but not limited to:
* Michigan style (SCS, NewBoole, EpiCS, ZCS, XCS, UCS…)
* Pittsburgh style (GABIL, GIL, COGIN, REGAL, GA-Miner, GALE, MOLCS,
GAssist…)
* Anticipatory LCS (ACS, ACS2, XACS, YACS, MACS…)
* Iterative Rule Learning Approach (SIA, HIDER, NAX, BioHEL, …)
– Genetic fuzzy systems
– Evolution of Neural Networks
– Evolution of ensemble systems
– Other hybrids combining evolutionary techniques with other machine
learning techniques

3. Issues in EML
– Competent operator design and implementation
– Encapsulation and niching techniques
– Hierarchical architectures
– (Sub-)Structure (building block) identification and linkage learning
– Knowledge representations, extraction and inference
– Data sampling
– Scalability

4. Applications
– Data mining
– Bioinformatics and life sciences
– Rapid application development frameworks for EML
– Robotics, engineering, hardware/software design, and control
– Cognitive systems and cognitive modeling
– Dynamic environments, time series and sequence learning
– Artificial Life
– Economic modelling
– Network security
– Other kinds of real-world ML applications

5. Related Activities
– Visualisation of all aspects of EML (performance, final solutions,
evolution of the population)
– Platforms for EML, e.g. GPGPUs
– Competitive performance, e.g. EML performance in Competitions and Awards
– Education and dissemination of EML, e.g. software for teaching and
exploring aspects of EML.

** Submissions **

Abstracts need to be submitted by January 15, 2014. Full papers are due
by the **__non-extensible deadline__** of January 29, 2014. Detailed
submission instructions can be found at http://www.sigevo.org/gecco-2014.

Each paper submitted to GECCO will be rigorously evaluated in a
double-blind review process. The evaluation is on a per-track basis,
ensuring high interest and expertise of the reviewers. Review criteria
include significance of the work, technical soundness, novelty, clarity,
writing quality, and sufficiency of information to permit replication,
if applicable. All accepted papers will be published in the ACM Digital
Library.

GECCO allows submission of material that is substantially similar to a
paper being submitted contemporaneously for review by another
conference. However, if the submitted paper is accepted by GECCO, the
authors agree that substantially the same material will not be published
by another conference. Material may later be revised and submitted to a
journal, if permitted by the journal.

Researchers are invited to submit abstracts of their work recently
published in top-tier conferences and journals to the new Hot Off the
Press track. Contributions will be selected based on quality and
interest to the GECCO community.

** Track Chairs **

– Jaume Bacardit, jaume.bacardit@newcastle.ac.uk
– Tom Schaul, schaul@gmail.com

Python Code for Generating Multiplexer Data and Datasets

Generate_Multiplexer

A short Python script with methods that allow the user to easily generate n-multiplexer problem data. Users can generate single instances, datasets with a specified number of random instances, or complete datasets of all unique multiplexer instances (memory allowing), saved to a .txt file.
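The script itself is not reproduced here, but a minimal sketch of such a generator might look like the following (the function names are illustrative, not those of the actual script):

```python
import itertools
import random

def multiplexer_output(bits, k):
    """The k address bits bits[:k] select one of the 2**k data bits."""
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

def generate_dataset(k=2, n_random=None):
    """Generate (instance, class) pairs for the (k + 2**k)-multiplexer.
    With n_random=None, enumerate all unique instances (memory allowing);
    otherwise sample n_random random instances."""
    length = k + 2 ** k
    if n_random is None:
        rows = itertools.product([0, 1], repeat=length)
    else:
        rows = ([random.randint(0, 1) for _ in range(length)]
                for _ in range(n_random))
    return [(list(bits), multiplexer_output(list(bits), k)) for bits in rows]

# complete 6-multiplexer dataset: 2**6 = 64 unique labeled instances
data = generate_dataset(k=2)
print(len(data))  # -> 64
```

Writing each `(instance, class)` pair as a tab-separated line would then reproduce the .txt output the announcement describes.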