Error-correcting Petri nets

Abstract  

The paper introduces error-correcting Petri nets, an algebraic methodology for designing synthetic biological systems with monitoring capabilities. Linear error-correcting codes are used to extend the net’s structure in a way that allows for the algebraic detection and correction of non-reachable net markings. The presented methodology is based on modulo-p Hamming codes—which are optimal for the modulo-p correction of single errors—but also works with any other linear error-correcting code.
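
As a minimal illustration of the machinery such codes provide, here is a sketch of my own (not the paper’s construction) of single-error correction with the binary, p = 2, Hamming(7,4) code; in the paper’s setting the role of the codeword is played by an extended net marking.

import numpy as np

# Parity-check matrix of the binary Hamming(7,4) code: column j (1-based)
# is the binary representation of j, so a nonzero syndrome directly names
# the position of a single-bit error.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct_single_error(word):
    """Return a copy of word with a single bit error corrected (mod 2)."""
    syndrome = H.dot(word) % 2
    pos = syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2]  # 0 means no error
    fixed = word.copy()
    if pos:
        fixed[pos - 1] ^= 1
    return fixed

received = np.array([0, 0, 0, 1, 0, 0, 0])   # all-zero codeword with bit 4 flipped
print(correct_single_error(received))        # -> [0 0 0 0 0 0 0]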

  • Content Type Journal Article
  • Pages 711-725
  • DOI 10.1007/s11047-009-9150-z
  • Authors
    • Anastasia Pagnoni, Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano, Milano, Italy

iFoundry iCommunity iLaunch takes place

The Illinois Foundry for Innovation in Engineering Education (iFoundry) held its iLaunch this weekend at the 4-H camp at Allerton Park in Monticello, Illinois. iFoundry freshmen formed an iCommunity consisting of four teams. Many of the ideas in the design of iFoundry are drawn from the practice of genetic algorithms and evolutionary computation in a social setting. Watch the iLaunch video below:

See other iLaunch materials on the iFoundry website www.ifoundry.illinois.edu.

New book Essentials of Metaheuristics by Sean Luke available online

A new book, Essentials of Metaheuristics by Sean Luke, is available online and can be downloaded for free from its web site. Information about the book from the author’s web site:

This is an open set of lecture notes on metaheuristics algorithms, intended for undergraduate students, practitioners, programmers, and other non-experts. It was developed as a series of lecture notes for an undergraduate course I taught at GMU. The chapters are designed to be printable separately if necessary. As it’s lecture notes, the topics are short and light on examples and theory. It’s best when complementing other texts. With time, I might remedy this.

A study on diversity for cluster geometry optimization

Abstract  Diversity is a key issue to consider when designing evolutionary approaches for difficult optimization problems. In this paper, we address the development of an effective hybrid algorithm for cluster geometry optimization. The proposed approach combines a steady-state evolutionary algorithm and a straightforward local method that uses derivative information to guide the search into the nearest local optimum. The optimization method incorporates a mechanism to ensure that the diversity of the population does not drop below a pre-specified threshold. Three alternative distance measures to estimate the dissimilarity between solutions are evaluated. Results show that diversity is crucial to increasing the effectiveness of the hybrid evolutionary algorithm, as it enables it to discover all putative global optima for Morse clusters up to 80 atoms. A comprehensive analysis is presented to gain insight into the most important strengths and weaknesses of the proposed approach. The study shows why distance measures that consider structural information when estimating the dissimilarity between solutions are better suited to this problem than those that take into account fitness values. A detailed explanation for this differentiation is provided.
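
As a rough sketch of the kind of diversity guard described here (my own illustration with hypothetical names; the paper’s actual measures and replacement policy may differ):

import numpy as np

def admit(population, candidate, min_dist, dist):
    """Steady-state replacement guard: accept the candidate only if it keeps
    every dissimilarity to current members above a fixed threshold.
    `dist` is whichever dissimilarity measure is in use; the paper compares
    three alternatives."""
    return all(dist(candidate, ind) >= min_dist for ind in population)

# One plausible structural measure (not necessarily the paper's): Euclidean
# distance between the sorted interatomic-distance fingerprints of two
# cluster geometries, each given as an (n_atoms, 3) coordinate array.
def structural_dist(a, b):
    fa = np.sort(np.linalg.norm(a[:, None] - a[None, :], axis=-1).ravel())
    fb = np.sort(np.linalg.norm(b[:, None] - b[None, :], axis=-1).ravel())
    return np.linalg.norm(fa - fb)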

  • Content Type Journal Article
  • DOI 10.1007/s12065-009-0020-5
  • Authors
    • Francisco B. Pereira, Instituto Superior de Engenharia de Coimbra, 3030-199 Coimbra, Portugal
    • Jorge M. C. Marques, Departamento de Química, Universidade de Coimbra, 3004-535 Coimbra, Portugal

AI: Reality or fiction?

It seems that the artificial intelligence portrayed in science fiction is not as far from reality as we used to think. The main character of the film AI, a little boy from a series of robots capable of emulating human behavior, is now a model to aspire to for current scientific projects, which aim at providing machines with consciousness, thoughts, and emotions with which to interact with human beings. Thus, the world described in Blade Runner, a world where humans and robots coexist and cannot be distinguished with the naked eye, may be just around the corner.

Advances in the AI field, however, are starting to raise serious concerns about robot autonomy and its social status, as well as how to face this social disruption, and the three Laws elaborated by Asimov to protect humans from machines are starting to make sense to people other than computer geeks. Scientists are concerned about the “loss of human control of computer-based intelligences”, and last February the Association for the Advancement of Artificial Intelligence organized a conference in Asilomar (not a casual choice of venue) to discuss the limits of research in this field. The development of machines that are close to being able to kill autonomously is worth a discussion by those involved in creating the brains of such devices. News of this event appeared in Markoff’s article in the New York Times.

On the other hand, who will be responsible for damages caused by these autonomous friends? The machines themselves, or their designers? In this sense, philosophy should play a leading role in the design and integration of these “future citizens”, since they should have a moral system allowing them to learn ethics from experience and from people, and also to find their place in our society. The latter implies creating a legal framework that defines machines’ civic rights and duties, a proposal that is currently under study (see the news published by “El Periódico”, in Spanish).

Finally, one may ask whether or not we are ready to live with human emulators. In my view, we are not. Although in past years we have been skillful at adapting to new and challenging situations, and our experience with immigrant integration and racial conflicts should help us welcome these new electronic neighbors, I tend to think that coexistence with robots will be one of the greatest challenges mankind has ever faced. In any case, we will need to figure out a way to overcome it, because the individualism and loneliness ruling our current society are leading us unrelentingly toward a future with custom-made roommates.

Easy, reliable, and flexible storage for Python

A while ago I wrote a little post about alternative column stores. One that I mentioned was Tokyo Cabinet (and its associated server Tokyo Tyrant). Tokyo Cabinet is a key-value store written in C with bindings for multiple languages (including Python and Java). It can maintain databases in memory or persist them to disk (you can pick between hash-based and B-tree-based stores).

Having heard a bunch of good things, I finally gave it a try. I installed both Cabinet and Tyrant using the usual configure, make, make install cycle (you may find useful installation instructions here). Another nice feature of Tyrant is that it also supports HTTP gets and puts. Having said all this, I just wanted to check how easy it was to use from Python. The answer: very easy. Joseph Turian’s examples got me running in less than 2 minutes—see the piece of code below—when dealing with a local database. Using Tyrant over HTTP is quite simple too—see the PeteSearch blog post.

import pickle

import pytc
from numpy import arange

# Open (or create) a hash-based database file.
hdb = pytc.HDB()
hdb.open('casket.tch', pytc.HDBOWRITER | pytc.HDBOCREAT)

# Values are plain strings, so pickle the numpy array before storing it.
a = arange(100)
hdb.put('test', pickle.dumps(a))

# Read the value back and check the round trip.
b = pickle.loads(hdb.get('test'))
if (a == b).all():
    print 'OK'

hdb.close()
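
For the HTTP route, here is a minimal, untested sketch of my own, assuming a Tokyo Tyrant server on its default port 1978 on localhost (Tyrant stores the body of a PUT under the key given in the URL path):

import urllib2

base = 'http://localhost:1978/'

# PUT: store the request body under the key 'test'.
req = urllib2.Request(base + 'test', data='hello tyrant')
req.get_method = lambda: 'PUT'   # urllib2 defaults to POST when data is set
urllib2.urlopen(req)

# GET: fetch the value back.
print urllib2.urlopen(base + 'test').read()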

Related posts:

  1. Temporary storage for Meandre’s distributed flow execution
  2. Efficient storage for Python
  3. A simple and flexible GA loop in Python

Journal Publication versus Conference Contribution?

In a recent issue of the Communications of the ACM, Moshe Vardi discusses the pros and cons of journal archival publications versus conference contributions. The upshot of his statement, which points to two recent contributions to the viewpoint columns of the journal [1], [2], is that perhaps it is time for computer scientists to shift emphasis away from conference and workshop contributions and start publishing in journals, as all other sciences do. A lively discussion followed; see, among others, the opinion piece by Lance Fortnow.

As an editor myself of Genetic Programming and Evolvable Machines, I have always wondered why it would be more attractive for people in our discipline to publish in conference venues than in archival journals. Are there not enough journals to allow for scientific progress? Or is there a dire need to communicate with colleagues in spatial co-location? Well, to my mind, neither of the two! We are not the type of people who want to discuss our results at great length. Our conferences and workshops usually operate under tight time constraints, and one to three questions is about the average a presenter receives; anything more would eat into the next presenter’s time and is discouraged. Also, the number of journals accepting work from our field has grown over the years to a very reasonable number, so there is no shortage of places where quality work can find a home.

What is it, then, that makes us submit and publish so much at conferences? Possible explanations are the existence of deadlines and the incremental nature of much of the work published. The existence of deadlines is a valuable selection pressure in our hectic times, where everything is under the dictate of time-driven priorities. It can only be mimicked by journals through the introduction of regular “special issues”, which come with the same requirement and are usually successful in attracting work. As for the second possible explanation, I’d like to cite from [1] on the pitfalls of program committee work: “And arguably it is the more innovative papers that suffer because they are time consuming to read and understand, so they are the most likely to be either completely misunderstood or underappreciated by an increasingly error-prone process.” So while innovative work has a harder time at conferences, “our culture creates more units to review with a lower density of new ideas.” It is not only that we get to review smaller pieces of work; we are also busier, with all the workshops and conferences that make us look at these papers. “Genuinely innovative papers that have issues, but could have been conditionally accepted, are all too often rejected in this climate of negativism. So the less ambitious, but well-executed work trumps what could have been the more exciting result.” Such papers would have to be revised and revised and revised again, and there is no time to do this for conferences. Journal articles, on the other hand, can be worked on for a long time if need be, and there is no time pressure except that delays can become unbearable and make results obsolete.

In the end, however, it is the impact of the work that counts most. And it is my experience that a carefully edited journal paper is worth the effort, as it produces impact on a scale that conference papers have difficulty achieving.

[1] K. Birman and F. B. Schneider, Comm. ACM, 52(5), 2009, p. 34.
[2] J. Crowcroft, S. Keshav, and N. McKeown, Comm. ACM, 52(1), 2009, p. 27.

Save the Date for Philosophy, Engineering & Technology: 9-10 May 2010

The 2010 Forum on Philosophy, Engineering, and Technology (FPET-2010) will be held on 9-10 May 2010 (Sunday evening through Monday) at the Colorado School of Mines in Golden, CO. The event is an outgrowth of the WPE-2007 and WPE-2008 meetings held in Delft and London.

Philosophical reasoning was important to the writing of The Design of Innovation, and DoI author David E. Goldberg is one of FPET-2010’s organizers. More information is available at www.philengtech.org.

IWLCS 2009 review

By Will Browne, Jan Drugowitsch and Jaume Bacardit

The 12th International Workshop on Learning Classifier Systems (LCS) successfully took place on July 9th, 2009 in Montreal, Canada as part of GECCO 2009. Its ‘success’ was measured in terms of the number of attendees (several times the number of presenters), the quality of papers, the diversity of topics, the originality of ideas, the active discussions, and a convivial atmosphere.

This year’s workshop was deliberately more of a workshop than a mini-conference, for a few reasons. A major factor was that LCS papers have an excellent home in the Genetics-Based Machine Learning (GBML) track of GECCO, with reviewers amenable to the topics and quality of research. Thus the workshop sought to encourage discussion on the subjects of its four sessions, both to introduce attendees to the field and to deepen understanding. Efficiency emerged as a very hot topic both in the workshop and in the GBML track, and the related discussion continued long past the scheduled time.

Other topics of great interest included cooperation within sub-populations, coevolution, application areas, platforms for LCS (CUDA, robotics), advancements and understanding (e.g. XCSF), and model adaptation. The program, including titles of talks, can be found at LCS & GBML Central (http://www.lcs-gbml.ncsa.uiuc.edu/), which is becoming the central home for LCS on the Web. Researchers were (and are) encouraged to post their bios, code, benchmark problems, benchmark results, technical reports, publishable papers, and thoughts and queries on the field. Importantly, LCS & GBML Central acts as an aggregator, so the latest work on academic home pages can be piped in.

The discussion topics were:

  1. XCSF Current Capabilities and Challenges
  2. Efficiency
  3. LCS’ suitability for Cognitive Robotics

Pier Luca Lanzi started off the workshop by presenting work on extending Martin Butz’s theory of the different XCS genetic pressures from ternary representations to the interval-based real-valued representations used in XCSF. Most of these pressures are not derivable in closed form, but the approximations used were shown to match empirical observations well. Besides giving insight into how these pressures depend on the settings of various system parameters, the point Pier Luca especially highlighted was that for interval-based representations one needs some idea of the distribution of specificity/generality of the rules in the population. This stands in contrast to Martin Butz’s work, where the average specificity usually determined the algorithm’s behaviour.
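
As background for readers unfamiliar with these conditions, here is a minimal sketch of mine (not Pier Luca’s formulation): an interval-based rule matches a real-valued input when every component falls inside the rule’s interval, and its generality can be summarized by the interval widths.

def matches_interval(lower, upper, x):
    """Interval-based condition: matches when every input component x_i
    lies within the rule's [lower_i, upper_i] interval."""
    return all(l <= xi <= u for l, xi, u in zip(lower, x, upper))

def generality(lower, upper, span=1.0):
    """One simple summary of a rule's generality: the mean interval width
    relative to the input range."""
    return sum(u - l for l, u in zip(lower, upper)) / (span * len(lower))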

Afterwards, Martin Butz led an insightful discussion on the theory and application of XCSF. Issues identified included the schema challenge (too general an initial population), the coverage challenge (too specific an initial population), the identification of manifolds and sub-manifolds for mapping the problem space to the solution space, context-dependent mappings, fitness gradients, and the setting of the r0 value. ‘Black art’ (empirically based) guidelines, such as the population size being 10 times the number of anticipated niches, were complemented by theoretical limits and bounds. Confidence was given that parameter setting should not be an obstacle for practical application, with robust ranges, including high learning rates when the recursive least squares (RLS) algorithm is employed for rule prediction learning.

The second session was mostly dedicated to efficiency issues. Matching is the main CPU bottleneck in LCS, and three improvements were disseminated. Pier Luca Lanzi discussed the use of GPUs (graphics processing units, via the CUDA architecture) for hardware speedup, but noted that an understanding of the match routine’s function was necessary to achieve the best performance and provide a fair comparison. Drew Mellor outlined a tree-based approach to avoiding redundant matching operations, which was further illuminated in his track presentation. Similarly, Tim Kovacs outlined how knowing the match certainty (don’t-cares provide less match certainty than specific bits) can direct the efficiency of the matching process. Jaume Bacardit presented a summary of recently proposed alternatives for boosting matching efficiency, as well as a series of open questions about these methods.
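
The routine all of these speedups target is tiny; a minimal sketch of ternary matching, with ‘#’ as the don’t-care symbol and hypothetical rule objects carrying a condition string:

def matches(condition, state):
    """Ternary LCS matching: a rule's condition matches a binary state
    string if every position is either equal or a '#' don't-care."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def match_set(population, state):
    """The naive linear scan that the GPU, tree-based, and certainty-based
    approaches above aim to speed up."""
    return [rule for rule in population if matches(rule.condition, state)]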

The third session started with a presentation from Xavier Llora and Jose Garcia Moreno-Torres, which introduced a useful twist on LCS’s model-making capabilities. Commonly, an LCS induces an input-output model from training data that is hypothesised to be appropriate for predicting previously unknown outputs from completely new input data. However, when considering the case of two independent testing laboratories that follow supposedly identical testing procedures, any inherent differences in those procedures are likely to show up as a drop in prediction performance from the reference data to the new data. For such cases, they proposed evolving a ‘pre’ model that transforms the new data’s inputs such that the predictive performance of the first LCS model is restored. Additionally, the evolved transformation may give insight into the procedural differences between the two data sets.
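
As a rough sketch of the ‘pre’ model idea (my own paraphrase with hypothetical names; the authors’ actual setup may differ): the model trained on the reference laboratory’s data stays fixed, and one searches for an input transformation that restores its accuracy on the new laboratory’s data.

import numpy as np

def premodel_fitness(transform, model, X_new, y_new):
    """Score a candidate input transformation by how well the *fixed*
    reference model predicts the new lab's outputs once the new inputs
    are passed through the transform."""
    errors = model.predict(transform(X_new)) - y_new
    return -np.mean(errors ** 2)   # higher is better

# e.g. a simple per-feature affine correction whose (a, b) parameters
# could be evolved:
def make_affine(a, b):
    return lambda X: a * X + b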

Richard Preen’s talk then showed how LCS has been applied to the popular and difficult task of financial forecasting, with promising results.

Afterwards, Will Browne posed the question of why the application domain of cognitive robotics, which is inherently suited to LCS, has not been further explored by the LCS community. New platforms for both software and hardware were reviewed: cheap, robust, flexible, and with a fast learning curve. The experimental setup presented showed an LCS controlling software and hardware platforms synchronously through the same services. Furthermore, coupled asynchronous control was presented to show the capabilities of modern platforms for evolutionary cognitive robotics.

In the fourth session, Alex Scheidler presented what was possibly the richest talk of the workshop, as it explored a thread that has run through LCS research in a novel and demonstrably workable way: how to get sub-groups of classifiers to form, communicate in a beneficial way, and gracefully evolve. Previous work on corporate classifiers and speciation has shown promise, but additional benefit was shown by allowing the actions of selected rules within a Pittsburgh rule group to directly address other groups, and by severely limiting the number of rules that a group could maintain.

Next, Stewart Wilson introduced a potentially revolutionary concept for pattern recognition based on communication and coevolution. Rather than an ‘arms race’ between two competing agents, insight from the competition-and-cooperation philosophy of LCS was invoked. Two agents evolve patterns for communication between themselves, with an evolving ‘sniffer’ attempting to intercept the messages for its own reward. As a result, the sending agent evolves patterns that are increasingly hard to intercept, so the receiving agent needs to evolve increasingly powerful pattern recognisers.

Notice was given of the biannual international workshop LCS book, with the call for updated papers to follow within the next month. It is worth noting that all recent, relevant LCS work may be submitted, even if it was not submitted to the workshop.

The workshop meal started off in a sub-optimal location with a low cost-benefit payoff, which was fortunately rectified by a random-walk search of the local neighborhood! A relaxed and friendly way to close a productive workshop.