GAssist and GALE Now Available in Python


Ryan Urbanowicz has released Python versions of GAssist and GALE!!! Yup, so excited to see a new incarnation of GALE doing the rounds. I cannot wait to get my hands on it. Ryan has also done an excellent job porting UCS, XCS, and MCS to Python and making those implementations available via “LCS & GBML Central” for people to use. I think Ryan’s efforts deserve recognition. His code is helping others get an easier entry into LCS and GBML.

More information about Ryan’s implementations can be found below.

Side note: my original GALE implementation can also be downloaded here.

Related posts:

  1. GALE is back!
  2. Fast mutation implementation for genetic algorithms in Python
  3. Transcoding NIGEL 2006 videos

GECCO 2010 Around the Bend, GECCO 2011 Gearing Up


GECCO 2010 is getting ready to open on July 7th in Portland, Oregon. But GECCO does not sleep, and the 2011 edition is gearing up to take the torch after Portland.

There is currently an announcement, “GECCO 2011: Call for New Frontiers Track Proposals”. The deadline is June 30th, and you can get more information at http://goo.gl/m4my or at the public GECCO 2011 announcements Wave (embedded below).

You can follow GECCO 2011 on Twitter at @GECCO2011, join the GECCO 2011 group on Facebook, or just Wave with GECCO 2011.


Related posts:

  1. GECCO 2011 Submission Deadline: January 26, 2011
  2. GECCO 2011: Call for Workshop Proposals
  3. GECCO 2011 Competitions

GECCO-2011: Call for New Frontiers Track Proposals


2011 Genetic and Evolutionary Computation Conference (GECCO-2011)
July 12-16, Dublin, Ireland
Organized by ACM SIGEVO
20th International Conference on Genetic Algorithms (ICGA) and the 16th Annual Genetic Programming Conference (GP)
One Conference – Many Mini-Conferences: 15 Program Tracks

The organization of GECCO 2011 is well underway and we are happy to announce a very exciting program of tracks:

  • Ant Colony Optimization and Swarm Intelligence
  • Artificial Life, Evolutionary Robotics, Adaptive Behaviour and Evolvable Hardware
  • Bioinformatics, Computational, Systems and Synthetic Biology
  • Estimation of Distribution Algorithms
  • Evolution Strategies, Evolutionary Programming
  • Evolutionary (& Metaheuristics) Combinatorial Optimisation
  • Evolutionary Multiobjective Optimisation
  • Generative and Developmental Systems
  • Genetic Algorithms
  • Genetic Programming
  • Genetics-Based Machine Learning
  • Parallel Evolutionary Systems
  • Real World Applications
  • Search Based Software Engineering
  • Theory

For GECCO-2011, we wish to expand the list of usual tracks with one New Frontiers Track (NFT). The goal of the New Frontiers Tracks is to allow our community to explore emerging, exciting new ideas that relate to or impinge on Genetic and Evolutionary Computation and Natural Computing. Thus, we are delighted to invite NFT proposals.

NEW FRONTIERS TRACK PROPOSAL FORMAT

Proposals should include a track title, a list of up to two co-chairs, and an abstract of at most 1,000 characters. The abstract should both (i) describe the track’s focus and (ii) explain why the track should be added to GECCO.

SUBMISSION

Proposals for NFTs should be submitted to Natalio Krasnogor (Natalio.Krasnogor@Nottingham.ac.uk) with the subject “GECCO 2011 NFT Proposal”, no later than 30th June 2010.

SELECTION

The list of all NFTs will be on display at GECCO 2010, and a ballot (details to be announced later) will be carried out during GECCO 2010. Prospective chairs will be allowed to submit a one-minute video to support their track; the videos will be put online on the GECCO-2011 website. The NFT receiving the most votes will be selected for inclusion in the GECCO 2011 program. The selected NFT will have the same status, editorial procedures, and quality control as normal GECCO tracks. NFTs are expected to run for one year only. We believe this to be an exciting opportunity for the community to explore innovative, perhaps risky and adventurous, research topics within the framework of the top conference in the field. Hence, we very much look forward to a strong response from the community to this call for New Frontiers Track proposals!

Natalio Krasnogor (GECCO 2011, Editor-in-chief) & Pier Luca Lanzi (GECCO 2011, General Chair)

GECCO is sponsored by the Association for Computing Machinery Special Interest Group for Genetic and Evolutionary Computation (ACM SIGEVO).

You can follow GECCO 2011 on Twitter at @GECCO2011, join the GECCO 2011 Facebook group, or Wave with us.

LCS & GBML Central Gets a New Home


Today I finished migrating the LCS & GBML Central site from its original URL (http://lcs-gbml.ncsa.uiuc.edu) to a more permanent and stable home at http://gbml.org. The original site is currently redirecting traffic to the new one, and it will keep doing so for a while to help people transition and update their bookmarks and feed readers.

I have introduced a few changes to the functionality of the original site, which can mostly be summarized as (1) dropping the forums section and (2) closing comments on posts and pages. Both features, rarely used in their previous form, have been replaced by a simpler public embedded Wave reachable at http://gbml.org/wave. The goal is to provide people in the LCS & GBML community a simpler way to discuss, share, and hang out.

As for the feeds being aggregated, I have revised the list and added the newly available table-of-contents feeds from

I have also added a few other links to relevant research groups doing work on related areas. Please, leave a comment on this post if you know/have a related site that could be aggregated, or if there are missing links to research groups or useful resources.

Related posts:

  1. LCS & GBML Central back to production
  2. LCSweb + GBML blog = LCS & GBML Central
  3. New books section on the LCS and GBML web

Scaling eCGA Model Building via Data-Intensive Computing


I just uploaded the technical report of the paper we put together for CEC 2010 on how to scale up eCGA using a MapReduce approach. Besides exploring the Hadoop implementation, the paper also presents some very compelling results obtained with MongoDB (a document-based store able to perform parallel MapReduce tasks via sharding). The paper is available as PDF and PS.

Abstract:
This paper shows how the extended compact genetic algorithm can be scaled using data-intensive computing techniques such as MapReduce. Two different frameworks (Hadoop and MongoDB) are used to deploy MapReduce implementations of the compact and extended compact genetic algorithms. Results show that both are good choices to deal with large-scale problems, as they can scale with the number of commodity machines, as opposed to previous efforts with other techniques that either required specialized high-performance hardware or shared-memory environments.
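To make the map/reduce decomposition concrete, here is a minimal single-process Python sketch of the compact GA (cGA) on OneMax, where candidate sampling and evaluation play the role of the map phase and the probability-vector update plays the role of the reduce phase. This is an illustrative toy, not the paper’s Hadoop or MongoDB code; all names and parameter values are my own.

```python
import random

def compact_ga_onemax(n_bits=32, pop_size=50, max_iters=5000, seed=1):
    """Toy compact GA on OneMax, structured to mirror a MapReduce split."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # probability vector: chance that each bit is 1

    for _ in range(max_iters):
        # "map" phase: sample two candidates from the model and evaluate them
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)

        # "reduce" phase: shift the model toward the winner by 1/pop_size
        step = 1.0 / pop_size
        for i in range(n_bits):
            if winner[i] != loser[i]:
                p[i] = min(1.0, max(0.0, p[i] + (step if winner[i] else -step)))

        # stop once the model has (almost) fixated at every position
        if all(pi >= 0.95 or pi <= 0.05 for pi in p):
            break

    return [1 if pi > 0.5 else 0 for pi in p]

best = compact_ga_onemax()
print(sum(best))
```

In the actual distributed setting the two phases would run as separate MapReduce jobs over a partitioned population, which is exactly where the scaling with commodity machines comes from.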

Related posts:

  1. Scaling Genetic Algorithms using MapReduce
  2. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
  3. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre

Soaring the Clouds with Meandre


You may find the slide deck and the abstract for the presentation we delivered today at the “Data-Intensive Research: how should we improve our ability to use data” workshop in Edinburgh.

Abstract

This talk will focus on a highly scalable data-intensive infrastructure being developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, and will introduce current research efforts to tackle the challenges presented by big data. These efforts include exploring potential ways to integrate cloud computing concepts (such as Hadoop or Meandre) with traditional HPC technologies and assets. These architecture models contrast significantly, but they can be leveraged by building cloud conduits that connect these resources to provide even greater flexibility and scalability on demand. Orchestrating the physical computational environment requires innovative and sophisticated software infrastructure that can transparently take advantage of the functional features and negotiate the constraints imposed by this diversity of computational resources. Research conducted during the development of the Meandre infrastructure has led to the production of an agile conductor able to leverage the particular advantages of this physical diversity. It can also be implemented as services and/or in the context of another application, which then benefits from its reusability, flexibility, and high scalability. Some example applications and an introduction to the data-intensive infrastructure architecture will be presented to provide an overview of the diverse scope of Meandre usages. Finally, a case will be presented showing how software developers and system designers can easily transition to these new paradigms to address the primary data-deluge challenges and soar to new heights with extreme application scalability using cloud computing concepts.

Related posts:

  1. Meandre: Semantic-Driven Data-Intensive Flows in the Clouds
  2. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
  3. [BDCSG2008] Clouds and ManyCores: The Revolution (Dan Reed)

Fast REST API prototyping with Crochet and Scala


I just finished committing the last changes to Crochet and tagged version 0.1.4, now publicly available on GitHub (http://github.com/xllora/Crochet). Also, feel free to visit the issues page in case you run into questions/problems/bugs.

Motivation

Crochet is a lightweight web framework oriented to rapid prototyping of REST APIs. If you are looking for a Rails-like framework written in Scala, please take a look at Lift (http://liftweb.net/) instead.

Crochet targets quick prototyping of REST APIs, relying on the flexibility of the Scala language. The initial ideas for Crochet were inspired by Gabriele Renzi’s post on creating the STEP picoframework with Scala, and by the need to quickly prototype APIs for pilot projects. Crochet also provides mechanisms to hide the repetitive tasks involved with default responses and authentication/authorization, piggybacking on the mechanics provided by application servers.

Who uses Crochet?

Crochet was born from the need to quickly prototype REST APIs that expose legacy code written in Java. I have been actively using Crochet to provide REST APIs for a variety of projects developed at the National Center for Supercomputing Applications. One of the primary adopters and movers of Crochet is the Meandre Infrastructure for data-intensive computing developed under the SEASR project.

Crochet in 2 minutes

Before you start, please check that you have Scala installed on your system. You can find more information on how to get Scala up and running here.

  1. Get the latest Crochet jar from the Downloads section at GitHub and the third party dependencies.
  2. Copy the following code into a file named hello-world.scala.
    import crochet._

    new Crochet {
      get("/message") {
        <html>
          <head><title>Hello World</title></head>
          <body><h1>Hello World!</h1></body>
        </html>
      }
    } on 8080
  3. Start your server by running the following command (please change the version numbers if needed).
    $ scala -cp crochet-0.1.4.jar:crochet-3dparty-libraries-0.1.X.jar hello-world.scala
  4. You now have your first Crochet API up and running. You can check that it works by opening your browser and pointing it to http://localhost:8080/message; you should get the message Hello World! back.

Where to go from here?

You will find more information on the Crochet wiki at GitHub. The wiki contains basic information such as a QuickStart guide (which also covers how to deal with static content), descriptions of the basic concepts used in Crochet, and several examples that can get you up and running fast.

Related posts:

  1. Meandre 2.0 Alpha Preview = Scala + MongoDB
  2. Meandre is going Scala
  3. Fast mutation implementation for genetic algorithms in Python

GECCO 2010 Submission Deadline (Extended)


If you are planning to submit a paper for the 2010 Genetic and Evolutionary Computation Conference, the deadline is January 13, 2010 (and now extended to January 27th). You can find more information at the GECCO 2010 calendar site.

Related posts:

  1. GECCO 2009 paper submission deadline extended till January 28
  2. GECCO 2007 deadline extended
  3. GECCO-2006 submissions deadline extended to February 1st

Meandre is going Scala


After quite a bit of experimenting with different alternatives, Meandre is moving to Scala. Scala is a general-purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way. This is not a radical move, but a gradual one, as I start to revisit the infrastructure for the next major release. Scala also generates code for the JVM, making mixing and matching trivial. I started fiddling around with Scala back when I began developing Meandre during the summer of 2007; however, I fell back to Java since that was what most people in the group were comfortable with. I was fascinated by Scala’s fusion of object-oriented and functional programming. Time went by, and the codebase has grown to a point where I can no longer stand cutting through the weeds of Java when I have to extend the infrastructure or fix bugs, not to mention its verbosity even for trivial code.

This summer I decided to go on a quest to get myself out of the woods. I do not mind relying on the JVM and its large collection of libraries, but I would also like to get my sanity back. Yes, I tested some of the usual suspects for the JVM (Jython, JRuby, Clojure, and Groovy), but none was quite what I wanted. For instance, I wrote most of the Meandre infrastructure services using Jython (much more concise than Java), but I was still not quite happy to jump on that boat. Clojure is also interesting (functional programming), but it would be hard to justify moving the group to it, since not everybody may feel comfortable with a pure functional language. I also toyed with some not-so-usual ones like Erlang and Haskell but, again, ended up with no real argument that could justify such a decision.

So, as I had started doing back in 2007, I went back to my original idea of using Scala and its mixed object-oriented and functional programming paradigm. To test it seriously, I started developing the distributed execution engine for Meandre in Scala using its Erlang-inspired actors. And, boom, suddenly I found myself spending more time thinking than writing/debugging threaded/networking code :D . Yes, I regret my 2007 decision not to run with my original intuition, but better late than never. With a working seed of the distributed engine built and tested (did I mention that ScalaCheck and Specs are really powerful tools for behavior-driven development?), I finally decided to start gravitating the Meandre infrastructure development effort from Java to Scala (did I mention that Scala is Martin Odersky’s child?). Yes, such a decision has some impact on my colleagues, but I envision that the benefits will eventually outweigh the initial resistance and steep learning curve. At least nobody jumped out of the window during the last two group meetings while I presented the key elements of Scala and demonstrated how concise and elegant it made the first working seed of the distributed execution engine :D . We even got into discussions about the benefits of using Scala if it delivered everything I showed. I am lucky to work with such smart guys. If you are curious, you can take a peek at the distributed execution engine (a.k.a. Snowfield) at SEASR’s Fisheye.

Oh, one last thing. Are you using Atlassian’s Fisheye? Do you want syntax highlighting for Scala? I tweaked the Java definitions to make it highlight Scala code. Remember to drop the scala.def file in the $FISHEYE_HOME/syntax directory and add an entry to filename.map to make it highlight anything with the extension .scala.

Related posts:

  1. Fast REST API prototyping with Crochet and Scala
  2. Meandre: Semantic-Driven Data-Intensive Flow Engine
  3. Meandre Infrastructure 1.4 RC1 tagged

Scaling Genetic Algorithms using MapReduce


Below you may find the abstract and the link to the technical report of the paper entitled “Scaling Genetic Algorithms using MapReduce”, which will be presented next month at the Ninth International Conference on Intelligent Systems Design and Applications (ISDA 2009) by Verma, A., Llorà, X., Campbell, R.H., and Goldberg, D.E.

Abstract: Genetic algorithms (GAs) are increasingly being applied to large-scale problems. The traditional MPI-based parallel GAs do not scale very well. MapReduce is a powerful abstraction developed by Google for building scalable and fault-tolerant applications. In this paper, we mould genetic algorithms into the MapReduce model. We describe the algorithm design and implementation of GAs on Hadoop, the open-source implementation of MapReduce. Our experiments demonstrate convergence and scalability up to 10^5-variable problems. Adding more resources would enable us to solve even larger problems without any changes to the algorithms and implementation.
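The core idea of moulding a GA into MapReduce is that fitness evaluation is embarrassingly parallel (the map phase), while selection and variation aggregate the scored individuals (the reduce phase). The following Python sketch simulates that split in a single process on OneMax; it is my own illustration, not the paper’s Hadoop code, and all names and parameters are hypothetical.

```python
import random

def map_phase(population):
    # Map: each mapper evaluates one individual, emitting (individual, fitness).
    # OneMax fitness is simply the count of 1-bits.
    return [(ind, sum(ind)) for ind in population]

def reduce_phase(scored, rng):
    # Reduce: tournament selection, single-point crossover, and one-bit
    # mutation produce the next generation from the scored individuals.
    def tournament():
        a, b = rng.sample(scored, 2)
        return a[0] if a[1] >= b[1] else b[0]

    n = len(scored[0][0])
    next_gen = []
    while len(next_gen) < len(scored):
        p1, p2 = tournament(), tournament()
        cut = rng.randrange(1, n)          # single-point crossover
        child = p1[:cut] + p2[cut:]
        i = rng.randrange(n)               # flip one random bit
        child[i] = 1 - child[i]
        next_gen.append(child)
    return next_gen

def run(n_bits=20, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        population = reduce_phase(map_phase(population), rng)
    return max(sum(ind) for ind in population)

print(run())
```

On a real Hadoop deployment each generation becomes a MapReduce job, with mappers scoring partitions of the population in parallel, which is what lets the approach scale with the number of commodity machines.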

The draft of the paper can be downloaded as IlliGAL TR. No. 2009007. For more information see the IlliGAL technical reports web site.

Related posts:

  1. Scaling eCGA Model Building via Data-Intensive Computing
  2. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
  3. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre