GECCO 2011 Submission Deadline: January 26, 2011

The Genetic and Evolutionary Computation Conference (GECCO-2011) is now inviting paper submissions. GECCO 2011 will be held in Dublin, Ireland, from July 12th to July 16th. The full text of the call for papers can be found here. More information about GECCO 2011 can be found on the conference website, Twitter, and Facebook. The paper submission deadline is January 26, 2011.

GECCO 2011: Call for Papers

The Genetic and Evolutionary Computation Conference (GECCO-2011) is now inviting paper submissions for the conference to be held in Dublin, Ireland, July 12-16, 2011. GECCO-2011 will present the latest high-quality results in the growing field of genetic and evolutionary computation. The complete call for papers for GECCO 2011 can be found here. More information about GECCO 2011 can be found on the conference website, Twitter, and Facebook. The paper submission deadline is January 26, 2011.

GECCO 2011 Competitions

The GECCO 2011 website has been updated with detailed information of this year’s competitions. Interested? Here is the list of open competitions:

  • Demolition derby
  • Evolutionary art
  • GPUs for genetic and evolutionary computation
  • Simulated car racing championship
  • Visualizing evolution

GECCO 2011: Call for Workshop Proposals

The call for workshop proposals for GECCO 2011 is out. The GECCO-2011 program committee invites proposals for workshops to be held in conjunction with the 2011 Genetic and Evolutionary Computation Conference (GECCO-2011) in Dublin, Ireland, July 12-16. The deadline for proposals is November 8th. More info is available in the online call for workshop proposals document.

Parallel and Distributed Computational Intelligence book is out for pre-order

“Parallel and Distributed Computational Intelligence”, edited by Francisco Fernández de Vega & Erick Cantú-Paz and published by Springer, is out for pre-order. The first chapter, “When Huge is Routine: Scaling Genetic Algorithms and Estimation of Distribution Algorithms via Data-Intensive Computing”, was written with coauthors Abhishek Verma, Roy Campbell, and David E. Goldberg, and describes how data-intensive computing can help push the size of problems that GAs and EDAs can address. You may find the abstract of the book below.

Abstract:

The growing success of biologically inspired algorithms in solving large and complex problems has spawned many interesting areas of research. Over the years, one of the mainstays in bio-inspired research has been the exploitation of parallel and distributed environments to speedup computations and to enrich the algorithms. From the early days of research on bio-inspired algorithms, their inherently parallel nature was recognized and different parallelization approaches have been explored. Parallel algorithms promise reductions in execution time and open the door to solve increasingly larger problems. But parallel platforms also inspire new bio-inspired parallel algorithms that, while similar to their sequential counterparts, explore search spaces differently and offer improvements in solution quality.

The objective in editing this book was to assemble a sample of the best work in parallel and distributed biologically inspired algorithms. The editors invited researchers in different domains to submit their work. They aimed to include diverse topics to appeal to a wide audience. Some of the chapters summarize work that has been ongoing for several years, while others describe more recent exploratory work. Collectively, these works offer a global snapshot of the most recent efforts of bioinspired algorithms’ researchers aiming at profiting from parallel and distributed computer architectures—including GPUs, Clusters, Grids, volunteer computing and p2p networks as well as multi-core processors. This volume will be of value to a wide set of readers, including, but not limited to specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to figure out new paths towards the future of computational intelligence.

Meandre 2.0 Alpha Preview = Scala + MongoDB

A lot of water has gone under the bridge since the first release of the Meandre 1.4.X series. In January I went back to the drawing board and started sketching what was going to be the 1.5.X series. The slide deck I put together at the time is an extended record of my thoughts during that process. As usual, I started collecting feedback from people using 1.4.X in production: things that worked, things that needed improvement, things that were just plain overcomplicated. The hot recurrent topics raised by people using 1.4.X can be summarized as:

  • Complex execution concurrency model based on traditional semaphores written in Java (mostly my maintenance nightmare when changes need to be introduced)
  • Server performance bounded by JENA's persistent model implementation
  • State caching on individual servers to boost performance, which increases the complexity of single-image cluster deployments
  • Cloud-deployable infrastructure, but not cloud-friendly infrastructure

As I mentioned, these elements were the main targets for the 1.5.X series. However, as the redesign moved forward, the new version represented a radical departure from the 1.4.X series and eventually turned out to become the 2.0 Alpha version described here. The main changes that forced this transition are:

  • Cloud-friendly infrastructure required rethinking of the core functionalities
  • Drastic redesign of the back-end state storage
  • Revisited flow execution engine to support distributed flow execution
  • Changes on the API that render returned JSON documents incompatible with 1.4.X

Meandre 2.0 (currently already available in the SVN trunk) has been rewritten from scratch using Scala. That decision was motivated by the desire to benefit from the Actor model provided by Scala (modeled after Erlang's actors). Such a model greatly simplifies the mechanics of the infrastructure, and it also laid the basis for Snowfield (the effort to create a scalable distributed flow execution engine for Meandre flows). Also, Scala's expressiveness has greatly reduced the code base size (the 2.0 code base is roughly 1/3 of the size of the 1.4.X series), greatly simplifying the maintenance the infrastructure will require as we move forward.
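To give a flavor of why the actor model removes the semaphore bookkeeping mentioned above, here is a minimal mailbox-based actor sketch. This is illustrative code only, not Meandre or Snowfield source: all mutable state is confined to a single thread that drains a message queue, so no explicit locks are needed around it.

```scala
import java.util.concurrent.LinkedBlockingQueue

// A minimal mailbox-based "actor": the counter is touched only by the
// actor's own thread, so senders never need semaphores or locks.
class CounterActor {
  private val mailbox = new LinkedBlockingQueue[String]()
  private var count = 0 // confined to the worker thread

  private val worker = new Thread(() => {
    var running = true
    while (running) {
      mailbox.take() match {
        case "inc"  => count += 1
        case "stop" => running = false
        case _      => // ignore unknown messages
      }
    }
  })
  worker.start()

  def !(msg: String): Unit = mailbox.put(msg)   // asynchronous send
  def result(): Int = { worker.join(); count }  // valid once "stop" is processed
}

object ActorDemo {
  def main(args: Array[String]): Unit = {
    val counter = new CounterActor
    (1 to 100).foreach(_ => counter ! "inc")
    counter ! "stop"
    println(counter.result()) // prints 100
  }
}
```

Scala's actor library (and later Akka) packages exactly this mailbox-plus-single-consumer discipline behind a richer API, which is what makes it attractive as the concurrency backbone of an infrastructure like this.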

The second big change that pulled the 2.0 Alpha trigger was the redesign of the back-end state storage. The 1.4.X series relied heavily on the relational storage for persistent RDF models provided by JENA. For performance reasons, JENA caches the model in memory and mostly assumes ownership of it. Hence, if you want to provide a single-image Meandre cluster you need to inject cache-coherence mechanics into JENA, greatly increasing the complexity. Also, the relational implementation relies on mapping a model into a table and a triple into a row (this is a bit of a simplification). That implies that a large number of SQL statements needs to be generated to update models, heavily taxing the relational storage whenever changes to user repository data need to be introduced.
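The cost of the triple-per-row mapping is easy to see with a toy sketch. The schema below is hypothetical (it is not JENA's actual layout), but it shows the shape of the problem: an update touching N triples of a model fans out into N SQL statements.

```scala
// Illustrative sketch of the triple-per-row cost; the table layout is an
// assumption for illustration, not JENA's real schema.
object TripleStoreSketch {
  case class Triple(subject: String, predicate: String, obj: String)

  // One INSERT per triple: updating a model with N triples emits N statements.
  def updateSql(model: String, triples: Seq[Triple]): Seq[String] =
    triples.map(t =>
      s"INSERT INTO triples (model, s, p, o) " +
        s"VALUES ('$model', '${t.subject}', '${t.predicate}', '${t.obj}');")

  def main(args: Array[String]): Unit = {
    val triples = (1 to 1000).map(i => Triple(s"flow$i", "rdf:type", "meandre:Flow"))
    println(updateSql("user_repo", triples).size) // prints 1000
  }
}
```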

An ideal cloud-friendly Meandre infrastructure should not maintain state (neither voluntarily nor as a result of the JENA back end). Thus, a fast and scalable back-end storage would allow infrastructure servers to maintain no state while still providing the appearance of a single-image cluster. After evaluating the different alternatives, their community support, and their development roadmaps, the only option left was MongoDB. Its setup simplicity for small installations and its ability to easily scale to large installations (including cloud-deployed ones) made MongoDB the candidate to maintain state for Meandre 2.0. This is quite a departure from the 1.4.X series, where you had the choice to store state via JENA on an embedded Derby or an external MySQL server.
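The stateless-server idea can be sketched as follows. This is an assumed design for illustration, not actual Meandre code: servers read and write only through a shared store interface, so any server can answer any request and the cluster behaves as a single image. An in-memory map stands in for what would be a MongoDB collection in the real deployment.

```scala
import scala.collection.concurrent.TrieMap

// Shared-store abstraction: in Meandre 2.0 terms, the implementation
// would be backed by MongoDB; the in-memory version is a stand-in.
trait StateStore {
  def put(key: String, value: String): Unit
  def get(key: String): Option[String]
}

class InMemoryStore extends StateStore {
  private val data = TrieMap[String, String]()
  def put(key: String, value: String): Unit = data.put(key, value)
  def get(key: String): Option[String] = data.get(key)
}

// Servers hold no local state of their own: swap one out and nothing is lost.
class Server(store: StateStore) {
  def handleWrite(k: String, v: String): Unit = store.put(k, v)
  def handleRead(k: String): Option[String] = store.get(k)
}

object ClusterDemo {
  def main(args: Array[String]): Unit = {
    val shared = new InMemoryStore
    val (a, b) = (new Server(shared), new Server(shared))
    a.handleWrite("flow/1", "running")
    println(b.handleRead("flow/1")) // prints Some(running): visible from any server
  }
}
```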

A final note on the building blocks that made the 2.0 series possible. Two other side projects were started to support the development of what will become the Meandre 2.0.X series:

  1. Crochet: Crochet aims to help quickly prototype REST APIs, relying on the flexibility of the Scala language. The initial ideas for Crochet were inspired by Gabriele Renzi's post on creating a picoframework with Scala (see http://www.riffraff.info/2009/4/11/step-a-scala-web-picoframework) and the need to quickly prototype APIs for pilot projects. Crochet also provides mechanisms to hide the repetitive tasks involved in default responses and authentication/authorization, piggybacking on the mechanics provided by application servers.
  2. Snare: Snare is a coordination layer for distributed applications written in Scala that relies on MongoDB to implement its communication layer. Snare implements a basic heartbeat system and a simple notification mechanism (peer-to-peer and broadcast communication). Snare relies on MongoDB to track heartbeats and notification mailboxes.
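The picoframework style that inspired Crochet is easy to demonstrate with Scala's pattern matching. The toy router below is an illustration of the idea only, not Crochet's actual API: HTTP method and path segments are matched as ordinary values, so a new endpoint is just a new case.

```scala
// Toy picoframework-style router (illustrative, not Crochet's real API):
// dispatch by pattern-matching on (method, path segments).
object PicoRouter {
  def route(method: String, path: String): (Int, String) =
    (method, path.stripPrefix("/").split("/").toList) match {
      case ("GET", List("ping"))      => (200, "pong")
      case ("GET", List("flows", id)) => (200, s"flow $id")  // path capture
      case _                          => (404, "not found")
    }

  def main(args: Array[String]): Unit = {
    println(route("GET", "/ping"))     // prints (200,pong)
    println(route("GET", "/flows/42")) // prints (200,flow 42)
    println(route("POST", "/nope"))    // prints (404,not found)
  }
}
```

Because routes are plain partial-function cases, cross-cutting concerns such as default responses or auth checks can be layered around the match, which is roughly the convenience Crochet packages up.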

Minor Update

If you used to check my Twitter stream in the center column at the bottom of my blog, you may have noticed that I have just replaced it with my Google Buzz profile stream instead. Yes, you can still see my Twitter activity, but you will also be able to (1) see aggregated content/activity coming from other sources in one place, (2) subscribe to the stream, and (3) comment on each of the entries. As I said, a minor update to improve its functionality a bit more.

IWLCS 2010 – Discussion session on LCS / XCS(F)

I just got an email from Martin Butz about a discussion session being planned for IWLCS 2010 and his request to pass it along.

Hope all is well and you are going to attend GECCO this year.

Regardless if you attend or not:

Jaume asked me to lead a discussion session on

“LCS representations, operators, and scalability – what is next?”

… or similar during IWLCS… Basically everything besides datamining, because there will be another session on that topic.

So, I am sure you all have some issues in mind that you think should be tackled / addressed / discussed at the workshop and in the near future.

Thus, I would be very happy to receive a few suggestions from your side – anything is welcome – I will then compile the points raised in a few slides to try and get the discussion going at the workshop.

Thank you for any feedback you can provide.

Looking forward to seeing you soon!

Martin

P.S.: Please feel free to also forward this message or tell me, if you think this Email should be still sent to other people…
—-

PD Dr. Martin V. Butz <butz@psychologie.uni-wuerzburg.de>

Department of Psychology III (Cognitive Psychology)
Roentgenring 11
97070 Wuerzburg, Germany
http://www.coboslab.psychologie.uni-wuerzburg.de/people/martin_v_butz/
http://www.coboslab.psychologie.uni-wuerzburg.de
Phone: +49 (0)931 31 82808
Fax:    +49 (0)931 31 82815

LCS and Software Development

“On the Road to Competence” is a slide deck by Jurgen Appelo with interesting analogies between learning classifier systems and software development. Definitely worth taking a look at it.

Related posts:

  1. NIGEL 2006 Part II: Dasgupta vs. Booker
  2. Large Scale Data Mining using Genetics-Based Machine Learning
  3. Software for fast rule matching using vector instructions