Beyond evolutionary trees

Abstract

In Computational Biology, the notion of phylogeny has become synonymous with tree-like evolution. Recent advances in the Life
Sciences have suggested that evolution has a much more diverse course. In this paper we will survey some of the models that
have been proposed to overcome the limitations of using phylogenies to represent evolutionary histories.

  • Content Type Journal Article
  • DOI 10.1007/s11047-009-9156-6
  • Authors
    • Gianluca Della Vedova, Università degli Studi di Milano-Bicocca Dipartimento di Statistica Milano Italy
    • Riccardo Dondi, Università degli Studi di Bergamo Dipartimento di Scienze dei Linguaggi, della Comunicazione e degli Studi Culturali Bergamo Italy
    • Tao Jiang, University of California at Riverside Department of Computer Science Riverside CA USA
    • Giulio Pavesi, Università degli Studi di Milano Dipartimento di Informatica e Comunicazione Milano Italy
    • Yuri Pirola, Università degli Studi di Milano-Bicocca Dipartimento di Informatica, Sistemistica e Comunicazione Milano Italy
    • Lusheng Wang, City University of Hong Kong Department of Computer Science Kowloon Hong Kong

Petri nets as a framework for the reconstruction and analysis of signal transduction pathways and regulatory networks

Abstract

Petri nets are directed, weighted bipartite graphs that have successfully been applied to the systems biology of metabolic
and signal transduction pathways in modeling both stochastic (discrete) and deterministic (continuous) processes. Here we
exemplify how molecular mechanisms, biochemical or genetic, can be consistently represented in the form of place/transition
Petri nets. We then describe the application of Petri nets to the reconstruction of molecular and genetic networks from experimental
data and their power to represent biological processes with an arbitrary degree of resolution of the subprocesses at the cellular
and the molecular level. Petri nets are executable formal language models that permit the unambiguous visualization of regulatory
mechanisms, and they can be used to encode the results of mathematical algorithms for the reconstruction of causal interaction
networks from experimental time series data.

  • Content Type Journal Article
  • Pages 639-654
  • DOI 10.1007/s11047-009-9152-x
  • Authors
    • Wolfgang Marwan, Magdeburg Centre for Systems Biology (MaCS), Otto-von-Guericke-Universität, Magdeburg, Germany
    • Annegret Wagler, Magdeburg Centre for Systems Biology (MaCS), Otto-von-Guericke-Universität, Magdeburg, Germany
    • Robert Weismantel, Magdeburg Centre for Systems Biology (MaCS), Otto-von-Guericke-Universität, Magdeburg, Germany

8a Diada de les Telecomunicacions a Catalunya

The 8th Telecommunications Summit of Catalunya was held on September 29th, 2009 and organized by the Col.legi Oficial d’Enginyers Tècnics de Telecomunicacions.

The discussion panel “Les noves titulacions TIC i la seva adaptació al mercat laboral” (The new Bologna ICT degrees and their adaptation to the labour market) was moderated by Ramon Ollé, executive president of BES La Salle. My participation raised the question of whether the Bologna process and its new degrees amount to a real change in engineering education. And the answer is: “not necessarily. More efforts should be made to transform engineering education.”

From Galapagos to Twitter: Darwin, Natural Selection, and Web 2.0

Yesterday I was visiting Monmouth College to participate in the Darwinpalooza, which commemorates the 200th anniversary of Charles Darwin’s birth and the 150th anniversary of the publication of On the Origin of Species. After scratching my head about what to present, I came up with quite a mix. You will find the abstract of the talk below, as well as the slides I used.

Abstract: One hundred and fifty years have passed since the publication of Darwin’s world-changing book “On the Origin of Species by Means of Natural Selection”. Darwin’s ideas have proven their power to reach beyond the realm of biology, and their ability to define a conceptual framework that allows us to model and understand complex systems. In the mid 1950s and 60s, the efforts of a scattered group of engineers proved the benefits of adopting an evolutionary paradigm to solve complex real-world problems. In the 70s, the growing presence of computers brought us a new collection of artificial evolution paradigms, among which genetic algorithms rapidly gained widespread adoption. Currently, the Internet has fueled an exponential growth of information and computational resources that is clearly disrupting our perception and forcing us to reevaluate the boundaries between technology and social interaction. Darwin’s ideas can, once again, help us understand such disruptive change. In this talk, I will review the origin of artificial evolution ideas and techniques. I will also show how these techniques are nowadays helping to solve a wide range of problems, from the life sciences to Twitter puzzles, and how high-performance computing can make Darwin’s ideas a routine tool to help us model and understand complex systems.

Related posts:

  1. Challenging lectures on-line at TED
  2. Dusting my Ph.D. thesis off
  3. Scaling Genetic Algorithms using MapReduce

George Dyson to present at U of I

George Dyson, historian and philosopher of science and author of “Darwin Among Machines” will present two talks as a part of the Colloquium Series “Biology and Beyond”.

On September 29th at 7:00 p.m., Dyson will present Darwin Among Machines: From Zoomania to Artificial Life at Loomis 141.

The next day, September 30th at 4:00 p.m., he will present Von Neumann’s Universe: Computers and Beyond at 100 Gregory Hall.

The poster can be found here.

A discrete Petri net model for cephalostatin-induced apoptosis in leukemic cells

Abstract

Understanding the mechanisms involved in apoptosis has been an area of extensive study due to its critical role in the development
and homeostasis of multi-cellular organisms. Our special interest lies in understanding the apoptosis of tumor cells which
is mediated by novel potential drugs. Cephalostatin 1 is a marine compound that can induce apoptosis in leukemic cells in
a dose- and time-dependent manner even at nano-molar concentrations using a recently discovered pathway that excludes the
receptor-mediated pathway and which includes both the mitochondrial and endoplasmic reticulum pathways (Dirsch et al., Cancer
Res 63:8869–8876, 2003; López-Antón et al., J Biol Chem 28:33078–33086, 2006). In this paper, the methods and tools of Petri net theory are used to construct, analyze, and validate a discrete Petri
net model for cephalostatin 1-induced apoptosis. Based on experimental results and a literature search, we constructed a discrete
Petri net consisting of 43 places and 59 transitions. Standard Petri net analysis techniques such as structural and invariant
analyses and a recently developed modularity analysis technique using maximal abstract dependent transition sets (ADT sets)
were employed. Results of these analyses revealed model consistency with known biological behavior. The sub-modules represented
by the ADT sets were compared with the functional modules of apoptosis identified by Alberghina and Colangelo (BMC Neurosci
7(Suppl 1):S2, 2006).

  • Content Type Journal Article
  • Pages 993-1015
  • DOI 10.1007/s11047-009-9153-9
  • Authors
    • Eva M. Rodriguez, Department of Mathematics, University of Asia and the Pacific, Pasig City, Philippines
    • Anita Rudy, Department of Pharmacy, Center for Drug Research, Ludwig-Maximilians University, Munich, Germany
    • Ricardo C. H. del Rosario, Institute of Mathematics, University of the Philippines Diliman, Quezon City, Philippines
    • Angelika M. Vollmar, Department of Pharmacy, Center for Drug Research, Ludwig-Maximilians University, Munich, Germany
    • Eduardo R. Mendoza, Department of Computer Science, University of the Philippines Diliman, Quezon City, Philippines

Petri net models for the semi-automatic construction of large scale biological networks

Abstract

For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks.
During the last 15 years, Petri nets have attracted more and more attention as a means to solve this key problem. Judging from
the published papers, hybrid functional Petri nets appear to be an adequate method for modeling complex biological networks.
Today, Petri net models of biological networks are built manually, by drawing places, transitions, and arcs with the mouse.
Biological data integration, based on the relevant molecular databases and information systems, is therefore an essential
step in constructing biological networks. In this paper, we will motivate the application of Petri nets for modeling and simulation
of biological networks. Furthermore, we will present a type of access to relevant metabolic databases such as KEGG, BRENDA,
etc. Based on this integration process, the system supports semi-automatic generation of the correlated hybrid Petri net model.
A case study of a cardio-disease-related gene-regulated biological network is also presented. MoVisPP is available at http://agbi.techfak.uni-bielefeld.de/movispp/.

  • Content Type Journal Article
  • Pages 1077-1097
  • DOI 10.1007/s11047-009-9151-y
  • Authors
    • Ming Chen, Bioinformatics Department, College of Life Sciences, Zhejiang University, Zijingang Campus, Hangzhou, 310058 China
    • Sridhar Hariharaputran, Bioinformatics Department, Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany
    • Ralf Hofestädt, Bioinformatics Department, Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany
    • Benjamin Kormeier, Bioinformatics Department, Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany
    • Sarah Spangardt, Bioinformatics Department, Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany

2010 Forum on Philosophy, Engineering & Technology (fPET-2010): Call for papers, 28 Dec 09

fPET-2010, co-organized by IlliGAL lab director, Dave Goldberg, has issued a call for papers:

The 2010 Forum for Philosophy, Engineering & Technology (fPET-2010) to be held 9-10 May 2010 (Sunday Evening-Monday) at the Colorado School of Mines, Golden, CO USA has issued its first call for papers.

Abstracts (500-750 words) are due by 28 December 2009 (Monday) via the fPET-2010 submissions page at www.philengtech.org/submission. The call for papers may be viewed online here or downloaded as a PDF file here.

For more information about the forum contact Diane Michelfelder (michelfelder@macalester.edu) or Dave Goldberg (deg@illinois.edu).

More information is available at the fPET-2010 website at www.philengtech.org.

Reinforcement Learning, Logic and Evolutionary Computation

Drew Mellor is pleased to announce the publication of his new LCS book, Reinforcement Learning, Logic and Evolutionary Computation: A Learning Classifier System Approach to Relational Reinforcement Learning, published by Lambert Academic Publishing (ISBN 978-3-8383-0196-9).

Abstract Reinforcement learning (RL) consists of methods that automatically adjust behaviour based on numerical rewards and penalties. While use of the attribute-value framework is widespread in RL, it has limited expressive power. Logic languages, such as first-order logic, provide a more expressive framework, and their use in RL has led to the field of relational RL. This thesis develops a system for relational RL based on learning classifier systems (LCS). In brief, the system generates, evolves, and evaluates a population of condition-action rules, which take the form of definite clauses over first-order logic. Adopting the LCS approach allows the resulting system to integrate several desirable qualities: model-free and “tabula rasa” learning; a Markov Decision Process problem model; and importantly, support for variables as a principal mechanism for generalisation. The utility of variables is demonstrated by the system’s ability to learn genuinely scalable behaviour – behaviour learnt in small environments that translates to arbitrarily large versions of the environment without the need for retraining.

Liquid: RDF meandering in FluidDB

Meandre (the data-intensive computing infrastructure pushed by NCSA) relies on RDF to describe components, flows, locations and repositories. RDF has become the central piece that makes Meandre’s flexibility and reusability possible. However, one piece still remains largely sketchy and has no clear optimal solution: how can we make it easy for anybody to share, publish, and annotate flows, components, locations and repositories? More importantly, how can that be done in the cloud in an open-ended fashion, allowing anybody to annotate and comment on each of the aforementioned pieces?

The FluidDB trip

During my last summer trip to Europe, Terry Jones (CEO) invited me to visit FluidInfo (based in Barcelona), where I also met Esteve Fernandez (CTO). I had a great opportunity to chat with the masterminds behind an intriguing concept I had run into after a short note I received from David E. Goldberg. FluidDB, the main product being pushed by FluidInfo, is an online collaborative “cloud” database. In FluidInfo’s words:

FluidDB lets data be social. It allows almost unlimited information personalization by individual users and applications, and also between them. This makes it simple to build a wide variety of applications that benefit from cooperation, and which are open to unanticipated future enhancements. Even more importantly, FluidDB facilitates and encourages the growth of applications that leave users in control of their own data.

FluidDB went live as a private alpha last week. The basic concept behind the scenes is simple. FluidDB stores objects. Objects do not belong to anybody. Objects may be “blank” or they may be about something (e.g. http://seasr.org/meandre). You can create as many blank objects as you want. Creating an object with the same about always returns the same object (thus, there will only ever be one object about http://seasr.org/meandre). Once objects exist, things start getting more interesting: you can tag any object with whatever tag you want. For instance, I could tag the http://seasr.org/meandre object with a hosted_by tag and assign that tag a value. FluidDB introduces one last trick: namespaces. For instance, I got xllora, which means the tag I just mentioned would look like xllora/hosted_by. You can create as many nested namespaces under your main namespace as you want. FluidDB also provides mechanisms to control who can query and see the values of the tags you create.
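The mechanics above can be sketched with a tiny in-memory mock. This is illustrative Python only: MiniFluidDB and its methods are made-up names for this sketch, not the real FluidDB API or fdb.py.

```python
import uuid

class MiniFluidDB:
    """A toy, in-memory stand-in for FluidDB's object/tag model (illustration only)."""

    def __init__(self):
        self.objects = {}    # object id -> {"about": ..., "tags": {tag path: value}}
        self.by_about = {}   # about value -> object id

    def create_object(self, about=None):
        # Creating an object with the same about always returns the same object
        if about is not None and about in self.by_about:
            return self.by_about[about]
        oid = str(uuid.uuid4())
        self.objects[oid] = {"about": about, "tags": {}}
        if about is not None:
            self.by_about[about] = oid
        return oid

    def tag(self, oid, path, value=None):
        # Tags live under namespaces, e.g. "xllora/hosted_by"
        self.objects[oid]["tags"][path] = value

db = MiniFluidDB()
a = db.create_object("http://seasr.org/meandre")
b = db.create_object("http://seasr.org/meandre")
assert a == b  # only one object about http://seasr.org/meandre
db.tag(a, "xllora/hosted_by", "FluidDB")
```

The key design point the mock captures is that objects are anonymous and shared, while ownership and access control hang off the namespaced tags, not the objects themselves.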

As you can see, the basic object model and mechanics are very simple. When the alpha went live, FluidDB only provided access via a simple REST-like HTTP API. Within a few days, a crop of client libraries wrapping that API was developed by a dynamic community that gathers on the #fluiddb channel on irc.freenode.net.

You were saying something about RDF

Back to the point. One thing I chatted about with the FluidDB guys was what they thought of the similarities between FluidDB’s object model and RDF. After playing with RDF for a while, the FluidDB model looked awfully familiar, albeit a much simpler and more manageable model than RDF. They did not have much to say about it, and the question got stuck in the back of my mind. So when I got access to the private alpha, I could not help but go down the path of what it would mean to map RDF onto FluidDB. Yes, the simple, straight answer would be to stick serialized RDF into the value of a given tag (e.g. xllora/rdf). However, that option seemed poor, since it would not exploit the social aspect of collaborative annotation provided by FluidDB. So back to the drawing board. What both models have in common: they are both descriptions about something. In RDF, those somethings are the subjects of the triple predicates, whereas in FluidDB they are simply objects. RDF uses properties to qualify objects; FluidDB uses tags. Both let you attach values to qualified objects. Mmh, there you go.

With this idea in mind, I started Liquid, a simple proof-of-concept library that maps RDF onto FluidDB and back. There was only one thing that needed a bit of patching: RDF properties are arbitrary URIs, which could not easily be mapped on top of FluidDB tags, so I took a simple compromise route.

  • RDF subject URIs are mapped onto FluidDB qualified objects via the about tag
  • One FluidDB tag contains all the properties for that object (basically a simple dictionary encoded in JSON)
  • References to other RDF URIs are mapped onto FluidDB object URIs, and vice versa

Let’s make it a bit more chewable with a simple example.

<?xml version="1.0"?>
 
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cd="http://www.recshop.fake/cd#">
 
<rdf:Description
rdf:about="http://www.recshop.fake/cd/Empire Burlesque">
  <cd:artist>Bob Dylan</cd:artist>
 </rdf:Description>
 
</rdf:RDF>

The above RDF represents a single triple:

http://www.recshop.fake/cd/Empire Burlesque	http://www.recshop.fake/cd#artist	   "Bob Dylan"

This triple could be mapped onto FluidDB by creating one qualified FluidDB object and adding the proper tags. The example below shows how to do so, using Python’s fdb.py client library by Nicholas J. Radcliffe.

import fdb,sys
if sys.version_info < (2, 6):
    import simplejson as json
else:
    import json
 
__RDF_TAG__ = 'rdf'
__RDF_TAG_PROPERTIES__  = 'rdf_properties'
__RDF_TAG_MODEL_NAME__ = 'rdf_model_name'
 
#
# Initialize the FluidDB client library
#
f = fdb.FluidDB()
#
# Create the tags (if they exist, this won't hurt)
#
f.create_abstract_tag(__RDF_TAG__)
f.create_abstract_tag(__RDF_TAG_PROPERTIES__)
f.create_abstract_tag(__RDF_TAG_MODEL_NAME__)
#
# Create the subject object of the triple
#	
o = f.create_object('http://www.recshop.fake/cd/Empire Burlesque')
#
# Map RDF properties
#
properties = {'http://www.recshop.fake/cd#artist':['Bob Dylan']}
#
# Tag the object as RDF aware, properties available, and to which model/named graph 
# it belongs
#
f.tag_object_by_id(o.id, __RDF_TAG__)
f.tag_object_by_id(o.id,__RDF_TAG_PROPERTIES__,value=json.dumps(properties))
f.tag_object_by_id(o.id, __RDF_TAG_MODEL_NAME__, value='test_dummy')

Running along with this basic idea, I quickly stitched together a simple library (Liquid) that allows ingestion and retrieval of RDF from FluidDB. It is still very rudimentary and may not map all possible RDF properly, but it is a working proof of concept that it is possible to do so.

The Python code above just saves a triple. You can easily retrieve the triple by performing the following operation:

import fdb,sys
if sys.version_info < (2, 6):
    import simplejson as json
else:
    import json
 
__RDF_TAG__ = 'rdf'
__RDF_TAG_PROPERTIES__  = 'rdf_properties'
__RDF_TAG_MODEL_NAME__ = 'rdf_model_name'
 
#
# Initialize the FluidDB client library
#
f = fdb.FluidDB()
#
# Retrieve the annotated objects
#
objs = f.query('has xllora/%s'%(__RDF_TAG__))
#
# Optionally you could retrieve the ones only belonging to a given model by
#
# objs = f.query('has xllora/%s and xllora/%s matches "%s"'%(__RDF_TAG__,__RDF_TAG_MODEL_NAME__,modelname))
#
subs = [f.get_tag_value_by_id(s,'/tags/fluiddb/about') for s in objs]
props_tmp = [f.get_tag_value_by_id(s,'/tags/xllora/'+__RDF_TAG_PROPERTIES__) for s in objs]
props = [json.loads(s[1]) if s[0]==200 else {} for s in props_tmp]

Now subs contains all the subject URIs for the predicates, and props all the dictionaries containing the properties.
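For completeness, the retrieved subjects and property dictionaries can be zipped back into (subject, predicate, object) triples with a plain comprehension. The data below is hard-coded to mirror the Empire Burlesque example, standing in for what the queries above would return:

```python
# Hard-coded stand-ins for the values the FluidDB queries above would return
subs = ['http://www.recshop.fake/cd/Empire Burlesque']
props = [{'http://www.recshop.fake/cd#artist': ['Bob Dylan']}]

# Rebuild (subject, predicate, object) triples from the per-object dictionaries
triples = [(s, pred, obj)
           for s, d in zip(subs, props)
           for pred, objs in d.items()
           for obj in objs]

print(triples)
```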

The bottom line

OK. So, why is this mapping important? Basically, it allows collaborative tagging of the created objects (subjects), enabling a collaborative and social gathering of information alongside the mapped RDF. So, what does it all mean?

It basically means that, unless you need to ingest RDF (where property URIs are not directly mapped and you need to Fluidify/reify them), any data stored in FluidDB is already some form of triplified RDF. Let me explain what I mean by that. Each FluidDB object has a unique URI (e.g. http://fluidDB.fluidinfo.com/objects/4fdf7ff4-f0da-4441-8e63-9b98ed26fc12). Each tag is also uniquely identified by a URI (e.g. http://fluidDB.fluidinfo.com/tags/xllora/rdf_model_name). And finally, each object/tag pair may have a value (e.g. a literal 'test_dummy', or maybe another URI such as http://fluidDB.fluidinfo.com/objects/a0dda173-9ee0-4799-a507-8710045d2b07). If an object/tag pair does not have a value, you can just point it to a no-value URI (or some other convention you like).

Having said that, you now have all the pieces to express FluidDB data as plain, shareable RDF. That basically means getting all the tags for an object, querying their values, and then generating an RDF model by adding the gathered triples. That’s easy. Also, if you align your properties to tags, ingestion becomes just as trivial. I will try to get that piece into Liquid as soon as other issues allow me to do so :D .
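As a sketch of that idea, here is how one could serialize an object/tag/value pair as N-Triples. The URI prefixes follow the examples above; the to_ntriples helper is hypothetical, not part of Liquid or fdb.py:

```python
# URI prefixes as used by FluidDB for objects and tags (per the examples above)
OBJ = "http://fluidDB.fluidinfo.com/objects/"
TAG = "http://fluidDB.fluidinfo.com/tags/"

def to_ntriples(object_id, tags):
    """Serialize a FluidDB object's tags (tag path -> literal value) as N-Triples lines."""
    return "\n".join('<%s%s> <%s%s> "%s" .' % (OBJ, object_id, TAG, path, value)
                     for path, value in tags.items())

print(to_ntriples("4fdf7ff4-f0da-4441-8e63-9b98ed26fc12",
                  {"xllora/rdf_model_name": "test_dummy"}))
```

The object URI becomes the subject, the tag URI the predicate, and the tag value the object of each emitted triple.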

Just to close, let me mention once again a key element of this picture. FluidDB opens the door to a truly cooperative, distributed, and online fluid semantic web. It is one of the first examples of how annotations (a.k.a. metadata) can be easily gathered and used in the “cloud” by the masses. Great job, guys!

Related posts:

  1. Liquid: RDF endpoint for FluidDB
  2. Meandre: Semantic-Driven Data-Intensive Flows in the Clouds
  3. Meandre 1.4.0 final release candidate tagged