Nice documentary from ABC about the science of networks


The new issue of the Edge has a post about the documentary “HOW KEVIN BACON CURED CANCER” from ABC Television in Australia, featuring Duncan Watts, Steven Strogatz, and Albert-Laszlo Barabasi and their research on networks. The video is available here.

Meandre: Semantic-Driven Data-Intensive Flows in the Clouds


Abstract: Data-intensive flow computing allows efficient processing of volumes of data that would otherwise be unapproachable. This paper introduces a new semantic-driven data-intensive flow infrastructure which: (1) provides a robust and transparently scalable solution from a laptop to large-scale clusters, (2) creates a unified solution for batch and interactive tasks in high-performance computing environments, and (3) encourages reusing and sharing components. Building on virtualization and cloud computing techniques, the Meandre infrastructure is able to create and dispose of Meandre clusters on demand, transparently to the end user. This paper also presents a prototype of such a clustered infrastructure and some results obtained using it.

Laboratory Course Presentations

The slides of the presentations given for the laboratory course by Prof. Matteucci and by Ing. Loiacono are available online:

  • Presentation by Ing. Loiacono (PDF)
  • Presentation by Prof. Matteucci (PDF)

Evolutionary Computation in Conceptual Clustering and Tagging


Abstract: Web 2.0 technologies provide users with collaborative workspaces over the Internet. For example, Wikipedia is an open encyclopedia whose articles anyone can edit. YouTube provides spaces where users can share videos and annotations about them. Users can post images on Flickr and collaborate with each other by categorizing them with tags. These contents are created by users’ voluntary activities, which is one of the defining features of Web 2.0. Some Web 2.0 services host well-organized text-based content; Wikipedia, for example, has well-structured content because users voluntarily edit each article to refine it. On the other hand, services such as YouTube and Flickr only have short sentences or a small number of words as annotations. Moreover, these annotations usually differ from user to user, because participants are not in a collaborative setting. As a result, the annotations carry little meaning for the videos and pictures they describe. A system that converts annotations into meaningful text is useful, because writing text requires resources while large numbers of annotations are readily available. The purpose of this thesis is the development of a text builder based on Web 2.0 technology, genetic algorithms, and natural language processing. A human-interaction system is developed for automatically building meaningful tags from annotations. The system consists of two main parts: a conceptual clustering component based on natural language processing and a sentence-creating component based on genetic algorithms. The conceptual clustering component decomposes annotations into phrases carrying the main ideas. The sentence-creating component then builds several tags from these phrases. Finally, the created tags are evaluated by users to find better explanations for the videos and pictures.
Participants are expected to collaborate through these evaluations to create more organized and meaningful tags. The developed system succeeds in creating tags from unstructured annotations through user-machine interactions. The system is applicable to other services that only have annotations as participants’ comments. Because the tags created by this system are meaningful but short, a system that builds longer text as full sentences is left as future work.

An Analysis of Matching in Learning Classifier Systems


Abstract: We investigate rule matching in learning classifier systems for problems involving binary and real inputs. We consider three rule encodings: the widely used character-based encoding, a specificity-based encoding, and a binary encoding used in Alecsys. We compare the performance of the three algorithms both on matching alone and on typical test problems. The results on matching alone show that population generality influences the performance of the matching algorithms based on string representations in different ways: character-based encoding becomes slower and slower as generality increases, while specificity-based encoding becomes faster and faster. The results on typical test problems show that the specificity-based representation can halve the time required for matching, but also that binary encoding is about ten times faster on the most difficult problems. Moreover, we extend specificity-based encoding to real inputs and propose an algorithm that can halve the time required for matching real inputs using an interval-based representation.

Investigating Restricted Tournament Replacement in ECGA for Non-Stationary Environments


Abstract: This paper investigates the incorporation of restricted tournament replacement (RTR) in the extended compact genetic algorithm (ECGA) for solving problems with non-stationary optima. RTR is a simple yet efficient niching method used to maintain diversity in a population of individuals. While the original version of RTR uses Hamming distance to quantify similarity between individuals, we propose an alternative substructural distance to enforce the niches. The ECGA that restarts the search after a change of environment is compared with the approach of maintaining diversity, using both versions of RTR. Results on several dynamic decomposable test problems demonstrate the usefulness of maintaining diversity throughout the run over the approach of restarting the search from scratch at each change. Furthermore, by maintaining diversity no additional mechanisms are required to detect the change of environment, which is typically a problem-dependent and non-trivial task.
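Restricted tournament replacement can be sketched as follows (an illustrative implementation using Hamming distance as in the original RTR, not the paper’s substructural variant; `window_size` and `fitness` are our names):

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length genomes."""
    return sum(x != y for x, y in zip(a, b))

def rtr_insert(population, fitness, offspring, window_size, rng=random):
    """Restricted tournament replacement sketch: the offspring competes
    against the most similar of `window_size` randomly sampled individuals
    and replaces it only if the offspring is fitter. Because replacement
    is restricted to similar individuals, distinct niches in the
    population are preserved."""
    window = rng.sample(range(len(population)), window_size)
    closest = min(window, key=lambda i: hamming(population[i], offspring))
    if fitness(offspring) > fitness(population[closest]):
        population[closest] = offspring

pop = ["0000", "0011", "1100", "0111"]
rtr_insert(pop, lambda s: s.count("1"), "1111", window_size=4)
# the most similar individual, "0111", is replaced by the fitter "1111"
```

The paper’s substructural distance would replace `hamming` with a similarity measure defined over the linkage groups that ECGA’s model identifies.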

Self-Adaptive Mutation in XCSF


Abstract: Recent advances in XCS technology have shown that self-adaptive mutation can be highly useful to speed up the evolutionary progress in XCS. Moreover, recent publications have shown that XCS can also be successfully applied to challenging real-valued domains including data mining, function approximation, and clustering. In this paper, we combine these two advances and investigate self-adaptive mutation in the XCS system for function approximation with hyperellipsoidal condition structures, referred to as XCSF in this paper. It has been shown that XCSF solves function approximation problems with an accuracy, noise robustness, and generalization capability comparable to other statistical machine learning techniques, and that XCSF outperforms simple clustering techniques to which linear approximations are added. This paper shows that the right type of self-adaptive mutation can further improve XCSF’s performance, solving problems more reliably and with less dependence on parameter settings. We analyze various types of self-adaptive mutation and show that XCSF with self-adaptive mutation ranges, differentiated for the separate classifier condition values, yields the most robust performance results. Future work may further investigate the properties of the self-adaptive values and may integrate advanced self-adaptation techniques.
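The general idea of self-adaptive mutation can be sketched as follows (a simplified illustration for binary genomes, not the paper’s hyperellipsoidal XCSF conditions; `tau` and the rate bounds are our assumptions): each classifier carries its own mutation rate, which is itself perturbed before being applied, so effective rates are inherited and tuned by evolution rather than set by hand.

```python
import math
import random

def self_adapt_mutation(genome, mu, rng=random, tau=0.5,
                        mu_min=1e-3, mu_max=0.5):
    """Self-adaptive mutation sketch: the classifier's own mutation
    rate `mu` is first perturbed log-normally, clamped to sane bounds,
    and the new rate is then used to flip the genome's bits. The child
    inherits the adapted rate, so well-performing rates propagate."""
    mu = min(mu_max, max(mu_min, mu * math.exp(tau * rng.gauss(0.0, 1.0))))
    child = [(1 - g) if rng.random() < mu else g for g in genome]
    return child, mu
```

The paper’s differentiated mutation ranges would extend this by giving each condition value of a classifier its own adapted rate instead of a single shared `mu`.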