Evolution of internal dynamics for neural network nodes

Abstract  Most artificial neural networks have nodes that apply a simple static transfer function, such as a sigmoid or Gaussian, to their accumulated inputs. This contrasts with biological neurons, whose transfer functions are dynamic and driven by a rich internal structure. Our artificial neural network approach, which we call state-enhanced neural networks, uses nodes with dynamic transfer functions based on an n-dimensional real-valued internal state. This internal state provides the nodes with memory of past inputs and computations. The state update rules, which determine the internal dynamics of a node, are optimized by an evolutionary algorithm to fit a particular task and environment. We demonstrate the effectiveness of the approach in comparison to certain types of recurrent neural networks using a suite of partially observable Markov decision processes as test problems. These problems involve both sequence detection and simulated mice in mazes, and include four advanced benchmarks proposed by other researchers.

  • Content Type Journal Article
  • DOI 10.1007/s12065-009-0017-0
  • Authors
    • David Montana, BBN Technologies, 10 Moulton Street, Cambridge, MA 02138, USA
    • Eric VanWyk, BBN Technologies, 10 Moulton Street, Cambridge, MA 02138, USA
    • Marshall Brinn, BBN Technologies, 10 Moulton Street, Cambridge, MA 02138, USA
    • Joshua Montana, BBN Technologies, 10 Moulton Street, Cambridge, MA 02138, USA
    • Stephen Milligan, BBN Technologies, 10 Moulton Street, Cambridge, MA 02138, USA
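
As a rough illustration of the idea, the sketch below shows a node whose transfer function depends on an n-dimensional internal state that is updated at every step. The linear-plus-tanh update rule and all names here (StatefulNode, A, B, w) are our own assumptions; in the paper it is the state update rules themselves that the evolutionary algorithm optimizes for the task at hand.

```python
import numpy as np

class StatefulNode:
    """Node with an n-dimensional internal state that serves as memory of past inputs."""

    def __init__(self, n_state, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.state = np.zeros(n_state)
        # These parameters stand in for the evolvable state-update rule.
        self.A = rng.normal(scale=0.1, size=(n_state, n_state))  # state -> state
        self.B = rng.normal(scale=0.1, size=n_state)             # input -> state
        self.w = rng.normal(scale=0.1, size=n_state)             # state -> output

    def step(self, accumulated_input):
        # Dynamic transfer function: the output depends on past inputs via the state.
        self.state = np.tanh(self.A @ self.state + self.B * accumulated_input)
        return float(np.tanh(self.w @ self.state))

node = StatefulNode(n_state=4)
# Identical inputs can yield different outputs because the state carries history.
outputs = [node.step(x) for x in (1.0, 0.0, 0.0)]
```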

Evolutionary computing and boron

Today’s New York Times features an article describing a new discovery about the element boron, made in part with evolutionary computing. Full details of the discovery are provided in a January 29, 2009 letter in Nature (subscription required). The authors (Oganov et al.) used a special-purpose evolutionary algorithm called USPEX, which is not really described in the Nature piece but is described elsewhere, including here.

Contents of Volume 10, Number 1

From the Introduction to Volume 10, Number 1:
The present issue includes three full research articles and two book reviews.
In “Scaling of Program Functionality” W. B. Langdon provides a novel theoretical analysis of the relations between size and functionality for several classes of programs. Many aspects of his analysis apply to all possible systems that search for computer programs, but Dr. Langdon also describes specific implications of his analysis for genetic programming and provides experimental confirmation of his results.
In “An improved representation for evolving programs” M. S. Withall, C. J. Hinde, and R. G. Stone describe a new representation for evolving programs that combines features of traditional linear and tree-based representations. They present the results of several experiments using their new representation and they discuss implications for the scalability of genetic programming to more complex problems.
In “Solution of matrix Riccati differential equation for nonlinear singular system using genetic programming” P. Balasubramaniam and A. Vincent Antony Kumar show how genetic programming can be used to solve differential equations of a particularly important class. They compare the genetic programming approach to the traditional Runge–Kutta method and they provide experimental confirmation of efficiency improvements.
The book reviews in this issue, edited by W. B. Langdon, cover two edited volumes: The Mechanical Mind in History, which was edited by P. Husbands, O. Holland and M. Wheeler (reviewed by P. Collet), and Evolutionary Computation in Practice: Studies in Computational Intelligence, which was edited by T. Yu, D. Davis, C. Baydar, and R. Roy (reviewed by L. M. Deschaine).

Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre

Abstract: Data-intensive computing has positioned itself as a valuable programming paradigm for efficiently approaching problems that require processing very large volumes of data. This paper presents a pilot study of how to apply the data-intensive computing paradigm to evolutionary computation algorithms. Two representative cases—selectorecombinative genetic algorithms and estimation of distribution algorithms—are presented, analyzed, and discussed. This study shows that equivalent data-intensive computing evolutionary computation algorithms can be easily developed, providing robust and scalable algorithms for the multicore-computing era. Experimental results show how such algorithms scale with the number of available cores without further modification.
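
As a loose illustration of the underlying idea (using plain Python multiprocessing rather than the Meandre data-flow components the paper actually uses), the sketch below expresses fitness evaluation as a data-parallel map over the population, so the same code scales with the number of available cores. The OneMax-style fitness is a placeholder of our own.

```python
from multiprocessing import Pool, cpu_count
import random

def fitness(individual):
    # Stand-in OneMax-style fitness; the paper targets far larger data volumes.
    return sum(individual)

def evaluate_population(population):
    # Fitness evaluation expressed as a data-parallel map over the population:
    # the same description uses however many cores are available.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    population = [[random.randint(0, 1) for _ in range(64)] for _ in range(200)]
    print(max(evaluate_population(population)))
```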

Acknowledgment

  • Content Type Journal Article
  • DOI 10.1007/s10710-008-9078-6
  • Authors
    • Lee Spector, Hampshire College, School of Cognitive Science, Amherst, MA 01002, USA

HAIS’09: Special session on “Knowledge Extraction based on Evolutionary Learning”

CALL FOR PAPERS
The special session on Knowledge Extraction based on Evolutionary Learning at the 4th Hybrid Artificial Intelligence Systems conference (HAIS 2009) aims to bring together researchers working on knowledge extraction based on evolutionary learning and to discuss new trends in the field.
This special session intends to be a forum for researchers to […]

Genetic-based approach for cue phrase selection in dialogue act recognition

Abstract  Automatic cue phrase selection is a crucial step in designing a dialogue act recognition model using machine learning techniques. The approaches currently in use are based on a specific type of feature selection approach, called ranking approaches. Despite their computational efficiency in high-dimensional domains, they are not optimal with respect to relevance and redundancy. In this paper we propose a genetic-based approach for cue phrase selection which is, essentially, a variable-length genetic algorithm developed to cope with the high dimensionality of the domain. We evaluate the performance of the proposed approach against several ranking approaches. Additionally, we assess its performance for the selection of cue phrases enriched by phrase type and phrase position. The results provide experimental evidence of the ability of the genetic-based approach to handle the drawbacks of the ranking approaches and to exploit cue type and cue position information to improve the selection. Furthermore, we validate the use of the genetic-based approach for machine learning applications. We use selected sets of cue phrases to build a dynamic Bayesian network model for dialogue act recognition. The results show its usefulness for machine learning applications.

  • Content Type Journal Article
  • DOI 10.1007/s12065-008-0016-6
  • Authors
    • Anwar Ali Yahya, University Putra Malaysia, Intelligent System and Robotics Laboratory, Institute of Advanced Technology, 43400 UPM Serdang, Selangor, Malaysia
    • Abd Rahman Ramli, University Putra Malaysia, Intelligent System and Robotics Laboratory, Institute of Advanced Technology, 43400 UPM Serdang, Selangor, Malaysia
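
For illustration only, the sketch below shows one plausible shape of a variable-length chromosome for cue phrase selection, where an individual is simply the set of selected cue phrase indices and so can grow or shrink. The operators and the constant N_FEATURES are our own assumptions, not the operators or settings used in the paper.

```python
import random

N_FEATURES = 1000  # size of the candidate cue phrase vocabulary (hypothetical)

def random_individual(max_len=30):
    # A variable-length individual: the set of selected cue phrase indices.
    return random.sample(range(N_FEATURES), random.randint(1, max_len))

def crossover(a, b):
    # Cut-and-splice crossover: children may be shorter or longer than either parent.
    ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
    child = list(dict.fromkeys(a[:ca] + b[cb:]))  # concatenate, drop duplicates
    return child or [random.randrange(N_FEATURES)]

def mutate(ind, p_add=0.3, p_drop=0.3):
    ind = list(ind)
    if random.random() < p_add:                    # grow: add a random cue phrase
        ind.append(random.randrange(N_FEATURES))
    if len(ind) > 1 and random.random() < p_drop:  # shrink: drop a cue phrase
        ind.pop(random.randrange(len(ind)))
    return list(dict.fromkeys(ind))
```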

Dynamic limits for bloat control in genetic programming and a review of past and current bloat theories

Abstract  Bloat is an excess of code growth without a corresponding improvement in fitness. This is a serious problem in Genetic Programming, often leading to the stagnation of the evolutionary process. Here we provide an extensive review of all the past and current theories regarding why bloat occurs. After more than 15 years of intense research, recent work is shedding new light on what may be the real reasons for the bloat phenomenon. We then introduce Dynamic Limits, our new approach to bloat control. It implements a dynamic limit that can be raised or lowered, depending on the best solution found so far, and can be applied either to the depth or size of the programs being evolved. Four problems were used as a benchmark to study the efficiency of Dynamic Limits. The quality of the results is highly dependent on the type of limit used: depth or size. The depth variants performed very well across the set of problems studied, achieving similar fitness to the baseline technique while using significantly smaller trees. Unlike many other methods available so far, Dynamic Limits does not require specific genetic operators, modifications in fitness evaluation or different selection schemes, nor does it add any parameters to the search process. Furthermore, its implementation is simple and its efficiency does not rely on the usage of a static upper limit. The results are discussed in the context of the newest bloat theory.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9075-9
  • Authors
    • Sara Silva, CISUC, University of Coimbra, Polo II – Pinhal de Marrocos, 3030-290 Coimbra, Portugal
    • Ernesto Costa, CISUC, University of Coimbra, Polo II – Pinhal de Marrocos, 3030-290 Coimbra, Portugal
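
To give a rough feel for the mechanism, here is a deliberately simplified acceptance rule in the spirit of a dynamic depth or size limit. The class and its details are our own sketch; the published Dynamic Limits method, including the conditions under which the limit is lowered again, differs in its specifics.

```python
class DynamicLimit:
    """Simplified dynamic limit on program depth (or size) for bloat control."""

    def __init__(self, initial_limit):
        self.limit = initial_limit
        self.best_fitness = float("-inf")

    def accept(self, measure, fitness):
        """measure is the depth or size of a candidate program."""
        if measure <= self.limit:
            # Within the current limit: always acceptable.
            self.best_fitness = max(self.best_fitness, fitness)
            return True
        if fitness > self.best_fitness:
            # A new best-of-run that exceeds the limit: raise the limit to admit it.
            self.limit = measure
            self.best_fitness = fitness
            return True
        return False  # too large and not better than the best found so far

limit = DynamicLimit(initial_limit=6)
print(limit.accept(measure=5, fitness=0.4))   # True: within limit
print(limit.accept(measure=9, fitness=0.3))   # False: over limit, not a new best
print(limit.accept(measure=9, fitness=0.7))   # True: new best, limit raised to 9
```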

Editorial introduction

  • Content Type Journal Article
  • Category Editorial
  • DOI 10.1007/s10710-008-9077-7
  • Authors
    • Lee Spector, Hampshire College, School of Cognitive Science, Amherst, MA 01002, USA

Using enhanced genetic programming techniques for evolving classifiers in the context of medical diagnosis

Abstract  There are several data-based methods in the field of artificial intelligence which are nowadays frequently used for analyzing classification problems in the context of medical applications. As we show in this paper, the application of enhanced evolutionary computation techniques to classification problems has the potential to evolve classifiers of even higher quality than those trained by standard machine learning methods. On the basis of five medical benchmark classification problems taken from the UCI repository, as well as the Melanoma data set (prepared by members of the Department of Dermatology of the Medical University of Vienna), we document that the enhanced genetic programming approach presented here is able to produce comparable or even better results than linear modeling methods, artificial neural networks, kNN classification, support vector machines and also various genetic programming approaches.

  • Content Type Journal Article
  • Category Original Paper
  • DOI 10.1007/s10710-008-9076-8
  • Authors
    • Stephan M. Winkler, Research Center Hagenberg, Upper Austria University of Applied Sciences, Softwarepark 11, 4232 Hagenberg, Austria
    • Michael Affenzeller, Upper Austria University of Applied Sciences, Department of Software Engineering, Softwarepark 11, 4232 Hagenberg, Austria
    • Stefan Wagner, Upper Austria University of Applied Sciences, Department of Software Engineering, Softwarepark 11, 4232 Hagenberg, Austria