their accumulated inputs. This contrasts with biological neurons, whose transfer functions are dynamic and driven by a rich
internal structure. Our artificial neural network approach, which we call state-enhanced neural networks, uses nodes with dynamic transfer functions based on an n-dimensional real-valued internal state. This internal state provides the nodes with memory of past inputs and computations.
The state update rules, which determine the internal dynamics of a node, are optimized by an evolutionary algorithm to fit
a particular task and environment. We demonstrate the effectiveness of the approach in comparison to certain types of recurrent
neural networks using a suite of partially observable Markov decision processes as test problems. These problems involve both
sequence detection and simulated mice in mazes, and include four advanced benchmarks proposed by other researchers.
- Content Type Journal Article
- DOI 10.1007/s12065-009-0017-0
- Authors
- David Montana, BBN Technologies 10 Moulton Street Cambridge MA 02138 USA
- Eric VanWyk, BBN Technologies 10 Moulton Street Cambridge MA 02138 USA
- Marshall Brinn, BBN Technologies 10 Moulton Street Cambridge MA 02138 USA
- Joshua Montana, BBN Technologies 10 Moulton Street Cambridge MA 02138 USA
- Stephen Milligan, BBN Technologies 10 Moulton Street Cambridge MA 02138 USA
- Journal Evolutionary Intelligence
- Online ISSN 1864-5917
- Print ISSN 1864-5909
- Journal Volume 1
- Journal Issue 4