Parallel and Distributed Computational Intelligence book is out for pre-order

“Parallel and Distributed Computational Intelligence”, edited by Francisco Fernández de Vega & Erick Cantú-Paz and published by Springer, is out for pre-order. The first chapter of the book, “When Huge is Routine: Scaling Genetic Algorithms and Estimation of Distribution Algorithms via Data-Intensive Computing”, was written together with coauthors Abhishek Verma, Roy Campbell, and David E. Goldberg, and describes how data-intensive computing can help push the size of problems that GAs and EDAs can address. You may find the abstract of the book below.

Abstract:

The growing success of biologically inspired algorithms in solving large and complex problems has spawned many interesting areas of research. Over the years, one of the mainstays in bio-inspired research has been the exploitation of parallel and distributed environments to speedup computations and to enrich the algorithms. From the early days of research on bio-inspired algorithms, their inherently parallel nature was recognized and different parallelization approaches have been explored. Parallel algorithms promise reductions in execution time and open the door to solve increasingly larger problems. But parallel platforms also inspire new bio-inspired parallel algorithms that, while similar to their sequential counterparts, explore search spaces differently and offer improvements in solution quality.

The objective in editing this book was to assemble a sample of the best work in parallel and distributed biologically inspired algorithms. The editors invited researchers in different domains to submit their work. They aimed to include diverse topics to appeal to a wide audience. Some of the chapters summarize work that has been ongoing for several years, while others describe more recent exploratory work. Collectively, these works offer a global snapshot of the most recent efforts of bioinspired algorithms’ researchers aiming at profiting from parallel and distributed computer architectures—including GPUs, clusters, grids, volunteer computing, and p2p networks, as well as multi-core processors. This volume will be of value to a wide set of readers, including, but not limited to, specialists in bioinspired algorithms and parallel and distributed computing, as well as computer science students trying to figure out new paths towards the future of computational intelligence.

CIG-2009 proceedings available online

Pier Luca Lanzi just sent a note saying that the proceedings of the 2009 Symposium on Computational Intelligence and Games (CIG-2009) are now available online at
http://www.ieee-cig.org/cig-2009/Proceedings/

Reinforcement Learning, Logic and Evolutionary Computation

Drew Mellor is pleased to announce the publication of his new LCS book, Reinforcement Learning, Logic and Evolutionary Computation: A Learning Classifier System Approach to Relational Reinforcement Learning, published by Lambert Academic Publishing (ISBN 978-3-8383-0196-9).

Abstract:

Reinforcement learning (RL) consists of methods that automatically adjust behaviour based on numerical rewards and penalties. While use of the attribute-value framework is widespread in RL, it has limited expressive power. Logic languages, such as first-order logic, provide a more expressive framework, and their use in RL has led to the field of relational RL. This thesis develops a system for relational RL based on learning classifier systems (LCS). In brief, the system generates, evolves, and evaluates a population of condition-action rules, which take the form of definite clauses over first-order logic. Adopting the LCS approach allows the resulting system to integrate several desirable qualities: model-free and “tabula rasa” learning; a Markov Decision Process problem model; and importantly, support for variables as a principal mechanism for generalisation. The utility of variables is demonstrated by the system’s ability to learn genuinely scalable behaviour: behaviour learnt in small environments that translates to arbitrarily large versions of the environment without the need for retraining.

Learning Classifier Systems, Springer LNAI 4998

Learning Classifier Systems:
10th International Workshop, IWLCS 2006, Seattle, WA, USA, July 8, 2006, and 11th International Workshop, IWLCS 2007, London, UK, July 8, 2007, Revised Selected Papers
Series: Lecture Notes in Computer Science
Subseries: Lecture Notes in Artificial Intelligence, Vol. 4998
Bacardit, J.; Bernadó-Mansilla, E.; Butz, M.V.; Kovacs, T.; Llorà, X.; Takadama, K. (Eds.)
2008, X, 307 p., Softcover
ISBN: 978-3-540-88137-7


Abstract

This book constitutes the thoroughly refereed joint post-conference proceedings of two consecutive International Workshops on Learning Classifier Systems that took place in Seattle, WA, USA in July 2006, and in London, UK, in July 2007, both hosted by the Genetic and Evolutionary Computation Conference, GECCO. The 14 revised full papers presented were carefully reviewed and selected from the workshop contributions. The papers are organized in topical sections on knowledge representation, analysis of the system, mechanisms, new directions, as well as applications.

Design and Analysis of Learning Classifier Systems: A Probabilistic Approach

The book Design and Analysis of Learning Classifier Systems: A Probabilistic Approach by Jan Drugowitsch presents a machine learning approach to Learning Classifier Systems. In the author’s own words:

This book provides a comprehensive introduction to the design and analysis of Learning Classifier Systems (LCS) from the perspective of machine learning. LCS are a family of methods for handling unsupervised learning, supervised learning and sequential decision tasks by decomposing larger problem spaces into easy-to-handle subproblems. Contrary to commonly approaching their design and analysis from the viewpoint of evolutionary computation, this book instead promotes a probabilistic model-based approach, based on their defining question “What is an LCS supposed to learn?”. Systematically following this approach, it is shown how generic machine learning methods can be applied to design LCS algorithms from the first principles of their underlying probabilistic model, which in this book is, for illustrative purposes, closely related to the currently prominent XCS classifier system. The approach is holistic in the sense that the uniform goal-driven design metaphor essentially covers all aspects of LCS and puts them on a solid foundation, in addition to enabling the transfer of the theoretical foundation of the various applied machine learning methods onto LCS. Thus, it not only advances the analysis of existing LCS but also puts forward the design of new LCS within that same framework.

The Entrepreneurial Engineer

Entrepreneurial times call for The Entrepreneurial Engineer

In an age when technology and business are merging as never before, today’s engineers need skills matched with the times. Today, career success as an engineer is determined as much by an ability to communicate with coworkers, sell ideas, and manage time as by talent at manipulating a Laplace transform, coding a Java® object, or analyzing a statically indeterminate structure.

This book covers those nontechnical skills needed by today’s entrepreneurial engineers who mix strong technical know-how, business and organizational prowess, and an alert eye for opportunity. Author David Goldberg unlocks the keys to ten core competencies at the heart of what entrepreneurial engineers need to master to be effective in a fast-moving world of deals, teams, startups, and innovating corporations. You’ll discover how to:

  • Feel the essence, and the joys, of engineering
  • Examine personal motivation and set goals
  • Master time management and organization
  • Write fast and well under pressure
  • Prepare and deliver effective presentations
  • Understand and practice good human relations
  • Act ethically in matters large, small, and engineering
  • Assess technology opportunities
  • Understand teams, leadership, culture, and the organization of organizations

Genetic Algorithms in Search, Optimization, and Machine Learning

Reviews from amazon.com:
David Goldberg’s Genetic Algorithms in Search, Optimization and Machine Learning is by far the bestselling introduction to genetic algorithms. Goldberg is one of the preeminent researchers in the field–he has published over 100 research articles on genetic algorithms and is a student of John Holland, the father of genetic algorithms–and his deep understanding of the material shines through. The book contains a complete listing of a simple genetic algorithm in Pascal, which C programmers can easily understand. The book covers all of the important topics in the field, including crossover, mutation, classifier systems, and fitness scaling, giving a novice with a computer science background enough information to implement a genetic algorithm and describe genetic algorithms to a friend.
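The book’s listing is in Pascal, but to give a flavor of what such a simple genetic algorithm looks like, here is a minimal sketch in Python. It is not the book’s code: the one-max fitness function, parameter values, and tournament selection are illustrative choices.

```python
import random

def simple_ga(genome_len=20, pop_size=30, generations=60,
              crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal generational GA maximizing the number of 1-bits (one-max)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: keep the fitter of two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:
                # One-point crossover.
                cut = rng.randrange(1, genome_len)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                # Bit-flip mutation.
                for i in range(genome_len):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = simple_ga()
print(sum(best))
```

Even this bare-bones version exhibits the crossover, mutation, and selection mechanics the book builds on; fitness scaling and classifier systems, also covered in the book, sit on top of the same loop.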

Goldberg, David E.

Advances at the frontier of LCS: LNCS 4399

“Advances at the frontier of Learning Classifier Systems” has been shipped to Springer for the final stages of editing and printing. The volume is going to be printed as Springer’s LNCS 4399 volume. When we started editing this volume, we faced the choice of organizing the contents in a purely chronological fashion or as a sequence of related topics that help walk the reader across the different areas. In the end we decided to organize the contents by area, breaking the timeline a little. This was not a simple endeavor, as we could have organized the material using multiple criteria. The taxonomy below is our humble effort to provide a coherent grouping. Needless to say, some works may fall in more than one category. Below, you may find the tentative table of contents of the volume. It may change a little, but we will keep you posted as soon as we hear from Springer.

Part I. Knowledge representation

  • 1. Analyzing Parameter Sensitivity and Classifier Representations for Real-valued XCS
    by Atsushi Wada, Keiki Takadama, Katsunori Shimohara, and Osamu Katai
    4399 – 001
  • 2. Use of Learning Classifier System for Inferring Natural Language Grammar
    by Olgierd Unold and Grzegorz Dabrowski
    4399 – 018
  • 3. Backpropagation in Accuracy-based Neural Learning Classifier Systems
    by Toby O’Hara and Larry Bull
    4399 – 026
  • 4. Binary Rule Encoding Schemes: A Study Using The Compact Classifier System
    by Xavier Llorà, Kumara Sastry, and David E. Goldberg
    4399 – 041

Part II. Mechanisms

  • 5. Bloat control and generalization pressure using the minimum description length principle for a Pittsburgh approach Learning Classifier System
    by Jaume Bacardit and Josep Maria Garrell
    4399 – 061
  • 6. Post-processing Clustering to Decrease Variability in XCS Induced Rulesets
    by Flavio Baronti, Alessandro Passaro, and Antonina Starita
    4399 – 081
  • 7. LCSE: Learning Classifier System Ensemble for Incremental Medical Instances
    by Yang Gao, Joshua Zhexue Huang, Hongqiang Rong, and Da-qian Gu
    4399 – 094
  • 8. Effect of Pure Error-Based Fitness in XCS
    by Martin V. Butz, David E. Goldberg, and Pier Luca Lanzi
    4399 – 105
  • 9. A Fuzzy System to Control Exploration Rate in XCS
    by Ali Hamzeh and Adel Rahmani
    4399 – 116
  • 10. Counter Example for Q-bucket-brigade under Prediction Problem
    by Atsushi Wada, Keiki Takadama, and Katsunori Shimohara
    4399 – 130
  • 11. An Experimental Comparison between ATNoSFERES and ACS
    by Samuel Landau, Olivier Sigaud, Sébastien Picault, and Pierre Gérard
    4399 – 146
  • 12. The Class Imbalance Problem in UCS Classifier System: A Preliminary Study
    by Albert Orriols-Puig and Ester Bernadó-Mansilla
    4399 – 164
  • 13. Three Methods for Covering Missing Input Data in XCS
    by John H. Holmes, Jennifer A. Sager, and Warren B. Bilker
    4399 – 184

Part III. New Directions

  • 14. A Hyper-Heuristic Framework with XCS: Learning to Create Novel Problem-Solving Algorithms Constructed from Simpler Algorithmic Ingredients
    by Javier G. Marín-Blázquez and Sonia Schulenburg
    4399 – 197
  • 15. Adaptive value function approximations in classifier systems
    by Lashon B. Booker
    4399 – 224
  • 16. Three Architectures for Continuous Action
    by Stewart W. Wilson
    4399 – 244
  • 17. A Formal Relationship Between Ant Colony Optimizers and Classifier Systems
    by Lawrence Davis
    4399 – 263
  • 18. Detection of Sentinel Predictor-Class Associations with XCS: A Sensitivity Analysis
    by John H. Holmes
    4399 – 276

Part IV. Application-oriented research and tools

  • 19. Data Mining in Learning Classifier Systems: Comparing XCS with GAssist
    by Jaume Bacardit and Martin V. Butz
    4399 – 290
  • 20. Improving the Performance of a Pittsburgh Learning Classifier System Using a Default Rule
    by Jaume Bacardit, David E. Goldberg, and Martin V. Butz
    4399 – 299
  • 21. Using XCS to Describe Continuous-Valued Problem Spaces
    by David Wyatt, Larry Bull, and Ian Parmee
    4399 – 318
  • 22. The EpiXCS Workbench: A Tool for Experimentation and Visualization
    by John H. Holmes and Jennifer A. Sager
    4399 – 343

Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications (Studies in Computational Intelligence)

This book focuses like a laser beam on one of the hottest topics in evolutionary computation over the last decade or so: estimation of distribution algorithms (EDAs). EDAs are an important current technique that is leading to breakthroughs in genetic and evolutionary computation and in optimization more generally. I’m putting Scalable Optimization via Probabilistic Modeling in a prominent place in my library, and I urge you to do so as well. This volume summarizes the state of the art at the same time it points to where that art is going. Buy it, read it, and take its lessons to heart.

David E Goldberg, University of Illinois at Urbana-Champaign

This book is an excellent compilation of carefully selected topics in estimation of distribution algorithms—search algorithms that combine ideas from evolutionary algorithms and machine learning. The book covers a broad spectrum of important subjects ranging from design of robust and scalable optimization algorithms to efficiency enhancements and applications of these algorithms. The book should be of interest to theoreticians and practitioners alike, and is a must-have resource for those interested in stochastic optimization in general, and genetic and evolutionary algorithms in particular.
John R. Koza, Stanford University

This edited book portrays population-based optimization algorithms and applications, covering the entire gamut of optimization problems having single and multiple objectives, discrete and continuous variables, serial and parallel computations, and simple and complex function models. Anyone interested in population-based optimization methods, either knowingly or unknowingly, uses some form of an estimation of distribution algorithm (EDA). This book is an eye-opener and a must-read text, covering easy-to-read yet erudite articles on established and emerging EDA methodologies from real experts in the field.
Kalyanmoy Deb, Indian Institute of Technology Kanpur

This book is an excellent comprehensive resource on estimation of distribution algorithms. It can serve as the primary EDA resource for the practitioner or researcher. The book includes chapters from all major contributors to the EDA state of the art and covers the spectrum from EDA design to applications. These algorithms strategically combine the advantages of genetic and evolutionary computation with the advantages of statistical, model-building machine learning techniques. EDAs are useful to solve classes of difficult real-world problems in a robust and scalable manner.
Una-May O’Reilly, Massachusetts Institute of Technology

Machine-learning methods continue to stir the public’s imagination due to their futuristic implications. But probability-based optimization methods can have great impact now on many scientific multiscale and engineering design problems, especially with the use of efficient and competent genetic algorithms (GAs), which are the basis of the present volume. Even though efficient and competent GAs outperform standard techniques and prevent negative issues, such as solution stagnation, inherent in the older but more well-known GAs, they remain less known or embraced in the scientific and engineering communities. To that end, the editors have brought together a selection of experts who (1) introduce the current methodology and lexicography of the field with illustrative discussions and highly useful references, (2) exemplify these new techniques that dramatically improve performance on provably hard problems, and (3) provide real-world applications of these techniques, such as antenna design. As one who has strayed into the use of genetic algorithms and genetic programming for multiscale modeling in materials science, I can say it would have been personally more useful if this book had come out five years ago, but, for my students, it will be a boon.
Duane D. Johnson, University of Illinois at Urbana-Champaign

Scalable optimization via probabilistic modeling: From algorithms to applications

The book “Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications” edited by Martin Pelikan, Kumara Sastry, and Erick Cantu-Paz has just been published by Springer.

Estimation of distribution algorithms combine evolutionary computation and machine learning to provide a class of robust and scalable optimization techniques applicable to broad classes of difficult problems. Scalable optimization via Probabilistic Modeling compiles articles by some of the leading experts in academia and industry that range from design and analysis to efficiency enhancement and real-world applications of estimation of distribution algorithms. The book is written for the general audience and should be of interest for optimization researchers and practitioners alike.A sample chapter can be downloaded here and more Information can be found at http://medal.cs.umsl.edu/scalable-optimization-book/