The GPU on the simulation of cellular computing models

Abstract  

Membrane Computing is a discipline that aims to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, and its efficient simulation poses challenges in three different aspects: the intrinsic massive parallelism of P systems, an exponential computational workspace, and a non-intensive floating point nature. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time on different instances of Graphics Processing Units (GPUs). To handle the exponential workspace created by the P system computation efficiently, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run, and on Nvidia's new hardware generation, Fermi, for a total speed-up exceeding four orders of magnitude when running our simulations on the Tesla S2050 server.
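
The following is a minimal, hypothetical sketch (not the authors' simulator) of why the workspace grows exponentially: in the SAT-solving P system family, membrane division rules double the membrane population once per propositional variable, which is what the dynamic queues in the paper must accommodate. All names here are illustrative.

```python
# Illustrative sketch only: a dynamic queue of membranes whose population doubles
# with each division step, as when a P system with active membranes generates one
# membrane per truth assignment of a SAT instance.
from collections import deque

def simulate_divisions(num_variables):
    # Each membrane is represented simply by its partial truth assignment.
    queue = deque([()])                      # start with a single membrane
    for _ in range(num_variables):           # one division step per variable
        next_queue = deque()
        while queue:
            assignment = queue.popleft()
            # Division rule: one membrane becomes two, one per truth value.
            next_queue.append(assignment + (False,))
            next_queue.append(assignment + (True,))
        queue = next_queue                   # workspace has doubled
    return queue

if __name__ == "__main__":
    membranes = simulate_divisions(4)
    print(len(membranes))                    # 16 membranes = 2^4 assignments
```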

  • Content Type Journal Article
  • Category Focus
  • Pages 1-16
  • DOI 10.1007/s00500-011-0716-1
  • Authors
    • José M. Cecilia, Computer Engineering and Technology Department, University of Murcia, 30100 Murcia, Spain
    • José M. García, Computer Engineering and Technology Department, University of Murcia, 30100 Murcia, Spain
    • Ginés D. Guerrero, Computer Engineering and Technology Department, University of Murcia, 30100 Murcia, Spain
    • Miguel A. Martínez-del-Amor, Computer Science and Artificial Intelligence Department, University of Seville, 41012 Seville, Spain
    • Mario J. Pérez-Jiménez, Computer Science and Artificial Intelligence Department, University of Seville, 41012 Seville, Spain
    • Manuel Ujaldón, Computer Architecture Department, University of Malaga, 29071 Malaga, Spain

Speeding up the evaluation phase of GP classification algorithms on GPUs

Abstract  

The efficiency of evolutionary algorithms has become a widely studied problem, since it is one of the major weaknesses of these algorithms. Specifically, when these algorithms are employed for the classification task, the computational time they require grows excessively as the problem complexity increases. This paper proposes an efficient, scalable, and massively parallel evaluation model using the NVIDIA CUDA GPU programming model to speed up the fitness calculation phase and greatly reduce the computational time. Experimental results show that our model significantly reduces the computational time compared to the sequential approach, reaching a speedup of up to 820×. Moreover, the model is able to scale to multiple GPU devices and can easily be extended to any evolutionary algorithm.
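
As a hedged illustration (not the paper's CUDA kernels), the sketch below shows why fitness evaluation of classification rules parallelizes so well: every (individual, training instance) pair can be evaluated independently and then reduced per individual. NumPy vectorization stands in for the GPU here, and all names are hypothetical.

```python
# Illustrative stand-in for data-parallel fitness evaluation: evaluate all
# (individual, instance) pairs at once, then reduce to one fitness per individual.
import numpy as np

def evaluate_population(predictions, labels):
    """predictions: (population, instances) boolean matrix of rule outcomes.
    labels: (instances,) boolean vector of true class membership.
    Returns one fitness value per individual (here, plain accuracy)."""
    hits = predictions == labels[np.newaxis, :]   # all pairs compared in parallel
    return hits.mean(axis=1)                      # per-individual reduction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.integers(0, 2, size=(100, 10_000)).astype(bool)   # 100 individuals
    labels = rng.integers(0, 2, size=10_000).astype(bool)
    print(evaluate_population(preds, labels)[:5])
```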

  • Content Type Journal Article
  • Category Focus
  • Pages 1-16
  • DOI 10.1007/s00500-011-0713-4
  • Authors
    • Alberto Cano, Department of Computing and Numerical Analysis, University of Córdoba, 14071 Córdoba, Spain
    • Amelia Zafra, Department of Computing and Numerical Analysis, University of Córdoba, 14071 Córdoba, Spain
    • Sebastián Ventura, Department of Computing and Numerical Analysis, University of Córdoba, 14071 Córdoba, Spain

LGTBase: LARGE-like GlcNAc Transferase Database

Abstract  

Information technology has greatly speeded up discovery in biomedical research. However, most modern biomedical scientists are not familiar with the new technology. To overcome this difficulty, an intelligent information system is needed to help scientists design and conduct research projects. Previously, we designed a static knowledge management system for the LARGE-like glycosyltransferase, a novel GlcNAc transferase involved in human diseases such as muscular dystrophy and human meningioma, by integrating data from public databases. The characteristic protein structure of LARGE protein family members includes an N-terminal cytoplasmic domain, a transmembrane region, a coiled-coil motif, and a putative catalytic domain with the conserved aspartate-any residue-aspartate motif and a conserved protein structural domain. In this paper, we describe an intelligent information system for the LARGE-like GlcNAc Transferase Database, built by setting up an automatically updating databank for genes of the LARGE family and by integrating several bioinformatics tools that can identify characteristic structural domains of the protein family.
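
As a minimal illustration (not the LGTBase pipeline itself), the snippet below scans a protein sequence for the conserved aspartate-any residue-aspartate motif mentioned in the abstract, commonly abbreviated DxD. The example sequence and function name are made up.

```python
# Hypothetical example: find every aspartate-any-aspartate (D-x-D) occurrence
# in a protein sequence; D = aspartate, '.' = any residue.
import re

def find_dxd_motifs(sequence):
    """Return (position, motif) pairs for every D-x-D occurrence, overlaps included."""
    return [(m.start(), sequence[m.start():m.start() + 3])
            for m in re.finditer(r"(?=(D.D))", sequence)]

if __name__ == "__main__":
    example = "MKTLLDADWEVDGDKRRAQ"          # made-up sequence
    print(find_dxd_motifs(example))          # [(5, 'DAD'), (11, 'DGD')]
```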

  • Content Type Journal Article
  • Category Focus
  • Pages 1-9
  • DOI 10.1007/s00500-011-0723-2
  • Authors
    • Kuo-Yuan Hwa, Department of Molecular Science and Engineering, Institute of Organic and Polymeric Materials, Centre for Biomedical Industries, National Taipei University of Technology, Taipei, Taiwan, ROC
    • Wan Man Lin, Institute of Organic and Polymeric Materials, National Taipei University of Technology, Taipei, Taiwan, ROC
    • Chueh-Pai Li, Department of Molecular Science and Engineering, National Taipei University of Technology, Taipei, Taiwan, ROC

Wavelets-based facial expression recognition using a bank of support vector machines

Abstract  

A human face not only plays a role in the identification of an individual but also communicates useful information about a person’s emotional state at a particular time. It is no wonder that automatic facial expression recognition has become an area of great interest within the computer science, psychology, medicine, and human–computer interaction research communities. Various feature extraction techniques, ranging from statistical to geometrical, have been used for the recognition of expressions from static images as well as real-time video. In this paper, we present a method for automatic recognition of facial expressions from face images by providing discrete wavelet transform features to a bank of seven parallel support vector machines (SVMs). Each SVM is trained to recognize a particular facial expression, so that it is most sensitive to that expression. Multi-class classification is achieved by combining multiple SVMs performing binary classification using the one-against-all approach. The outputs of all SVMs are combined using a maximum function. The classification efficiency is tested on static images from the publicly available Japanese Female Facial Expression database. The experiments using the proposed method demonstrate promising results.
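
A hedged sketch of the one-against-all SVM bank described in the abstract follows. The discrete wavelet transform feature extraction is replaced by random vectors, and the seven expression labels are assumed rather than taken from the paper; only the bank-plus-maximum combination scheme mirrors the abstract.

```python
# Sketch: one binary SVM per expression, combined with a maximum over decision values.
import numpy as np
from sklearn.svm import SVC

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "neutral", "sadness", "surprise"]      # assumed label set

def train_svm_bank(features, labels):
    """Train one binary SVM per expression (that expression vs. all others)."""
    bank = {}
    for expr in EXPRESSIONS:
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(features, (labels == expr).astype(int))
        bank[expr] = clf
    return bank

def classify(bank, feature_vector):
    """Combine the SVM outputs with a maximum function over decision values."""
    scores = {expr: clf.decision_function(feature_vector.reshape(1, -1))[0]
              for expr, clf in bank.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(70, 64))                     # stand-in DWT feature vectors
    y = np.repeat(EXPRESSIONS, 10)                    # 10 samples per expression
    bank = train_svm_bank(X, y)
    print(classify(bank, X[0]))
```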

  • Content Type Journal Article
  • Category Focus
  • Pages 1-11
  • DOI 10.1007/s00500-011-0721-4
  • Authors
    • Sidra Batool Kazmi, Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
    • Qurat-ul-Ain, Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
    • M. Arfan Jaffar, Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan

Special issue on evolutionary computation on general purpose graphics processing units

  • Content Type Journal Article
  • Category Editorial
  • Pages 1-2
  • DOI 10.1007/s00500-011-0719-y
  • Authors
    • José L. Risco-Martín, Department of Computer Architecture and Automation, Universidad Complutense de Madrid, Madrid, Spain
    • Juan Lanchares, Department of Computer Architecture and Automation, Universidad Complutense de Madrid, Madrid, Spain
    • Carlos A. Coello-Coello, Computer Science Department, CINVESTAV-IPN, Mexico City, Mexico

An artificial bee colony algorithm for the minimum routing cost spanning tree problem

Abstract  

Given a connected, weighted, and undirected graph, the minimum routing cost spanning tree problem seeks a spanning tree of minimum routing cost on this graph, where the routing cost of a spanning tree is defined as the sum of the costs of the paths connecting all possible pairs of distinct vertices in that spanning tree. This problem has several important applications in network design and computational biology. In this paper, we propose an artificial bee colony (ABC) algorithm-based approach for this problem. We compare our approach against the four best methods reported in the literature: two genetic algorithms, a stochastic hill climber, and a perturbation-based local search. Computational results show the superiority of our ABC approach over these approaches.
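
The sketch below illustrates only the objective function defined in the abstract (not the ABC algorithm itself): the routing cost of a spanning tree is the sum, over all unordered pairs of distinct vertices, of the cost of the unique tree path connecting them. The data structures and names are illustrative.

```python
# Routing cost of a spanning tree: sum of pairwise tree-path costs.
from itertools import combinations

def routing_cost(vertices, tree_edges):
    """tree_edges: dict {(u, v): weight} describing a spanning tree."""
    adj = {v: [] for v in vertices}
    for (u, v), w in tree_edges.items():
        adj[u].append((v, w))
        adj[v].append((u, w))

    def path_cost(src, dst):
        # Depth-first search; the path between two vertices of a tree is unique.
        stack = [(src, None, 0)]
        while stack:
            node, parent, dist = stack.pop()
            if node == dst:
                return dist
            for nxt, w in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, dist + w))
        raise ValueError("vertices not connected")

    return sum(path_cost(u, v) for u, v in combinations(vertices, 2))

if __name__ == "__main__":
    # Hypothetical 4-vertex star tree centred at vertex 0, all edge weights 1.
    tree = {(0, 1): 1, (0, 2): 1, (0, 3): 1}
    print(routing_cost([0, 1, 2, 3], tree))   # 3 centre-leaf pairs + 3 leaf-leaf pairs = 9
```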

  • Content Type Journal Article
  • Category Original Paper
  • Pages 1-11
  • DOI 10.1007/s00500-011-0711-6
  • Authors
    • Alok Singh, Department of Computer and Information Sciences, University of Hyderabad, Hyderabad, 500046 India
    • Shyam Sundar, Department of Computer and Information Sciences, University of Hyderabad, Hyderabad, 500046 India

Stock trading strategy creation using GP on GPU

Abstract  

This paper investigates the speed improvements available when using a graphics processing unit (GPU) for the evaluation of individuals in a genetic programming (GP) environment. An existing GP system is modified to enable parallel evaluation of individuals on a GPU device. Several issues related to implementing GP on GPUs are discussed, including how to perform tree-based GP on a device without recursion support, as well as the effect that proper memory layout can have on speed increases when using CUDA-enabled NVIDIA GPU devices. The specific GP implementation is designed to evolve stock trading strategies using technical analysis indicators. The second goal of this research is to investigate the possible improvement in performance when training individuals on a larger number of stocks and training days. This increased training size (nearly 100,000 training points) is enabled by the speedups realized through GPU evaluation. Several different scenarios were used to test various speed optimizations of GP evaluation on the GPU device, with a peak speedup factor of over 600 (when compared to sequential evaluation on a 2.4 GHz CPU). It is also found that increasing the number of stocks and the length of the training period can result in higher out-of-training testing profitability.
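
One common way to evaluate tree-based GP individuals on hardware without recursion support, mentioned in the abstract, is to linearise each tree into postfix order and evaluate it with an explicit operand stack. The sketch below shows that idea under assumed names; the paper's actual encoding and memory layout may differ.

```python
# Hedged sketch: stack-based (non-recursive) evaluation of a linearised GP tree.
import operator

BINARY_OPS = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "max": max, "min": min}

def evaluate_postfix(program, variables):
    """program: list of tokens in postfix order, e.g. ['price', 'sma', '-'].
    variables: dict mapping terminal names (e.g. indicator values) to floats."""
    stack = []
    for token in program:
        if token in BINARY_OPS:
            right = stack.pop()
            left = stack.pop()
            stack.append(BINARY_OPS[token](left, right))
        else:
            # Terminal: a named indicator value or a numeric constant.
            stack.append(variables[token] if token in variables else token)
    return stack.pop()

if __name__ == "__main__":
    # Hypothetical trading rule: (price - sma) * 2, written in postfix.
    program = ["price", "sma", "-", 2.0, "*"]
    print(evaluate_postfix(program, {"price": 101.5, "sma": 100.0}))  # 3.0
```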

  • Content Type Journal Article
  • Category Focus
  • Pages 1-13
  • DOI 10.1007/s00500-011-0717-0
  • Authors
    • Dave McKenney, School of Computer Science, Carleton University, Ottawa, K1S 5B6 Canada
    • Tony White, School of Computer Science, Carleton University, Ottawa, K1S 5B6 Canada

Numerical solution of fuzzy Fredholm integral equations by the Lagrange interpolation based on the extension principle

Abstract  

In this paper, a numerical procedure is proposed for solving the fuzzy linear Fredholm integral equations of the second kind
by using Lagrange interpolation based on the extension principle. For this purpose, a numerical algorithm is presented, and
two examples are solved by applying this algorithm. Moreover, a theorem is proved to show the convergence of the algorithm
and obtain an upper bound for the distance between the exact and the numerical solutions.
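
For orientation only, the display below recalls the standard crisp background forms underlying the method: a Fredholm integral equation of the second kind and the Lagrange interpolation polynomial on nodes x_0, ..., x_n. The paper's fuzzy formulation via the extension principle is not reproduced here.

```latex
% Standard crisp forms only, not the paper's exact fuzzy formulation.
\[
  u(x) \;=\; f(x) \;+\; \lambda \int_a^b k(x, t)\, u(t)\, dt ,
  \qquad
  L_n(x) \;=\; \sum_{i=0}^{n} u(x_i) \prod_{\substack{j=0 \\ j \neq i}}^{n}
               \frac{x - x_j}{x_i - x_j}.
\]
```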

  • Content Type Journal Article
  • Category Original Paper
  • Pages 1-8
  • DOI 10.1007/s00500-011-0706-3
  • Authors
    • M. A. Fariborzi Araghi, Department of Mathematics, Central Tehran Branch, Islamic Azad University, P.O. Box 13185.768, Tehran, Iran
    • N. Parandin, Department of Mathematics, Islamic Azad University of Kermanshah Branch, Kermanshah, Iran

Fuzzy XNOR connectives in fuzzy logic

Abstract  

In this paper, a generalized XNOR connective, called the fuzzy XNOR connective, is introduced. First, the definition of the fuzzy XNOR connective is proposed and its properties are analyzed. Then, two forms of fuzzy XNOR connectives are obtained by the composition of t-norms, t-conorms, and fuzzy negations. Moreover, the relationships between fuzzy XNOR connectives and the fuzzy Xor connectives introduced in Bedregal et al. (Electron Notes Theor Comput Sci 247:5–18, 2009) are discussed. Finally, two new kinds of fuzzy implications are constructed from fuzzy XNOR connectives and other connectives, and their main properties are also studied.
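
To illustrate how such a connective can be composed from a t-norm T, a t-conorm S, and a fuzzy negation N, one natural candidate is shown below; it is given only as an example and need not coincide with the two forms obtained in the paper.

```latex
% One natural composition; reduces to the classical XNOR (equivalence) on {0,1}.
\[
  E_{T,S,N}(x, y) \;=\; S\bigl(T(x, y),\, T(N(x), N(y))\bigr),
  \qquad x, y \in [0, 1].
\]
```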

  • Content Type Journal Article
  • Category Original Paper
  • Pages 1-9
  • DOI 10.1007/s00500-011-0708-1
  • Authors
    • Yingfang Li, Department of Mathematics, Southwest Jiaotong University, Chengdu, 610031 Sichuan, People’s Republic of China
    • Keyun Qin, Department of Mathematics, Southwest Jiaotong University, Chengdu, 610031 Sichuan, People’s Republic of China
    • Xingxing He, Intelligent Control Development Center, Southwest Jiaotong University, Chengdu, 610031 Sichuan, People’s Republic of China

Robust intelligent backstepping tracking control for wheeled inverted pendulum

Abstract  

In this study, a robust intelligent backstepping tracking control (RIBTC) system that combines an adaptive output recurrent cerebellar model articulation controller (AORCMAC) with an H∞ control technique is proposed for wheeled inverted pendulums (WIPs) with unknown system dynamics and external disturbance. The AORCMAC is a nonlinear adaptive system with simple computation, good generalization capability, and a fast learning property. Therefore, the WIP can stand upright stably when it moves to a designed position. In the proposed control system, an AORCMAC is used to mimic an ideal backstepping control, and a robust H∞ controller is designed to attenuate the effect of the residual approximation errors and external disturbances with a desired attenuation level. Moreover, all the adaptation laws of the RIBTC system are derived based on Lyapunov stability analysis, the Taylor linearization technique, and H∞ control theory, so that the stability of the closed-loop system and the H∞ tracking performance can be guaranteed. Simulation results show that the proposed control scheme is practical and effective for WIPs.

  • Content Type Journal Article
  • Category Original Paper
  • Pages 1-12
  • DOI 10.1007/s00500-011-0702-7
  • Authors
    • Chih-Hui Chiu, Department of Electrical Engineering, Yuan-Ze University, Chung-Li, Tao-Yuan, 320 Taiwan, ROC
    • Ya-Fu Peng, Department of Electrical Engineering, Ching-Yun University, Chung-Li, Tao-Yuan, 320 Taiwan, ROC
    • You-Wei Lin, Department of Electrical Engineering, Yuan-Ze University, Chung-Li, Tao-Yuan, 320 Taiwan, ROC