Validating criteria with imprecise data in the case of trapezoidal representations

Abstract  

We are interested in the issue of determining an alternative’s satisfaction to a criterion when the alternative’s associated
attribute value is imprecise. We introduce two approaches to the determination of criteria satisfaction in this uncertain
environment, one based on the idea of containment and the other on the idea of possibility. We are particularly interested
in the case in which the imprecise data is expressed in terms of a trapezoidal-type distribution. We provide an algorithmic solution to this problem that enables it to be efficiently implemented in a digital environment. A number of examples are provided to illustrate our algorithms.
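
As a rough illustration of the two ideas, the sketch below (a generic fuzzy-set reading of the abstract, not the authors' algorithm) evaluates a criterion of the form "attribute value at least t" against a trapezoidal datum (a, b, c, d): the possibility degree follows the usual sup-based definition, and the containment-style degree is rendered here as the corresponding necessity measure, which is an assumption on my part.

```python
# Sketch only: possibility- and necessity-style satisfaction of the crisp
# criterion "value >= t" by a trapezoidal fuzzy datum (a, b, c, d), where
# membership is 0 outside [a, d] and 1 on the plateau [b, c].
# Assumes a < b <= c < d. This is a generic reading, not the paper's method.

def possibility_at_least(t, a, b, c, d):
    """Pos(X >= t) = sup of the trapezoidal membership over [t, infinity)."""
    if t <= c:
        return 1.0
    if t >= d:
        return 0.0
    return (d - t) / (d - c)          # descending right shoulder

def necessity_at_least(t, a, b, c, d):
    """Nec(X >= t) = 1 - sup of the membership over (-infinity, t)."""
    if t <= a:
        return 1.0
    if t >= b:
        return 0.0
    return 1.0 - (t - a) / (b - a)    # complement of the left shoulder

if __name__ == "__main__":
    # Datum "roughly 40 to 60, surely between 30 and 70".
    print(possibility_at_least(65, 30, 40, 60, 70))   # 0.5
    print(necessity_at_least(35, 30, 40, 60, 70))     # 0.5
```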

  • Content Type Journal Article
  • Pages 1-12
  • DOI 10.1007/s00500-010-0569-z
  • Authors
    • Ronald R. Yager, Iona College Machine Intelligence Institute New Rochelle NY 10801 USA

Detecting anomalies from high-dimensional wireless network data streams: a case study

Abstract  

In this paper, we study the problem of anomaly detection in wireless network streams. We have developed a new technique, called
Stream Projected Outlier deTector (SPOT), to deal with the problem of anomaly detection from multi-dimensional or high-dimensional
data streams. We conduct a detailed case study of SPOT in this paper by deploying it for anomaly detection from a real-life
wireless network data stream. Since this data stream is unlabeled, a validation method is proposed to generate ground-truth results for performance evaluation in this case study. Extensive experiments are conducted and the
results demonstrate that SPOT is effective in detecting anomalies from wireless network data streams and outperforms existing
anomaly detection methods.
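
The abstract does not spell out SPOT's internals, but the general flavor of projected (subspace) outlier detection on a stream can be pictured with a toy sketch: maintain running cell counts per low-dimensional subspace and flag a point that lands in a sparsely populated cell of some subspace. The subspace size, cell width, warm-up, and density threshold below are illustrative assumptions, not SPOT's actual parameters.

```python
# Toy illustration of subspace ("projected") outlier detection on a stream.
# SPOT itself is more elaborate; the constants here are arbitrary choices.
from collections import defaultdict
from itertools import combinations

class ToyProjectedOutlierDetector:
    def __init__(self, dims, cell_width=1.0, min_density=3, subspace_size=2):
        self.subspaces = list(combinations(range(dims), subspace_size))
        self.cell_width = cell_width
        self.min_density = min_density
        # per-subspace histogram of how many points fell into each grid cell
        self.counts = {s: defaultdict(int) for s in self.subspaces}
        self.seen = 0

    def _cell(self, point, subspace):
        return tuple(int(point[d] // self.cell_width) for d in subspace)

    def update_and_score(self, point):
        """Return the subspaces in which this point looks sparse, then learn it."""
        sparse_in = []
        if self.seen >= 20:            # small warm-up before flagging anything
            for s in self.subspaces:
                if self.counts[s][self._cell(point, s)] < self.min_density:
                    sparse_in.append(s)
        for s in self.subspaces:       # update the summaries with the new point
            self.counts[s][self._cell(point, s)] += 1
        self.seen += 1
        return sparse_in
```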

  • Content Type Journal Article
  • Pages 1-21
  • DOI 10.1007/s00500-010-0575-1
  • Authors
    • Ji Zhang, University of Southern Queensland Toowoomba QLD Australia
    • Qigang Gao, Dalhousie University Halifax NS Canada
    • Hai Wang, Saint Mary’s University Halifax NS Canada
    • Hua Wang, University of Southern Queensland Toowoomba QLD Australia

Case study of inaccuracies in the granulation of decision trees

Abstract  

Cybernetics studies information processes in the context of interaction with physical systems. Because such information is sometimes vague and exhibits complex interactions, it can only be discerned using approximate representations. Machine learning provides solutions that create approximate models of information, and decision trees are one of its main components. However, decision trees are susceptible to information overload and can become overly complex when a large amount of data is fed into them. Granulation of decision trees remedies this problem by providing the essential structure of the decision tree, though this simplification can decrease its utility. To evaluate the relationship that exists between granulation and decision tree complexity, data uncertainty and
prediction accuracy, the deficiencies obtained by nursing homes during annual inspections were taken as a case study. Using
rough sets, three forms of granulation were performed: (1) attribute grouping, (2) removing insignificant attributes and (3)
removing uncertain records. Attribute grouping significantly reduces tree complexity without having any strong effect upon
data consistency and accuracy. On the other hand, removing insignificant features decreases data consistency and tree complexity, while increasing the prediction error. Finally, a decrease in the uncertainty of the dataset results in an increase in accuracy and has no impact on tree complexity.
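
For readers unfamiliar with the rough-set machinery behind step (2), attribute significance is commonly measured by how much the dependency degree (the fraction of records whose decision is fully determined by their condition-attribute values) drops when the attribute is removed. The sketch below uses this textbook definition only; the nursing-home data and the authors' exact procedure are not reproduced here.

```python
# Textbook rough-set dependency degree and attribute significance.
# rows: list of dicts mapping attribute name -> value; 'decision' is the label.
from collections import defaultdict

def dependency_degree(rows, condition_attrs, decision_attr="decision"):
    """Fraction of rows in the positive region: rows whose condition-attribute
    values co-occur with only one decision value."""
    decisions_per_block = defaultdict(set)
    for r in rows:
        key = tuple(r[a] for a in condition_attrs)
        decisions_per_block[key].add(r[decision_attr])
    consistent = sum(
        1 for r in rows
        if len(decisions_per_block[tuple(r[a] for a in condition_attrs)]) == 1)
    return consistent / len(rows)

def significance(rows, condition_attrs, attr, decision_attr="decision"):
    """Drop in dependency degree when `attr` is removed from the condition set."""
    reduced = [a for a in condition_attrs if a != attr]
    return (dependency_degree(rows, condition_attrs, decision_attr)
            - dependency_degree(rows, reduced, decision_attr))
```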

  • Content Type Journal Article
  • Pages 1-8
  • DOI 10.1007/s00500-010-0587-x
  • Authors
    • Salman Badr, University of Nottingham School of Computer Science, Faculty of Science Malaysia Campus 43500 Semenyih Malaysia
    • Andrzej Bargiela, University of Nottingham School of Computer Science, Faculty of Science Malaysia Campus 43500 Semenyih Malaysia

The sufficient and necessary condition for chance distribution of bifuzzy variable

Abstract  

Fuzzy sets and fuzzy variables have undergone several different extensions over time. One of them involved introducing a “bifuzzy variable” as a fuzzy element for describing more complete systems. The properties of the bifuzzy variable were obtained by introducing the concept of a “chance distribution”. In this paper, we present a sufficient and necessary condition for the chance distribution of a bifuzzy variable, and we give a constructive proof based on credibility theory for the sufficiency part.
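
Since the condition is formulated within credibility theory, it may help to recall the standard credibility measure on which that theory rests: for a fuzzy variable ξ with membership function μ, credibility averages possibility and necessity. (This is background, not the paper's new condition.)

```latex
\operatorname{Cr}\{\xi \in B\}
  \;=\;
  \frac{1}{2}\Bigl(\sup_{x \in B}\mu(x) \;+\; 1 \;-\; \sup_{x \notin B}\mu(x)\Bigr).
```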

  • Content Type Journal Article
  • Pages 1-5
  • DOI 10.1007/s00500-010-0567-1
  • Authors
    • Zhongfeng Qin, Beihang University School of Economics and Management Beijing 100191 China
    • Xiang Li, Beijing Jiaotong University The State Key Laboratory of Rail Traffic Control and Safety Beijing 100044 China

Using a genetic algorithm to design 3D solar panels

[Image: solar panel (photo: Oregon DOT on Flickr, CC BY 2.0)]

Several weeks ago MSNBC published an article Origami boosts solar panel productivity, which discusses the design of 3D solar panels. A genetic algorithm is used in the discussed work to find the optimal shape of the 3D solar panels. From the original article:

Assuming a roughly 1,075-square-foot area (100 square meters), flat solar panels would generate roughly 50 kilowatt-hours daily. In comparison, the best 3-D structures the researchers came up with (jagged clusters of 64 triangles) could harvest more than 60 kilowatt-hours daily if the devices were 6.5 feet high (2 meters) and up to 120 kilowatt-hours daily if the designs were roughly 33 feet high (10 meters).
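
Just to make the genetic-algorithm idea concrete, here is a minimal sketch of the kind of loop such a study might use. The real work would evaluate each candidate 3D panel shape by simulating sunlight over a day; the `daily_energy` fitness below is a placeholder, and the encoding (a flat vector of heights) is an assumption made purely for illustration.

```python
# Minimal genetic-algorithm skeleton for shape optimization.
# The fitness function is a placeholder; a real study would simulate
# sunlight over a candidate 3D panel geometry for a full day.
import random

GENES = 64            # e.g. one height per triangle vertex (illustrative)
POP, GENERATIONS, MUT_RATE = 50, 200, 0.05

def daily_energy(shape):
    # Placeholder fitness: rewards height variation a little (NOT physics).
    return sum(abs(shape[i] - shape[i - 1]) for i in range(1, GENES))

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(shape):
    return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
            for g in shape]

population = [[random.uniform(0, 2) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=daily_energy, reverse=True)
    parents = population[:POP // 2]                    # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=daily_energy)
print("best placeholder fitness:", daily_energy(best))
```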

Granular computing based on fuzzy similarity relations

Abstract  

Rough sets and fuzzy rough sets serve as important approaches to granular computing, but the granular structure of fuzzy rough
sets is not as clear as that of classical rough sets since lower and upper approximations in fuzzy rough sets are defined
in terms of membership functions, while lower and upper approximations in classical rough sets are defined in terms of union
of some basic granules. This limits further investigation of the existing fuzzy rough sets. To bring to light the innate granular
structure of fuzzy rough sets, we develop a theory of granular computing based on fuzzy relations in this paper. We propose
the concept of granular fuzzy sets based on fuzzy similarity relations, investigate the properties of the proposed granular
fuzzy sets using constructive and axiomatic approaches, and study the relationship between granular fuzzy sets and fuzzy relations.
We then use the granular fuzzy sets to describe the granular structures of lower and upper approximations of a fuzzy set within
the framework of granular computing. Finally, we characterize the structure of attribute reduction in terms of granular fuzzy
sets, and two examples are also employed to illustrate our idea in this paper.
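
For context, the membership-function-based lower and upper approximations alluded to above are usually written as follows for a fuzzy similarity relation R and a fuzzy set A on a universe U (this is the standard formulation; the paper's granular fuzzy sets re-express these approximations in terms of basic granules):

```latex
(\underline{R}A)(x) \;=\; \inf_{y \in U}\max\bigl(1 - R(x,y),\, A(y)\bigr),
\qquad
(\overline{R}A)(x) \;=\; \sup_{y \in U}\min\bigl(R(x,y),\, A(y)\bigr).
```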

  • Content Type Journal Article
  • Pages 1-12
  • DOI 10.1007/s00500-010-0583-1
  • Authors
    • Chen Degang, North China Electric Power University Department of Mathematics and Physics Beijing 102206 People’s Republic of China
    • Yang Yongping, North China Electric Power University Beijing Key Laboratory of Safety and Clean Utilization of Energy Beijing 102206 China
    • Wang Hui, University of Ulster Faculty of Engineering, School of Computing and Mathematics Jordanstown Northern Ireland, UK

High speed detection of retinal blood vessels in fundus image using phase congruency

Abstract  

Detection of blood vessels in a retinal fundus image is the preliminary step in diagnosing several retinal diseases. Several methods exist to automatically detect blood vessels from retinal images with the aid of different computational techniques. However, all these methods require lengthy processing times. The method proposed here acquires binary vessel maps from an RGB retinal fundus image in almost real time. Initially, the phase congruency of the retinal image is generated, which is a soft classification
of blood vessels. Phase congruency is a dimensionless quantity that is invariant to changes in image brightness or contrast;
hence, it provides an absolute measure of the significance of feature points. This experiment acquires phase congruency of
an image using Log-Gabor wavelets. To acquire a binary segmentation, thresholds are applied to the phase congruency image.
The process of determining the best threshold value is based on area under the relative operating characteristic (ROC) curve.
The proposed method is able to detect blood vessels in a retinal fundus image within 10 s on a PC with (accuracy, area under
ROC curve) = (0.91, 0.92), and (0.92, 0.94) for the STARE and the DRIVE databases, respectively.
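
The threshold-selection step can be pictured with a short sketch: given a phase-congruency map and a ground-truth vessel mask, sweep thresholds along the ROC curve and keep the one that maximizes, say, Youden's J statistic. The `phase_congruency` function is assumed to exist (the paper computes it with Log-Gabor wavelets), and the selection criterion shown is one reasonable choice, not necessarily the authors'.

```python
# Sketch: choose a binarization threshold for a phase-congruency map
# using the ROC curve against a ground-truth vessel mask.
import numpy as np
from sklearn.metrics import roc_curve, auc

def best_threshold(pc_map, gt_mask):
    """pc_map: 2-D float array in [0, 1]; gt_mask: 2-D boolean vessel mask."""
    fpr, tpr, thresholds = roc_curve(gt_mask.ravel().astype(int), pc_map.ravel())
    print("area under ROC curve:", auc(fpr, tpr))
    j = tpr - fpr                      # Youden's J statistic per threshold
    return thresholds[np.argmax(j)]

# pc = phase_congruency(rgb_image)     # assumed available (Log-Gabor based)
# vessels = pc >= best_threshold(pc, ground_truth_mask)
```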

  • Content Type Journal Article
  • Pages 1-14
  • DOI 10.1007/s00500-010-0574-2
  • Authors
    • M. Ashraful Amin, Independent University Bangladesh School of Engineering and Computer Science Dhaka Bangladesh
    • Hong Yan, City University of Hong Kong Department of Electronic Engineering Kowloon Hong Kong

CHSMST: a clustering algorithm based on hyper surface and minimum spanning tree

Abstract  

As data mining has attracted a significant amount of research attention, many clustering algorithms have been proposed in the past decades. However, most existing clustering methods have high computational cost or are not suitable for discovering clusters with non-convex shapes. In this paper, an efficient clustering algorithm, CHSMST, is proposed, which combines clustering based on hyper surface (CHS) with a minimum spanning tree. In the first step, CHSMST applies CHS to obtain initial clusters immediately. Thereafter, a minimum spanning tree is introduced to handle locally dense data, which is hard for CHS to deal with. The experiments show that CHSMST can discover clusters with arbitrary shape. Moreover, CHSMST is insensitive to the order of input samples, and the run time of the algorithm increases moderately as the scale of the dataset becomes large.
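
The minimum-spanning-tree half of the story is easy to sketch: build the MST of pairwise distances and cut its longest edges, so that the remaining connected components become clusters. The CHS initialization is not reproduced here, and the edge-cutting rule below (drop the k−1 longest edges to get k clusters) is only one simple variant, not necessarily the one used in CHSMST.

```python
# Sketch: MST-based clustering - cut the longest MST edges, take components.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_clusters(points, k):
    """Split `points` (n x d array) into k clusters by removing the
    k-1 longest edges of the Euclidean minimum spanning tree."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist).toarray()
    if k > 1:
        edges = np.argwhere(mst > 0)            # MST edges, row-major order
        order = np.argsort(mst[mst > 0])        # same order as `edges`
        for i, j in edges[order[-(k - 1):]]:    # k-1 heaviest edges
            mst[i, j] = 0.0
    _, labels = connected_components(mst, directed=False)
    return labels
```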

  • Content Type Journal Article
  • Pages 1-7
  • DOI 10.1007/s00500-010-0585-z
  • Authors
    • Qing He, Chinese Academy of Sciences The Key Laboratory of Intelligent Information Processing, Institute of Computing Technology 100190 Beijing China
    • Weizhong Zhao, Chinese Academy of Sciences The Key Laboratory of Intelligent Information Processing, Institute of Computing Technology 100190 Beijing China
    • Zhongzhi Shi, Chinese Academy of Sciences The Key Laboratory of Intelligent Information Processing, Institute of Computing Technology 100190 Beijing China

Learning and backtracking in non-preemptive scheduling of tasks under timing constraints

Abstract  

We propose two novel heuristic search techniques to address the problem of scheduling tasks under hard timing constraints
on a single processor architecture. The underlying problem is NP-hard in the strong sense and it is a fundamental challenge
in feedback-control theory and automated cybernetics. The proposed techniques are learning-based approaches that require much less memory. A partial feasible schedule is maintained and extended over repeated problem-solving trials; previously assigned priorities are refined according to the information gained about the problem, steering the search toward a complete feasible schedule if one exists. First, we present the learning in hard-real-time with single learning (LHRTS-SL) algorithm, in which a single learning function is utilized; we then discuss its drawback and propose the LHRTS with double learning algorithm, in which a second learning function is integrated to cope with the LHRTS-SL drawback. Experimental results show the efficiency
of the proposed techniques in terms of success ratio when used to schedule randomly generated problem instances.
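
A drastically simplified rendering of the "refine priorities over repeated trials" idea is sketched below: try to build a non-preemptive schedule by priority order and, whenever a task misses its deadline, boost its priority and retry. The task fields, the boost rule, and the trial limit are all illustrative assumptions; the actual LHRTS learning functions are more refined.

```python
# Toy version of priority refinement over repeated scheduling trials.
# Each task: (name, release, duration, deadline). Non-preemptive, one processor.

def try_schedule(tasks, priority):
    """Greedy non-preemptive dispatch by priority among released tasks.
    Returns the name of the first task to miss its deadline, or None."""
    time, pending = 0, list(tasks)
    while pending:
        ready = ([t for t in pending if t[1] <= time]
                 or [min(pending, key=lambda t: t[1])])   # else wait for earliest
        name, release, duration, deadline = max(ready, key=lambda t: priority[t[0]])
        time = max(time, release) + duration
        if time > deadline:
            return name
        pending.remove((name, release, duration, deadline))
    return None

def learn_priorities(tasks, trials=100):
    priority = {t[0]: 0 for t in tasks}
    for _ in range(trials):
        missed = try_schedule(tasks, priority)
        if missed is None:
            return priority                    # complete feasible schedule found
        priority[missed] += 1                  # "learning": boost the late task
    return None
```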

  • Content Type Journal Article
  • Pages 1-16
  • DOI 10.1007/s00500-010-0582-2
  • Authors
    • Yacine Laalaoui, National Computer Science School 16309 Oued-Smar Algiers Algeria
    • Habiba Drias, National Computer Science School 16309 Oued-Smar Algiers Algeria

Commutative bounded integral residuated orthomodular lattices are Boolean algebras

Abstract  

We show that a commutative bounded integral orthomodular lattice is residuated iff it is a Boolean algebra. This result is
a consequence of (Ward, Dilworth in Trans Am Math Soc 45, 336–354, 1939, Theorem 7.31); however, our proof is independent and uses other instruments.
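
The residuation property in question is the usual adjointness condition between the product and its residuum; in a commutative bounded integral lattice it reads (standard definition, quoted here only for orientation):

```latex
x \odot y \le z
\quad\Longleftrightarrow\quad
x \le y \rightarrow z
\qquad \text{for all } x, y, z.
```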

  • Content Type Journal Article
  • Pages 1-2
  • DOI 10.1007/s00500-010-0572-4
  • Authors
    • Josef Tkadlec, Czech Technical University 166 27 Praha Czech Republic
    • Esko Turunen, Tampere University of Technology P.O. Box 553 33101 Tampere Finland