A Novel Particle Swarm Optimization Approach for Patient Clustering From Emergency Departments

In this paper, a novel particle swarm optimization (PSO) algorithm is proposed to improve the accuracy of traditional clustering approaches, with application to analyzing real-time patient attendance data from an accident & emergency (A&E) department in a local U.K. hospital. In the proposed randomly occurring distributedly delayed PSO (RODDPSO) algorithm, the evolutionary state is determined by evaluating the evolutionary factor in each iteration, based on which the velocity updating model switches from one mode to another. To reduce the possibility of getting trapped in local optima and to expand the search space, randomly occurring time delays that reflect the history of previous personal best and global best particles are introduced into the velocity updating model in a distributed manner. Eight well-known benchmark functions are employed to evaluate the proposed RODDPSO algorithm, which is shown via extensive comparisons to outperform several currently popular PSO algorithms. To further illustrate its application potential, the RODDPSO algorithm is successfully applied to the patient clustering problem for data analysis at a local A&E department in West London. Experimental results demonstrate that the RODDPSO-based clustering method is superior to two other well-known clustering algorithms.
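To make the delayed velocity update concrete, the following sketch shows one way randomly occurring delays on the personal and global bests can enter a PSO update. It is an illustrative simplification, not the authors' exact RODDPSO formulation: the delay mechanism here (picking a uniformly random historical best with probability `p_delay`) and all parameter names are assumptions.

```python
import random

def delayed_pso_step(x, v, pbest_hist, gbest_hist,
                     w=0.7, c1=2.0, c2=2.0, p_delay=0.3):
    """One velocity/position update for a single particle.

    pbest_hist / gbest_hist are lists of past personal / global best
    positions (most recent last).  With probability p_delay, a past best
    rather than the current one drives each cognitive/social term -- a
    simplified stand-in for the randomly occurring distributed delays.
    """
    dim = len(x)
    # Per term, use the current best or a random historical one.
    pb = pbest_hist[-1] if random.random() > p_delay else random.choice(pbest_hist)
    gb = gbest_hist[-1] if random.random() > p_delay else random.choice(gbest_hist)
    new_v = [w * v[d]
             + c1 * random.random() * (pb[d] - x[d])
             + c2 * random.random() * (gb[d] - x[d])
             for d in range(dim)]
    new_x = [x[d] + new_v[d] for d in range(dim)]
    return new_x, new_v
```

Retaining the historical bests lets a particle be pulled toward older attractors, which is what widens the effective search region and helps escape local optima.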

ACO-A*: Ant Colony Optimization Plus A* for 3-D Traveling in Environments With Dense Obstacles

Path planning is one of the most important problems in the development of autonomous underwater vehicles (AUVs). In some common AUV missions, e.g., wreckage search and rescue, an AUV is often required to traverse multiple targets in a complex environment with dense obstacles. In such cases, the AUV path planning problem becomes even more challenging. To address this problem, this paper develops a two-layer algorithm, namely ACO-A*, by combining ant colony optimization (ACO) with A* search. Once a mission with a set of arbitrary targets is assigned, ACO is responsible for determining the traveling order of the targets. However, prior to running ACO, a cost graph indicating the necessary traveling costs among targets must be quickly established to facilitate traveling-order evaluation. For this purpose, a coarse-grained modeling with a representative-based estimation (RBE) strategy is proposed. Following the order obtained by ACO, the targets are traversed one by one, and the pairwise path planning to reach each target can be performed while the vehicle is driving. To deal with the dense obstacles, A* is adopted to plan paths based on a fine-grained modeling, and an admissible heuristic function is designed for A* to guarantee its optimality. Experiments on both synthetic and realistic scenarios have been designed to validate the efficiency of the proposed ACO-A*, as well as the effectiveness of RBE and the necessity of A*.
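The lower layer's key property is the admissible heuristic, which guarantees optimal pairwise paths. A minimal sketch of A* on a 2-D occupancy grid follows (the paper's setting is 3-D and its heuristic is its own design; this grid, the Euclidean heuristic, and unit step costs are illustrative assumptions):

```python
import heapq
import math

def a_star(grid, start, goal):
    """A* on a 2-D occupancy grid (0 = free, 1 = obstacle).

    The straight-line (Euclidean) distance never overestimates the true
    remaining cost under unit 4-connected moves, so the heuristic is
    admissible and the returned path is optimal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.dist(p, goal)
    open_set = [(h(start), 0.0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        f, g, node, parent = heapq.heappop(open_set)
        if node in came_from:        # already expanded with a better g
            continue
        came_from[node] = parent
        if node == goal:             # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0):
                ng = g + 1.0
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, nb, node))
    return None                      # goal unreachable
```

In the two-layer scheme, a routine like this would be invoked once per consecutive target pair in the order ACO produces, while the upper layer's cost graph is filled with cheaper coarse-grained estimates.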

Structural Risk Minimization-Driven Genetic Programming for Enhancing Generalization in Symbolic Regression

Generalization ability, which reflects the prediction ability of a learned model, is an important property in genetic programming (GP) for symbolic regression. Structural risk minimization (SRM) is a framework providing a reliable estimation of the generalization performance of prediction models. Introducing this framework into GP has the potential to drive the evolutionary process toward models with good generalization performance. However, doing so is challenging because the Vapnik–Chervonenkis (VC) dimension of nonlinear models is difficult to obtain. To address this difficulty, this paper proposes an SRM-driven GP approach, which for the first time uses an experimental method (instead of theoretical estimation) to measure the VC dimension of a mixture of linear and nonlinear regression models. The experimental method has been conducted using uniform and nonuniform settings. The results show that our method has impressive generalization gains over standard GP and GP with the 0.632 bootstrap, and that the proposed method using the nonuniform setting improves further upon its counterpart using the uniform setting. Further analyses reveal that the proposed method can evolve more compact models, and that the behavioral difference between these compact models and the target models is much smaller than for their counterparts evolved by the other GP methods.
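Once a VC dimension estimate h is available, SRM ranks models by their training error inflated by a capacity penalty. The sketch below uses a practical SRM bound of the Cherkassky–Mulier style; the exact penalty formula and the constant `a1` are assumptions recalled from the SRM literature, not taken from this paper.

```python
import math

def srm_bound(emp_risk, h, n, a1=1.0):
    """Practical SRM generalization bound (Cherkassky-Mulier style).

    emp_risk : empirical (training) risk, e.g. mean squared error
    h        : (estimated) VC dimension of the model class
    n        : number of training samples

    Returns emp_risk inflated by a penalty that grows with h/n; when the
    penalty term reaches 1 the bound is vacuous and +inf is returned.
    """
    p = h / n
    penalty = 1.0 - math.sqrt(a1 * (p - p * math.log(p)
                                    + math.log(n) / (2 * n)))
    return emp_risk / penalty if penalty > 0 else float("inf")
```

Used as a GP fitness function, this bound penalizes models whose measured capacity h is large relative to the sample size, which is the mechanism that steers evolution toward compact, better-generalizing expressions.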

Quality Diversity Through Surprise

Quality diversity (QD) is a recent family of evolutionary search algorithms that focus on finding several well-performing (quality) yet different (diversity) solutions, with the aim of maintaining an appropriate balance between divergence and convergence during search. While QD has already delivered promising results in complex problems, the capacity of divergent search variants for QD remains largely unexplored. Inspired by the notion of surprise as an effective driver of divergent search, and by its orthogonal nature to novelty, this paper investigates the impact of the former on QD performance. For that purpose, we introduce three new QD algorithms which employ surprise as a diversity measure, either on its own or combined with novelty, and compare their performance against novelty search with local competition, the state-of-the-art QD algorithm. The algorithms are tested in a robot navigation task across 60 highly deceptive mazes. Our findings suggest that allowing surprise and novelty to operate synergistically for divergence, in combination with local competition, leads to QD algorithms of significantly higher efficiency, speed, and robustness.
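The two diversity measures can be contrasted in a few lines. In this sketch, novelty is the usual mean distance to the k nearest archived behaviors, while surprise is the deviation from a behavior predicted by a simple model; the linear extrapolation of the last two generations' mean behaviors used here is an illustrative stand-in for whatever predictive model a surprise-search implementation actually fits.

```python
import math

def novelty(b, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors in the archive."""
    dists = sorted(math.dist(b, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def surprise(b, prev_means):
    """Surprise = distance between the actual behavior and a prediction
    linearly extrapolated from the last two generations' mean behaviors
    (a simple stand-in for the predictive model used in surprise search)."""
    m1, m2 = prev_means[-2], prev_means[-1]
    predicted = tuple(2 * y - x for x, y in zip(m1, m2))
    return math.dist(b, predicted)
```

The orthogonality the abstract mentions is visible here: novelty rewards being far from where the search *has been*, surprise rewards being far from where the search was *expected to go*, so the two can be combined without being redundant.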

A Bio-Inspired Self-Learning Coevolutionary Dynamic Multiobjective Optimization Algorithm for Internet of Things Services

The ultimate goal of the Internet of Things (IoT) is to provide ubiquitous services. To achieve this goal, many challenges remain to be addressed. Inspired by the cooperative mechanisms among multiple systems in the human body, this paper proposes a bio-inspired self-learning coevolutionary algorithm (BSCA) for dynamic multiobjective optimization of IoT services, aiming to reduce energy consumption and service time. BSCA consists of three layers. The first layer is composed of multiple subpopulations that evolve cooperatively to obtain diverse Pareto fronts. Based on the solutions obtained by the first layer, the second layer aims to further increase the diversity of solutions. The third layer refines the solutions found in the second layer by adopting an adaptive gradient refinement search strategy and a dynamic optimization method to cope with changing concurrent multiple service requests, thereby effectively improving the accuracy of the solutions. Experiments on agricultural IoT services in the presence of dynamic requests under different distributions are performed based on two service-providing strategies, i.e., single service and collaborative service. The simulation results demonstrate that BSCA performs better than four existing algorithms on IoT services, in particular for high-dimensional problems.
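The third layer's gradient refinement can be illustrated with a generic local step: estimate the gradient by finite differences and move each coordinate downhill. This is a minimal sketch of the general technique only; the paper's strategy is *adaptive* and its step-size control is not reproduced here.

```python
def gradient_refine(f, x, step=1e-2, eps=1e-6):
    """One local refinement step for minimizing f: move along a
    finite-difference estimate of the gradient (forward differences)."""
    fx = f(x)
    grad = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        grad.append((f(xp) - fx) / eps)   # d f / d x_j, approximately
    return [xj - step * gj for xj, gj in zip(x, grad)]
```

In a layered scheme like BSCA's, a step of this kind would only be applied to the already-promising solutions passed up from the diversity layer, keeping the extra function evaluations affordable.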

Coevolutionary Particle Swarm Optimization With Bottleneck Objective Learning Strategy for Many-Objective Optimization

The application of multiobjective evolutionary algorithms to many-objective optimization problems often faces challenges in terms of diversity and convergence. On the one hand, with a limited population size, it is difficult for an algorithm to cover different parts of the whole Pareto front (PF) in a large objective space; the algorithm tends to concentrate only on limited areas. On the other hand, as the number of objectives increases, solutions easily take poor values on some objectives, which act as bottleneck objectives that restrict the solutions' convergence to the PF. Thus, we propose a coevolutionary particle swarm optimization algorithm with a bottleneck objective learning (BOL) strategy for many-objective optimization. In the proposed algorithm, multiple swarms coevolve in a distributed fashion to maintain the diversity needed to approximate different parts of the whole PF, and a novel BOL strategy is developed to improve convergence on all objectives. In addition, we develop a solution reproduction procedure with both an elitist learning strategy (ELS) and a juncture learning strategy (JLS) to improve the quality of archived solutions. The ELS helps the algorithm jump out of local PFs, and the JLS helps it reach the areas of the PF that are easily missed by the swarms. The performance of the proposed algorithm is evaluated using two widely used test suites with different numbers of objectives. Experimental results show that the proposed algorithm compares favorably with six other state-of-the-art algorithms on many-objective optimization.
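A simplified reading of the bottleneck idea can be sketched directly: normalize the objective values over the current solution set and, for each solution, flag the worst normalized objective as the one holding it back. The min-max normalization and "worst objective" rule here are assumptions for illustration, not the paper's exact BOL strategy.

```python
def bottleneck_objectives(F):
    """For each solution, return the index of its worst normalized
    objective (assuming minimization).

    F is a list of objective vectors.  Each objective is min-max
    normalized over the set; the largest normalized value marks the
    'bottleneck' objective restricting that solution's convergence.
    """
    m = len(F[0])
    lo = [min(f[j] for f in F) for j in range(m)]
    hi = [max(f[j] for f in F) for j in range(m)]
    span = [hi[j] - lo[j] or 1.0 for j in range(m)]   # avoid div by zero
    def norm(f):
        return [(f[j] - lo[j]) / span[j] for j in range(m)]
    return [max(range(m), key=lambda j, nf=norm(f): nf[j]) for f in F]
```

A learning strategy built on such a flag can then direct extra search effort (e.g., dedicated exemplars or learning terms) at each solution's bottleneck objective rather than treating all objectives uniformly.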

An Effective Ensemble Framework for Multiobjective Optimization

This paper proposes an effective ensemble framework (EF) for tackling multiobjective optimization problems by combining the advantages of various evolutionary operators and selection criteria that are run on multiple populations. A simple ensemble algorithm is realized as a prototype to demonstrate the proposed framework. Two mechanisms, namely competition and cooperation, are employed to drive the running of the ensembles. Competition is realized by adaptively running different evolutionary operators on multiple populations: the operator that better fits the problem's characteristics receives more computational resources, being rewarded through a decomposition-based credit assignment strategy. Cooperation is achieved by a cooperative selection of the offspring generated by the different populations. In this way, promising offspring from one population have a chance to migrate into the other populations to enhance their convergence or diversity. Moreover, the population update information is further exploited to build an evolutionary potentiality model, which is used to guide the evolutionary process. Experimental results show the superior performance of the proposed ensemble algorithms on most of a set of 31 test problems, corroborating the advantages of the EF.
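The competition mechanism amounts to adaptive operator selection: operators accumulate credit and are then chosen with probability increasing in that credit. The softmax selection rule below is a generic stand-in for that step; the operator names and the temperature parameter are illustrative, and the actual credit values would come from the decomposition-based assignment the abstract describes.

```python
import math
import random

def select_operator(credits, temperature=1.0):
    """Pick an evolutionary operator with probability proportional to the
    softmax of its accumulated credit.

    credits : dict mapping operator name -> accumulated credit
    Higher credit (relative to temperature) means a proportionally
    higher chance of receiving the next reproduction slot.
    """
    ops = list(credits)
    weights = [math.exp(credits[op] / temperature) for op in ops]
    return random.choices(ops, weights=weights, k=1)[0]
```

Because low-credit operators keep a nonzero selection probability, the ensemble can recover if the problem's characteristics change mid-run and a previously unrewarded operator becomes the better fit.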
