947 results for "Infeasible solution space search"


Relevance: 100.00%

Abstract:

The Capacitated Arc Routing Problem (CARP) is a well-known NP-hard combinatorial optimization problem where, given an undirected graph, the objective is to find a minimum-cost set of tours servicing a subset of required edges under vehicle capacity constraints. There are numerous applications for the CARP, such as street sweeping, garbage collection, mail delivery, school bus routing, and meter reading. A Greedy Randomized Adaptive Search Procedure (GRASP) with Path-Relinking (PR) is proposed and compared with other successful CARP metaheuristics. Some features of this GRASP with PR are (i) reactive parameter tuning, where the parameter value is stochastically selected, biased in favor of those values which have historically produced the best solutions on average; (ii) a statistical filter, which discards initial solutions if they are unlikely to improve the incumbent best solution; (iii) infeasible local search, where high-quality though infeasible solutions are used to explore the boundary between the feasible and infeasible regions of the solution space; (iv) evolutionary PR, a recent trend where the pool of elite solutions is progressively improved by successive relinking of pairs of elite solutions. Computational tests were conducted using a set of 81 instances, and the results reveal that the GRASP is very competitive, achieving the best overall deviation from lower bounds and the highest number of best solutions found. © 2011 Elsevier Ltd. All rights reserved.
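
As an illustration of feature (i), the following minimal sketch shows how reactive parameter tuning can be implemented: a parameter value is drawn with probability biased toward the values whose past solutions were best on average. The class, its update rule, and the inverse-mean weighting are illustrative assumptions, not the authors' implementation.

```python
import random

class ReactiveParameter:
    """Reactive tuning sketch: stochastically select a parameter value
    (e.g. the GRASP greediness alpha), biased toward values whose past
    solutions had the lowest average cost."""

    def __init__(self, candidate_values):
        self.values = list(candidate_values)
        self.sum_cost = {v: 0.0 for v in self.values}
        self.count = {v: 0 for v in self.values}

    def sample(self):
        # Try each value at least once before biasing the choice.
        untried = [v for v in self.values if self.count[v] == 0]
        if untried:
            return random.choice(untried)
        # Lower average cost => higher selection probability
        # (assumes strictly positive costs, as for CARP tours).
        weights = [self.count[v] / self.sum_cost[v] for v in self.values]
        return random.choices(self.values, weights=weights, k=1)[0]

    def record(self, value, solution_cost):
        # Update the history after each GRASP iteration.
        self.sum_cost[value] += solution_cost
        self.count[value] += 1
```

A GRASP loop would call sample() before each construction phase and record() after the resulting solution has been evaluated.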

Relevance: 100.00%

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, when we compare our NGP solution with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
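
As a hedged sketch of what a deterministic CNP-style rule could look like (the thesis algorithm is not reproduced in the abstract; the utilisation band and all names below are illustrative assumptions), the engine is resized whenever the measured load per processor leaves a target band:

```python
import math

def target_processor_count(queries_per_sec, capacity_per_proc,
                           current_procs, low=0.4, high=0.8):
    """Return a processor count keeping per-processor utilisation
    within [low, high] of capacity; a sketch of a deterministic
    CNP-style rule, not the thesis algorithm."""
    utilisation = queries_per_sec / (capacity_per_proc * current_procs)
    if low <= utilisation <= high:
        return current_procs  # load is inside the target band
    # Otherwise pick the smallest count that keeps utilisation <= high.
    return max(1, math.ceil(queries_per_sec / (capacity_per_proc * high)))
```

Once the count changes, NGP decides which index groups go to which processors, and ROP schedules the actual moves.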

Relevance: 100.00%

Abstract:

In this paper we propose a generalisation of the k-nearest neighbour (k-NN) retrieval method based on an error function using distance metrics in the solution and problem space. It is an interpolative method which is proposed to be effective for sparse case bases. The method applies equally to nominal, continuous and mixed domains, and does not depend upon an embedding n-dimensional space. In continuous Euclidean problem domains, the method is shown to be a generalisation of Shepard's interpolation method. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The performance of the retrieval method is examined with reference to the Iris classification problem, and to a simulated sparse nominal value test problem. The introduction of a solution-space metric is shown to outperform conventional nearest neighbour methods on sparse case bases.
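
A minimal sketch of the GSNN idea as described here (the exact error function and weighting scheme are assumptions for illustration): the prediction is the candidate solution that minimises a distance-weighted error over the k nearest cases, which reduces to Shepard's inverse-distance interpolation when the solution space is Euclidean.

```python
def gsnn_predict(query, cases, k, d_problem, d_solution, candidates, p=2):
    """Generalised Shepard Nearest Neighbour, sketched from the
    abstract. `cases` is a list of (problem, solution) pairs and
    `candidates` enumerates the possible solutions (e.g. class
    labels for a nominal solution domain)."""
    # k nearest cases in problem space.
    nearest = sorted(cases, key=lambda c: d_problem(query, c[0]))[:k]
    # Inverse-distance weights, as in Shepard's method.
    weights = [1.0 / (d_problem(query, x) ** p + 1e-12) for x, _ in nearest]

    def error(s):
        # Distance-weighted error of candidate s in the solution space.
        return sum(w * d_solution(s, si) ** 2
                   for w, (_, si) in zip(weights, nearest))

    return min(candidates, key=error)
```

With a Euclidean solution metric and continuous candidates this recovers the inverse-distance weighted mean; with a 0/1 nominal metric it interpolates over class labels.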

Relevance: 100.00%

Abstract:

In this paper we propose a case base reduction technique which uses a metric defined on the solution space. The technique utilises the Generalised Shepard Nearest Neighbour (GSNN) algorithm to estimate nominal or real-valued solutions in case bases with solution space metrics. An overview of GSNN and a generalised reduction technique, which subsumes some existing decremental methods, such as the Shrink algorithm, are presented. The reduction technique is given for case bases in terms of a measure of the importance of each case to the predictive power of the case base. A trial test is performed on two case bases of different kinds, with several metrics proposed in the solution space. The tests show that GSNN can outperform standard nearest neighbour methods on this set. Further test results show that a case-removal order based on a GSNN error function can produce a sparse case base with good predictive power.
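
The removal order itself is not given in the abstract; as one hedged illustration of a GSNN-guided decremental scheme, the sketch below repeatedly deletes the case whose removal least degrades predictive power, measured by a caller-supplied error function (e.g. leave-one-out GSNN error over the remaining cases):

```python
def reduce_case_base(cases, loo_error, target_size):
    """Decremental case-base reduction sketch (cf. the Shrink family):
    delete, one case at a time, the case whose removal least increases
    `loo_error`, an estimate of the case base's predictive power.
    An illustration, not the paper's exact ordering."""
    cases = list(cases)
    while len(cases) > target_size:
        victim = min(range(len(cases)),
                     key=lambda i: loo_error(cases[:i] + cases[i + 1:]))
        del cases[victim]
    return cases
```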

Relevance: 100.00%

Abstract:

Previous papers have noted the difficulty in obtaining neural models which are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show that the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex. © 2006 Elsevier B.V. All rights reserved.
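
To make the two identification structures concrete, here is a minimal sketch (illustrative, not the paper's code) for a first-order model f(y_prev, u_prev): the series-parallel structure feeds back measured outputs during training, while the parallel structure feeds back the model's own predictions, so errors accumulate as they would in simulation.

```python
def series_parallel_errors(f, u, y):
    """One-step-ahead (series-parallel) residuals: the model sees the
    *measured* previous output y[t-1]."""
    return [y[t] - f(y[t - 1], u[t - 1]) for t in range(1, len(y))]

def parallel_errors(f, u, y):
    """Simulation (parallel) residuals: the model sees its *own*
    previous prediction, so errors propagate through time."""
    y_hat = [y[0]]  # initialise from measured data
    for t in range(1, len(y)):
        y_hat.append(f(y_hat[t - 1], u[t - 1]))
    return [y[t] - y_hat[t] for t in range(1, len(y))]
```

Minimising the parallel residuals targets simulation accuracy directly but yields a harder error surface, which is what motivates the combined series-parallel/parallel scheme.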

Relevance: 100.00%

Abstract:

This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assist (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.

Relevance: 100.00%

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, how they are oriented, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
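
As a toy illustration of enumerating every consistent reconstruction (the paper's first-order-logic formulation is not reproduced here, and the orthographic two-view visibility model below is a deliberately crude assumption), one can brute-force all colourings of a small voxel grid and keep those that reproduce the observed views:

```python
from itertools import product

EMPTY = "."

def first_visible(ray):
    """Colour of the first non-empty voxel along a viewing ray."""
    for v in ray:
        if v != EMPTY:
            return v
    return None  # background

def consistent_scenes(left_view, top_view, colours):
    """Enumerate every colouring of an n x m voxel grid whose
    orthographic left and top projections match the observed views;
    a toy stand-in for the paper's logic-based formulation."""
    n, m = len(left_view), len(top_view)
    for flat in product([EMPTY] + colours, repeat=n * m):
        grid = [flat[i * m:(i + 1) * m] for i in range(n)]
        if (all(first_visible(grid[i]) == left_view[i] for i in range(n))
                and all(first_visible([grid[i][j] for i in range(n)])
                        == top_view[j] for j in range(m))):
            yield grid

# Occlusion already makes a 2 x 2 grid ambiguous: five distinct
# scenes reproduce these two views.
print(len(list(consistent_scenes(["r", "r"], ["r", "g"], ["r", "g"]))))
```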

Relevance: 100.00%

Abstract:

This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
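
The abstract does not spell out the incremental adjustment; as one hedged illustration of the minimal-movement idea behind it (the strategy and all names are assumptions, not the paper's algorithm), the sketch below reassigns index entries to a new number of machines while leaving entries on surviving machines in place wherever possible:

```python
from collections import defaultdict

def regroup(assignment, new_n):
    """Minimal-movement regrouping sketch: entries whose machine still
    exists stay put; machines above the per-machine quota shed their
    excess, and underloaded machines absorb the displaced entries.
    `assignment` maps entry -> machine id. Not the paper's algorithm."""
    groups = defaultdict(list)
    for entry, machine in assignment.items():
        groups[machine if machine < new_n else -1].append(entry)
    displaced = groups.pop(-1, [])          # machines that disappeared
    quota = -(-len(assignment) // new_n)    # ceiling division
    for m in range(new_n):                  # shed excess entries
        while len(groups[m]) > quota:
            displaced.append(groups[m].pop())
    for m in range(new_n):                  # absorb displaced entries
        while len(groups[m]) < quota and displaced:
            groups[m].append(displaced.pop())
    return {e: m for m in range(new_n) for e in groups[m]}
```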

Relevance: 100.00%

Abstract:

Doctoral thesis, Mathematics, Operational Research, Universidade do Algarve, 2009

Relevance: 100.00%

Abstract:

In this paper, we propose an algorithm for an upgrading arc median shortest path problem for a transportation network. The problem is to identify a set of nondominated paths that minimizes both the upgrading cost and the overall travel time of the entire network. These two objectives are realistic for transportation network problems, but are of a conflicting and noncompensatory nature. In addition, unlike the upgrading cost, which is the sum of the arc costs on the path, the overall travel time of the entire network cannot be expressed as a sum of arc travel times on the path. The proposed solution approach to the problem is based on heuristic labeling and exhaustive search techniques, in criteria space and solution space, respectively. The former labels each node in terms of upgrading cost and deletes cyclic and infeasible paths in criteria space. The latter calculates the overall travel time of the entire network for each feasible path, deletes dominated paths on the basis of the objective vector, and identifies a set of Pareto-optimal paths in the solution space. A computational study, using two small-scale transportation networks, has demonstrated that the algorithm proposed herein is able to efficiently identify a set of nondominated median shortest paths, based on two conflicting and noncompensatory objectives.
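
To illustrate the dominance step used to delete dominated paths in the solution space (a generic sketch of Pareto filtering; the paper's labeling procedure itself is more involved), a path survives only if no other path is at least as good in both objectives and strictly better in one:

```python
def pareto_filter(paths):
    """Keep nondominated paths, where each element of `paths` is a
    (name, upgrading_cost, travel_time) tuple. A generic sketch of
    the dominance test, not the paper's full algorithm."""
    def dominates(a, b):
        # a dominates b: no worse in both objectives, better in one.
        return (a[1] <= b[1] and a[2] <= b[2]
                and (a[1] < b[1] or a[2] < b[2]))
    return [p for p in paths
            if not any(dominates(q, p) for q in paths if q is not p)]

# Example: P2 is dominated by P1; P1 and P3 trade off cost vs. time.
print(pareto_filter([("P1", 10, 30), ("P2", 12, 35), ("P3", 8, 40)]))
```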