136 results for search problems
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, and their popularity shows no sign of diminishing; over the last few years, however, they have gone from an entertainment to an interesting research area, and a doubly interesting one, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely, the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns, with m = n, is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the degree of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem that GSP generalizes. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
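For illustration only, the CSP approach to GSP can be sketched as a plain backtracking search over the holes; the conventions below (a 0 marks a hole, the helper name `solve_gsp`) are assumptions for the example, not the solvers evaluated in the work.

```python
# Minimal sketch (not the paper's solvers): backtracking CSP search for a
# Generalized Sudoku of order m*n, where blocks have m rows and n columns.
# A 0 denotes a hole (empty cell); the grid is filled in place.

def solve_gsp(grid, m, n):
    """Fill holes so rows, columns and m x n blocks each contain
    every value 1..m*n exactly once. Returns True on success."""
    N = m * n

    def consistent(r, c, v):
        # Row and column constraints.
        if any(grid[r][j] == v for j in range(N)):
            return False
        if any(grid[i][c] == v for i in range(N)):
            return False
        # Block constraint: blocks are m rows tall and n columns wide.
        br, bc = (r // m) * m, (c // n) * n
        return all(grid[br + i][bc + j] != v
                   for i in range(m) for j in range(n))

    def search():
        for r in range(N):
            for c in range(N):
                if grid[r][c] == 0:
                    for v in range(1, N + 1):
                        if consistent(r, c, v):
                            grid[r][c] = v
                            if search():
                                return True
                            grid[r][c] = 0
                    return False          # no value fits: backtrack
        return True                       # no holes left: solved

    return search()

# 2x3 blocks (order 6); solution existence is not guaranteed in general.
puzzle = [[0] * 6 for _ in range(6)]
puzzle[0][0] = 1
print(solve_gsp(puzzle, 2, 3))
```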
Abstract:
CODEX SEARCH is an information retrieval engine specialized in immigration law that is based on linguistic tools and knowledge. An Information Retrieval (IR) engine or system is software capable of locating information in large document collections (a non-trivial setting) in electronic format. A preliminary study showed that immigration law is a discursive domain in which it is difficult to express an information need in terms of a formal query, the object of current retrieval systems. Therefore, building an efficient IR system for this domain requires more than a traditional IR model, that is, comparing the terms of the query with those of the answer, essentially because the two do not express implications and because there need not be a one-to-one relation between them. Accordingly, the proposed linguistic solution is based on incorporating the specialist's knowledge by integrating a case library into the system. The cases are examples of procedures applied by experts to the solution of problems that have occurred in practice and that ended in success or failure. The results obtained in this first phase are very encouraging, but further research in this field is needed to improve the performance of the prototype, which can be accessed at http://161.116.36.139/~codex/.
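As a rough illustration of the case-library idea (not CODEX SEARCH's actual engine), a query can be matched against solved cases rather than directly against documents; the case texts and the similarity measure below are invented for the sketch.

```python
# Minimal sketch of case-based retrieval: match the query against a library
# of solved cases and reuse the closest case's recorded outcome.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy case library: (description of the solved problem, expert procedure).
case_library = [
    ("renewal of a residence permit after expiry", "procedure A: success"),
    ("family reunification visa for a minor", "procedure B: success"),
    ("appeal against an expulsion order", "procedure C: failure"),
]

def retrieve(query: str):
    q = Counter(query.lower().split())
    return max(case_library,
               key=lambda case: cosine(q, Counter(case[0].split())))

print(retrieve("how to renew an expired residence permit"))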
Abstract:
Approximate Quickselect, a simple modification of the well-known Quickselect selection algorithm, can be used to efficiently find an element whose rank k lies in a given range [i..j], out of n given elements. We study basic cost measures of Approximate Quickselect by computing exact and asymptotic results for the expected number of passes, comparisons and data moves during the execution of this algorithm. The key element appearing in the analysis of Approximate Quickselect is a trivariate recurrence that we solve in full generality. The general solution of the recurrence proves to be very useful, as it allows us to tackle several related problems besides the analysis that originally motivated us. In particular, we have been able to carry out a precise analysis of the expected number of moves of the ith element when selecting the jth smallest element with standard Quickselect, for which we give both exact and asymptotic results. Moreover, we can apply our general results to obtain exact and asymptotic results for several parameters in binary search trees, namely the expected number of common ancestors of the nodes with ranks i and j, the expected size of the subtree rooted at the least common ancestor of the nodes with ranks i and j, and the expected distance between the nodes of ranks i and j.
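A minimal sketch of the Approximate Quickselect idea, using 0-based ranks; the pivot choice and three-way partition below are illustrative rather than the exact variant analyzed.

```python
# Sketch: return some element of a whose rank lies in [i..j] (0-based),
# assuming 0 <= i <= j < len(a). Recurse only into the side of the
# partition that still contains the target rank range.
import random

def approx_quickselect(a, i, j):
    lo, hi = 0, len(a) - 1
    while True:
        pivot = a[random.randint(lo, hi)]
        # Three-way partition of a[lo..hi] around the pivot value.
        lt = [x for x in a[lo:hi + 1] if x < pivot]
        eq = [x for x in a[lo:hi + 1] if x == pivot]
        gt = [x for x in a[lo:hi + 1] if x > pivot]
        a[lo:hi + 1] = lt + eq + gt
        r_lo, r_hi = lo + len(lt), lo + len(lt) + len(eq) - 1
        if r_hi >= i and r_lo <= j:       # pivot's ranks meet [i..j]: done
            return pivot
        if r_lo > j:                      # target range lies to the left
            hi = r_lo - 1
        else:                             # target range lies to the right
            lo = r_hi + 1

data = [9, 3, 7, 1, 8, 2, 6, 5, 4]
print(approx_quickselect(data, 2, 4))    # some element of rank 2..4: 3, 4 or 5
```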
Abstract:
In this paper we present an algorithm to assign proctors to exams. This NP-hard problem is related to the generalized assignment problem with multiple objectives. The problem consists of assigning teaching assistants to proctor final exams at a university. We formulate this problem as a multiobjective integer program (IP) with a preference function and a workload-fairness function. We then also consider a weighted objective that combines both functions. We develop a scatter search procedure and compare its outcome with solutions found by solving the IP model with CPLEX 6.5. Our test problems are real instances from a university in Spain.
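A generic form of this kind of model, with assumed notation (x_{te}, p_{te}, r_e and the weight w are illustrative symbols, not necessarily the paper's), might look as follows.

```latex
% Sketch of a proctor-assignment IP with assumed notation:
% x_{te} = 1 iff teaching assistant t proctors exam e, p_{te} a preference
% score, r_e the number of proctors required by exam e, and w a weight
% combining the preference and workload-fairness objectives.
\begin{align}
\max_{x}\quad & w \sum_{t,e} p_{te}\, x_{te}
  \;-\; (1 - w)\, \Bigl( \max_{t} \sum_{e} x_{te}
                         - \min_{t} \sum_{e} x_{te} \Bigr) \\
\text{s.t.}\quad & \sum_{t} x_{te} = r_e && \text{for every exam } e, \\
                 & x_{te} \in \{0, 1\}   && \text{for all } t, e.
\end{align}
% (The max-min spread term is nonlinear as written; in practice it is
% linearized with auxiliary variables bounding each assistant's load.)
```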
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e., star-like or centralized), whereas it is largely homogeneous (or decentralized) for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones ever to qualify as optimal.
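A toy simulation in the spirit of the model (the star topology, routing rule, and one-problem-per-step service discipline below are assumptions for the sketch, not the paper's specification) illustrates how the backlog behaves around a congestion threshold.

```python
# Illustrative only: problems arrive with probability `rate` per node per
# step on a star network, each must reach a random destination node, and
# every node forwards at most one pending problem per step via the hub.
# Backlog stays bounded below a congestion threshold and grows above it.
import random

def simulate(n_leaves=20, rate=0.02, steps=2000):
    hub = 0
    nodes = list(range(n_leaves + 1))
    queues = {v: [] for v in nodes}       # problems waiting at each node
    backlog = 0
    for _ in range(steps):
        # Arrivals: a problem is just the id of its specialized solver.
        for v in nodes:
            if random.random() < rate:
                queues[v].append(random.choice(nodes))
                backlog += 1
        # Service: each node solves or forwards one problem per step.
        moves = []
        for v in nodes:
            if queues[v]:
                dest = queues[v].pop(0)
                if dest == v:
                    backlog -= 1          # solved by its specialist
                else:
                    # Local rule: go to the hub unless already there.
                    moves.append((dest if v == hub else hub, dest))
        for nxt, dest in moves:
            queues[nxt].append(dest)
    return backlog

for rate in (0.01, 0.05, 0.2):
    print(rate, simulate(rate=rate))
```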
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that have largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
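Fitness-distance correlation itself is straightforward to compute; the sketch below uses a toy bit-string problem in place of set covering (the fitness function and random sampling are assumptions for the example).

```python
# Minimal sketch of a fitness-distance correlation (FDC) analysis on
# bit-string solutions: correlate each solution's fitness with its
# Hamming distance to the best solution in the sample.
import random

def fdc(solutions, fitness):
    best = min(solutions, key=fitness)
    f = [fitness(s) for s in solutions]
    d = [sum(a != b for a, b in zip(s, best)) for s in solutions]
    mf, md = sum(f) / len(f), sum(d) / len(d)
    cov = sum((fi - mf) * (di - md) for fi, di in zip(f, d))
    vf = sum((fi - mf) ** 2 for fi in f) ** 0.5
    vd = sum((di - md) ** 2 for di in d) ** 0.5
    return cov / (vf * vd) if vf and vd else 0.0

# Toy minimization problem: fitness = number of ones (optimum: all zeros).
sols = [tuple(random.randint(0, 1) for _ in range(30)) for _ in range(200)]
print(fdc(sols, fitness=sum))   # clearly positive: distance tracks fitness
```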
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
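In the spirit of the width-bounded blind search described (a novelty-1 pruning rule), a minimal sketch might look as follows; the state, atom, and successor signatures are assumptions for the example.

```python
# Sketch of novelty-based blind search: breadth-first search that prunes
# every state which does not make at least one atom true for the first
# time in the search (novelty > 1).
from collections import deque

def novelty1_search(initial, goal_atom, successors, atoms):
    seen_atoms = set()
    frontier = deque([(initial, [])])
    while frontier:
        state, plan = frontier.popleft()
        new = {a for a in atoms(state) if a not in seen_atoms}
        if not new:
            continue                      # novelty > 1: prune
        seen_atoms |= new
        if goal_atom in atoms(state):
            return plan
        for action, nxt in successors(state):
            frontier.append((nxt, plan + [action]))
    return None

# Toy domain: move right along a corridor of length 5 to reach cell 4.
print(novelty1_search(0, "at-4",
                      successors=lambda s: [("right", min(s + 1, 4))],
                      atoms=lambda s: {f"at-{s}"}))
```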
Abstract:
Economies are open complex adaptive systems far from thermodynamic equilibrium, and neo-classical environmental economics does not seem to be the best way to describe the behaviour of such systems. Standard econometric analysis (i.e., time series) takes a deterministic and predictive approach, which encourages the search for predictive policy to ‘correct’ environmental problems. Rather, it seems that, because of the characteristics of economic systems, an ex-post analysis is more appropriate: one that describes the emergence of such systems’ properties and that sees policy as a social steering mechanism. With this background, some of the recent empirical work published in the field of ecological economics that follows the approach defended here is presented. Finally, the conclusion is reached that a predictive use of econometrics (i.e., time-series analysis) in ecological economics should be limited to cases in which uncertainty decreases, which is not the normal situation when analysing the evolution of economic systems. However, that does not mean we should not use empirical analysis. On the contrary, it is to be encouraged, but from a structural and ex-post point of view.
Abstract:
We provide some guidelines for deriving new projective hash families of cryptographic interest. Our main building blocks are so-called group action systems; we explore which properties of these mathematical primitives may lead to the construction of cryptographically useful projective hash families. We point out different directions towards new constructions, deviating from the known proposals arising from Cramer and Shoup's seminal work.
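For context, the standard notion from Cramer and Shoup's framework can be stated as follows (a textbook-style sketch, not the paper's new constructions).

```latex
% A projective hash family (H, K, X, L, \Pi, S, \alpha) consists of keyed
% hashes H_k : X \to \Pi for k \in K and a projection \alpha : K \to S
% such that on the language L \subseteq X the projection key determines
% the hash value:
\[
  \forall k \in K,\ \forall x \in L:\quad
  H_k(x) \text{ is determined by } \bigl(\alpha(k),\, x\bigr),
\]
% while smoothness asks that for x \in X \setminus L the value H_k(x)
% remains (statistically) close to uniform on \Pi even given \alpha(k).
```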
Abstract:
We study a simple model of assigning indivisible objects (e.g., houses, jobs, offices, etc.) to agents. Each agent receives at most one object and monetary compensations are not possible. We completely describe all rules satisfying efficiency and resource-monotonicity. The characterized rules assign the objects in a sequence of steps such that at each step there is either a dictator or two agents "trade" objects from their hierarchically specified "endowments."
Abstract:
We accomplish two goals. First, we provide a non-cooperative foundation for the use of the Nash bargaining solution in search markets. This finding should help to close the rift between the search and the matching-and-bargaining literature. Second, we establish that the diversity of quality offered (at an increasing price-quality ratio) in a decentralized market is an equilibrium phenomenon - even in the limit as search frictions disappear.