920 results for Query Complexity


Relevance: 100.00%

Abstract:

An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code, whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound that relates the number of queries r to the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
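
To make the r-query model concrete, here is a minimal sketch (an illustration, not taken from the article) of the classic 2-query local corrector for the Hadamard code: a coordinate is recovered by querying two positions of the corrupted word and taking a majority vote over independent trials. With r = 2 this tolerates corruption fractions up to the 0.5/r = 1/4 baseline mentioned above.

```python
import random

def hadamard_encode(a, n):
    """Codeword = all evaluations of the linear map x -> <a, x> over F_2^n."""
    return [bin(a & x).count("1") % 2 for x in range(2 ** n)]

def locally_correct(word, i, n, trials=31):
    """2-query local corrector: each trial queries two positions of the
    (possibly corrupted) word; their XOR equals the true bit at i whenever
    both queried positions are uncorrupted. Majority vote over odd trials."""
    votes = 0
    for _ in range(trials):
        y = random.randrange(2 ** n)
        votes += word[y] ^ word[i ^ y]
    return int(votes > trials // 2)

# Corrupt a fraction below 0.5/r = 1/4 of the positions and recover a coordinate.
n = 10
word = hadamard_encode(a=357, n=n)
for pos in random.sample(range(2 ** n), k=(2 ** n) // 5):   # 20% corruption
    word[pos] ^= 1
print(locally_correct(word, i=123, n=n))
```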

Relevance: 60.00%

Abstract:

Ulam’s problem is a two-person game in which one player tries to find, using as few queries as possible, a number chosen by the other player. Classically, the problem scales polynomially with the size of the number. The quantum version of Ulam’s problem has a query complexity that is independent of the dimension of the search space. The experimental implementation of the quantum Ulam’s problem on a nuclear magnetic resonance (NMR) information processor with three quantum bits is reported here.
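
For context, the classical version of the game in its simplest form (honest yes/no comparison queries) can be sketched as follows; the query count grows with the size of the search space, unlike the quantum version reported above.

```python
def classical_search(secret, upper):
    """Classical baseline for the Ulam game (no lies): locate `secret` in
    [0, upper) using yes/no comparison queries of the form "is it below m?"."""
    lo, hi, queries = 0, upper, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1                       # one question to the other player
        if secret < mid:
            hi = mid
        else:
            lo = mid
    return lo, queries                     # about log2(upper) queries

print(classical_search(secret=742, upper=1024))   # (742, 10)
```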

Relevance: 60.00%

Abstract:

Given a Boolean function f : F_2^n -> {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y), and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound to one logarithmic in 1/ε. A trivial lower bound of Omega(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε > 0, there exists an integer n_0(ε) such that for all n >= n_0 there exists a function f depending on all n variables which is ε-far from being triangle-free and requires (1/ε)^{4.847} queries of the canonical tester. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least (1/ε)^{2.423} queries.
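
The canonical tester translates directly into code. A minimal sketch, representing elements of F_2^n as n-bit integers so that x + y becomes XOR:

```python
import random

def canonical_tester(f, n, samples):
    """Canonical tester for triangle-freeness of f : F_2^n -> {0, 1}, with
    elements of F_2^n represented as n-bit integers (so x + y is XOR).
    One-sided: a triangle-free function is always accepted."""
    for _ in range(samples):
        x = random.randrange(2 ** n)
        y = random.randrange(2 ** n)
        if f(x) == f(y) == f(x ^ y) == 1:  # 3 queries per sample
            return False                    # triangle found: reject
    return True                             # accept

# The indicator of a single nonzero point is triangle-free: x = y = 5 forces
# x + y = 0, and f(0) = 0.
f = lambda x: 1 if x == 5 else 0
print(canonical_tester(f, n=4, samples=1000))   # True
```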

Relevance: 60.00%

Abstract:

We consider a fault model for Boolean gates, both classical and quantum, in which some of the inputs may not be connected to the actual gate hardware. This model is somewhat similar to the stuck-at model, which is very popular in the testing of Boolean circuits. We consider the problem of detecting such faults; the detection algorithm can query the faulty gate, and its complexity is the number of such queries. This problem is related to determining the sensitivity of Boolean functions. We show how quantum parallelism can be used to detect such faults. Specifically, we show that a quantum algorithm can detect such faults more efficiently than a classical algorithm for a Parity gate and an AND gate. We give explicit constructions of quantum detector algorithms and show lower bounds for classical algorithms. We show that the model for detecting such faults is similar to algebraic decision trees, and we extend some known results from quantum query complexity to prove some of our results.
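
To make the query model concrete, here is the natural classical detector for a Parity gate (a sketch of the obvious strategy, not one of the paper's constructions): an input is wired in exactly when flipping it flips the output, so n + 1 classical queries suffice for an n-input gate, whereas a quantum algorithm in the Bernstein-Vazirani spirit can learn all the connections of a parity gate from a single superposed query.

```python
def detect_disconnected_inputs(parity_gate, n):
    """Classical detector for the disconnected-input fault model on an n-input
    Parity gate: input i drives the output iff flipping it flips the parity.
    Uses n + 1 queries to the (possibly faulty) gate."""
    base = parity_gate([0] * n)
    disconnected = []
    for i in range(n):
        probe = [0] * n
        probe[i] = 1
        if parity_gate(probe) == base:      # flipping input i had no effect
            disconnected.append(i)
    return disconnected

# A faulty 4-input parity gate whose input 2 is not wired in.
faulty = lambda bits: (bits[0] + bits[1] + bits[3]) % 2
print(detect_disconnected_inputs(faulty, 4))    # [2]
```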

Relevance: 60.00%

Abstract:

This thesis studies the amplitude amplification algorithm and its applications in the field of property testing. We use amplitude amplification to propose the most efficient quantum algorithm to date for testing the linearity of Boolean functions, and we generalize our new algorithm to test whether a function between two finite abelian groups is a homomorphism. The best known quantum algorithm for testing the symmetry of Boolean functions is also improved, and we use this new algorithm to test the quasi-symmetry of Boolean functions. We then take a closer look at the number of queries to the black box made by the amplitude amplification algorithm when the initial amplitude is unknown. A rigorous description of the random variable representing this number is presented, followed by the previously known upper bound on its expectation. New results on the variance of this variable follow: in particular, it is shown that, in the general case, the variance is infinite, but we also show that, for an appropriate choice of parameters, it becomes bounded from above.
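
To illustrate the random variable in question, the following Monte Carlo sketch simulates the query count of amplitude amplification with unknown initial amplitude, assuming the standard exponentially growing schedule of Boyer, Brassard, Høyer, and Tapp; the growth factor and the success model are textbook choices, not parameters taken from the thesis.

```python
import math
import random

def qsearch_queries(theta, growth=1.2):
    """Simulated query count of amplitude amplification with unknown initial
    amplitude a = sin^2(theta): each round draws j uniformly below a bound m
    and succeeds with probability sin^2((2j + 1) * theta); on failure the
    bound m grows geometrically."""
    m, queries = 1.0, 0
    while True:
        j = random.randrange(int(m) + 1)
        queries += j + 1                   # j amplification iterations + 1 check
        if random.random() < math.sin((2 * j + 1) * theta) ** 2:
            return queries
        m *= growth

# The sample mean scales as 1/sqrt(a); individual runs have a heavy tail,
# in line with the infinite-variance phenomenon discussed above.
a = 0.001
theta = math.asin(math.sqrt(a))
runs = [qsearch_queries(theta) for _ in range(2000)]
print(sum(runs) / len(runs))
```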

Relevance: 60.00%

Abstract:

Key agreement is a cryptographic scenario between two legitimate parties, who need to establish a common secret key over a public authenticated channel, and an eavesdropper who intercepts all their messages in order to learn the secret. We consider query complexity, in which we count only the number of evaluations (queries) of a given black-box function, and classical communication channels. Ralph Merkle provided the first unclassified scheme for secure communications over insecure channels. When the legitimate parties are willing to ask O(N) queries for some parameter N, any classical eavesdropper needs Omega(N^2) queries before being able to learn their secret, which is optimal. However, a quantum eavesdropper can break this scheme in O(N) queries. Furthermore, it was conjectured that any scheme in which the legitimate parties are classical could be broken in O(N) quantum queries. In this thesis, we introduce protocols à la Merkle that fall into two categories. When the legitimate parties are restricted to classical computers, we offer the first secure classical scheme: it requires Omega(N^{13/12}) queries of a quantum eavesdropper to learn the secret. We give another protocol with security of Omega(N^{7/6}) queries. Furthermore, for any k >= 2, we introduce a classical protocol in which the legitimate parties establish a secret in O(N) queries while the optimal quantum eavesdropping strategy requires Theta(N^{1/2+k/(k+1)}) queries, approaching Theta(N^{3/2}) as k increases. When the legitimate parties are provided with quantum computers, we present two quantum protocols improving on the best known scheme before this work. Furthermore, for any k >= 2, we give a quantum protocol in which the legitimate parties establish a secret in O(N) queries while the optimal quantum eavesdropping strategy requires Theta(N^{1+k/(k+1)}) queries, approaching Theta(N^2) as k increases.
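
For reference, Merkle's original idea is easy to sketch in the black-box query model used here; the code below is a toy illustration with invented parameters, not one of the thesis protocols.

```python
import random

def merkle_key_agreement(f, domain_size, n):
    """Toy Merkle scheme over a public black-box function f (assumed injective
    and hard to invert without querying): Alice queries f on n random points
    and publishes the tags; Bob queries random points until a tag collides.
    The colliding preimage is the shared secret."""
    alice_points = random.sample(range(domain_size), n)
    published_tags = {f(x): x for x in alice_points}
    bob_queries = 0
    while True:
        y = random.randrange(domain_size)
        bob_queries += 1
        if f(y) in published_tags:
            return y, bob_queries           # shared secret and Bob's query count

# With domain_size = n^2, Bob needs about n queries on average, while a
# classical eavesdropper who sees only the tags needs about n^2 queries:
# the Omega(N^2) classical security mentioned above.
n = 50
perm = random.Random(7).sample(range(n * n), n * n)   # a toy random black box
secret, queries = merkle_key_agreement(lambda x: perm[x], n * n, n)
print(queries)
```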

Relevance: 60.00%

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end-user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end-user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end-user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end-user query complexity.
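
The Halstead metrics are computed from operator and operand counts. The sketch below shows the standard formulas; the article's exact token-counting rules for SQL are not reproduced here, so the sample tally is an illustrative convention.

```python
import math

def halstead(operators, operands):
    """Standard Halstead metrics from token counts; the arguments map each
    distinct operator/operand to its number of occurrences."""
    n1, n2 = len(operators), len(operands)                     # distinct tokens
    N1, N2 = sum(operators.values()), sum(operands.values())   # total tokens
    length = N1 + N2
    volume = length * math.log2(n1 + n2)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return {"length": length, "volume": volume,
            "difficulty": difficulty, "effort": effort}

# Illustrative tally for: SELECT name FROM employee WHERE salary > 50000
operators = {"SELECT": 1, "FROM": 1, "WHERE": 1, ">": 1}
operands = {"name": 1, "employee": 1, "salary": 1, "50000": 1}
print(halstead(operators, operands))
```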

Relevance: 60.00%

Abstract:

The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries.

Relevance: 60.00%

Abstract:

SQL (Structured Query Language) is one of the essential topics in foundation database courses in higher education. Due to its apparently simple syntax, learning to use the full power of SQL can be a very difficult activity. In this paper, we introduce SQLator, a web-based interactive tool for learning SQL. SQLator's key function is its evaluate function, which allows a user to check the correctness of his/her query formulation. The evaluate engine is based on complex heuristic algorithms. The tool also provides instructors the facility to create and populate database schemas with an associated pool of SQL queries. Currently it hosts two databases with a pool of more than 300 queries across them, divided into three categories according to query complexity. The SQLator user can perform unlimited executions and evaluations of query formulations and/or view the solutions. The SQLator evaluate function has a high rate of success in judging a user's statement as correct (or incorrect) with respect to the question. We present in this paper the basic architecture and functions of SQLator. We further discuss the value of SQLator as an educational technology and report on educational outcomes based on studies conducted at the School of Information Technology and Electrical Engineering, The University of Queensland.
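
SQLator's actual evaluate engine is heuristic; as a baseline for what such a function must decide, a naive grader can simply compare the result sets of the student query and a stored reference solution on a sample database (a sketch using sqlite3; the real engine goes well beyond this):

```python
import sqlite3

def evaluate(conn, student_sql, reference_sql):
    """Baseline evaluate function: a student query is judged correct when its
    result multiset matches the stored reference solution, ignoring row order.
    Malformed SQL is judged incorrect."""
    try:
        student = sorted(map(tuple, conn.execute(student_sql)))
        reference = sorted(map(tuple, conn.execute(reference_sql)))
    except sqlite3.Error:
        return False
    return student == reference

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("ann", 60000), ("bob", 45000)])
print(evaluate(conn,
               "SELECT name FROM employee WHERE salary > 50000",
               "SELECT name FROM employee WHERE salary >= 50001"))   # True
```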

Relevance: 30.00%

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
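
An n-gram transition model over reformulation states is straightforward to sketch; the training sessions below are invented for illustration, with state names echoing the categories mentioned above.

```python
from collections import Counter, defaultdict

def train_ngram(sessions, order=2):
    """Count transitions from each length-`order` context of reformulation
    states to the state that follows it."""
    counts = defaultdict(Counter)
    for states in sessions:
        for i in range(len(states) - order):
            counts[tuple(states[i:i + order])][states[i + order]] += 1
    return counts

def predict_next(counts, context):
    """Most probable next reformulation state for the given context."""
    followers = counts.get(tuple(context))
    return followers.most_common(1)[0][0] if followers else None

sessions = [["New", "Reformulation", "Reformulation", "Assistance"],
            ["New", "Reformulation", "Assistance"]]
model = train_ngram(sessions, order=1)     # first-order model
print(predict_next(model, ["New"]))        # -> "Reformulation"
```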

Relevance: 30.00%

Abstract:

Success in query reformulation and relevant information retrieval depends on many factors, such as users’ prior knowledge, age, gender, and cognitive styles. One of the important factors that affect a user’s query reformulation behaviour is the nature of the search task. Few studies have examined the impact of search task type on query reformulation behaviour during Web searches. This paper examines how the nature of the search task affects users’ query reformulation behaviour during information searching. The paper reports empirical results from a user study in which 50 participants performed a set of three Web search tasks: exploratory, factual, and abstract. Users’ interactions with search engines were logged by a monitoring program. 872 unique search queries were classified into five query types: New, Add, Remove, Replace, and Repeat. Users submitted fewer queries for the factual task, which accounted for 26% of the total. They submitted a higher number of queries (40% of the total) while carrying out the exploratory task. A one-way MANOVA test indicated a significant effect of search task type on users’ query reformulation behaviour. In particular, search task type influenced the manner in which users reformulated the New and Repeat queries.
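
The five query types lend themselves to a simple set-based classifier. The study's exact classification rules are not given here, so the following is one plausible convention:

```python
def classify(prev_terms, cur_terms):
    """Set-based classification of a query relative to its predecessor into
    the five types above; terms are assumed lower-cased and tokenized."""
    prev_set, cur_set = set(prev_terms), set(cur_terms)
    if cur_set == prev_set:
        return "Repeat"
    if not prev_set & cur_set:
        return "New"                        # nothing carried over
    if prev_set < cur_set:
        return "Add"                        # terms added, none dropped
    if cur_set < prev_set:
        return "Remove"                     # terms dropped, none added
    return "Replace"                        # some terms swapped for others

print(classify(["query", "complexity"],
               ["query", "complexity", "lower", "bound"]))   # Add
```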

Relevance: 30.00%

Abstract:

Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of possible queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, the server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and the server contention state on the cost of a query; we refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using multiple regression. When a new query is submitted, its system contention state is first estimated using either the time slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas, and the estimate is further adjusted to improve its accuracy. Our experiments show that our methods produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
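
The overall pipeline, clustering the observed costs of a sample query into contention states and then fitting a per-state regression formula, can be sketched as follows; the query features used (result cardinality and predicate count) are illustrative guesses, not the paper's variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Costs (seconds) of the same sample query observed at different times; the
# clusters stand in for the system contention states.
costs = np.array([0.40, 0.45, 0.50, 2.1, 2.2, 2.3, 5.0, 5.2]).reshape(-1, 1)
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(costs)

# For the queries observed in one state, fit a cost formula by regression.
features = np.array([[100, 1], [150, 1], [200, 2], [300, 3],
                     [120, 1], [250, 2], [180, 2], [400, 3]])
query_costs = np.array([0.50, 0.60, 0.90, 1.40, 0.55, 1.10, 0.95, 1.80])
in_state = states == states[0]
formula = LinearRegression().fit(features[in_state], query_costs[in_state])
print(formula.predict(np.array([[220, 2]])))   # estimated cost in that state
```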

Relevance: 30.00%

Abstract:

As an important type of spatial keyword query, the m-closest keywords (mCK) query finds a group of objects such that they cover all query keywords and have the smallest diameter, which is defined as the largest distance between any pair of objects in the group. The query is useful in many applications such as detecting locations of web resources. However, the existing work does not study the intractability of this problem and only provides exact algorithms, which are computationally expensive.

In this paper, we prove that the problem of answering mCK queries is NP-hard. We first devise a greedy algorithm that has an approximation ratio of 2. Then, we observe that an mCK query can be approximately answered by finding the circle with the smallest diameter that encloses a group of objects together covering all query keywords. We prove that the group enclosed in the circle can answer the mCK query with an approximation ratio of 2/√3. Based on this, we develop an algorithm for finding such a circle exactly, which has a high time complexity. To improve efficiency, we propose another two algorithms that find such a circle approximately, with a ratio of 2/√3 + ε. Finally, we propose an exact algorithm that utilizes the group found by the (2/√3 + ε)-approximation algorithm to obtain the optimal group. We conduct extensive experiments using real-life datasets. The experimental results offer insights into both the efficiency and accuracy of the proposed approximation algorithms, and they also demonstrate that our exact algorithm outperforms the best known algorithm by an order of magnitude.
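
As a reference point for the ratios above, the objective itself, minimizing the group diameter subject to keyword coverage, is easy to state in code. The exhaustive baseline below is an illustration, not one of the paper's algorithms:

```python
from itertools import combinations
from math import dist

def mck_bruteforce(objects, query_keywords):
    """Exhaustive mCK baseline: over all groups of at most m objects covering
    the m query keywords, return one minimizing the diameter (the largest
    pairwise distance). Exponential time; only for sanity-checking."""
    best, best_diam = None, float("inf")
    m = len(query_keywords)
    for size in range(1, m + 1):            # one object per keyword suffices
        for group in combinations(objects, size):
            covered = set().union(*(kws for _, kws in group))
            if not query_keywords <= covered:
                continue
            points = [p for p, _ in group]
            diam = max((dist(a, b) for a, b in combinations(points, 2)),
                       default=0.0)
            if diam < best_diam:
                best, best_diam = group, diam
    return best, best_diam

objects = [((0, 0), {"cafe"}), ((1, 0), {"atm"}), ((9, 9), {"cafe", "atm"})]
print(mck_bruteforce(objects, {"cafe", "atm"}))   # single object, diameter 0.0
```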

Relevance: 30.00%

Abstract:

Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications is a challenging process. In this paper, we make the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caching to five categories, improving performance in three ways: 1) it reduces the amount and complexity of the remainder queries; 2) it avoids the redundant transmission of spatial data already residing in a cache; 3) it can provide satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC outperforms traditional semantic caching models significantly.
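
The traditional three-category query processing that MSC extends can be sketched with axis-aligned regions (a minimal illustration; MSC's five categories and its multiresolution handling go beyond this):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned region: (x1, y1) lower-left corner, (x2, y2) upper-right."""
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, o):
        return (self.x1 <= o.x1 and self.y1 <= o.y1 and
                self.x2 >= o.x2 and self.y2 >= o.y2)

    def intersects(self, o):
        return self.x1 < o.x2 and o.x1 < self.x2 and self.y1 < o.y2 and o.y1 < self.y2

def probe(cache_region, query_region):
    """Three-category split: full hit (answer from cache), partial hit (probe
    the cached overlap and send a remainder query to the server for the rest),
    or miss (everything goes to the server). Computing the exact remainder
    region(s) is omitted here."""
    if cache_region.contains(query_region):
        return "hit"
    if cache_region.intersects(query_region):
        return "partial"
    return "miss"

print(probe(Rect(0, 0, 10, 10), Rect(5, 5, 15, 15)))   # partial
```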

Relevance: 30.00%

Abstract:

We address the question of how to communicate among distributed processes values such as real numbers, continuous functions, and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with certain accuracy, focusing on certain localities, as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates of its inputs into those of its outputs. In future work these indicators can be used to capture further computational-complexity measures. A robust prototype implementation of our model is available.
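
The query-answer idea can be made concrete: a real number becomes a process that answers accuracy queries with validating intervals, and an operation such as addition meets a query by forwarding finer queries to its inputs. A minimal sketch, not the paper's protocol formalism:

```python
from fractions import Fraction

def sqrt2(accuracy):
    """A 'process' for sqrt(2): answer a precision query with an interval of
    width at most `accuracy` containing the exact value (by bisection)."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > accuracy:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

def add(x, y):
    """Lifted addition: to answer a query of a given accuracy, forward finer
    queries (half the accuracy) to each input process and sum the intervals."""
    def process(accuracy):
        xlo, xhi = x(accuracy / 2)
        ylo, yhi = y(accuracy / 2)
        return xlo + ylo, xhi + yhi
    return process

two_sqrt2 = add(sqrt2, sqrt2)
print(two_sqrt2(Fraction(1, 1000)))        # interval of width <= 1/1000
```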