20 results for Lipschitzian bounds

at Boston University Digital Commons


Relevance:

20.00%

Publisher:

Abstract:

We prove several new lower bounds for constant-depth quantum circuits. The main result is that parity (and hence fanout) requires logarithmic-depth circuits when the circuits are composed of single-qubit gates and arbitrary-size Toffoli gates, and when they use only constantly many ancillae. Under this constraint, this bound is close to optimal. In the case of a non-constant number of ancillae, we give a tradeoff between the number of ancillae and the required depth.

Relevance:

20.00%

Publisher:

Abstract:

We show that if a language is recognized within certain error bounds by constant-depth quantum circuits over a finite family of gates, then it is computable in (classical) polynomial time. In particular, our results imply EQNC^0 ⊆ P, where EQNC^0 is the constant-depth analog of the class EQP. On the other hand, we adapt and extend ideas of Terhal and DiVincenzo [?] to show that, for any family

Relevance:

10.00%

Publisher:

Abstract:

A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
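
To make the insertion process above concrete, here is a minimal Python simulation of d-choice placement on the one-dimensional ring, with each point owned by its clockwise successor server. Names and parameters are illustrative; the log log n / log d + O(1) bound comes from the paper's analysis, not from this sketch.

```python
import bisect
import math
import random

def max_load(n, d, seed=0):
    """Hash n servers to a unit ring; insert n items, each probing d
    uniform points and joining the least-loaded owning server."""
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n))
    loads = [0] * n

    def owner(x):
        # Successor server under clockwise distance (Chord-style).
        return bisect.bisect_left(servers, x) % n

    for _ in range(n):
        probes = [owner(rng.random()) for _ in range(d)]
        loads[min(probes, key=loads.__getitem__)] += 1
    return max(loads)

n, d = 100_000, 2
print(max_load(n, d), math.log(math.log(n)) / math.log(d))  # observed max load vs. ~log log n / log d
```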

Relevance:

10.00%

Publisher:

Abstract:

This paper describes an algorithm for scheduling packets in real-time multimedia data streams. Common to such streams are service constraints in terms of bandwidth and delay. However, it is typical for real-time multimedia streams to tolerate bounded delay variations and, in some cases, finite losses of packets. We have therefore developed a scheduling algorithm that assumes streams have window-constraints on groups of consecutive packet deadlines. A window-constraint defines the number of packet deadlines that can be missed in a window of deadlines for consecutive packets in a stream. Our algorithm, called Dynamic Window-Constrained Scheduling (DWCS), attempts to guarantee that no more than x out of a window of y deadlines are missed for consecutive packets in real-time and multimedia streams. Using DWCS, the delay of service to real-time streams is bounded even when the scheduler is overloaded. Moreover, DWCS is capable of ensuring independent delay bounds on streams, while at the same time guaranteeing minimum bandwidth utilizations over tunable and finite windows of time. We show the conditions under which the total demand for link bandwidth by a set of real-time (i.e., window-constrained) streams can exceed 100% and still ensure all window-constraints are met. In fact, we show how it is possible to guarantee worst-case per-stream bandwidth and delay constraints while utilizing all available link capacity. Finally, we show how best-effort packets can be serviced with fast response time, in the presence of window-constrained traffic.
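
As a rough illustration of window-constraints, the toy Python scheduler below tracks the last y deadline outcomes per stream and services the stream with the least remaining slack. It assumes every stream has one deadline per slot, and it is a simplified stand-in for DWCS's actual precedence rules.

```python
from collections import deque

class Stream:
    def __init__(self, name, x, y):
        self.name = name
        self.x = x                      # tolerable misses per window
        self.recent = deque(maxlen=y)   # last y outcomes: 1 = missed deadline

    def slack(self):
        # Misses still tolerable within the current window of y deadlines.
        return self.x - sum(self.recent)

def schedule_slot(streams):
    """One time slot: service the stream closest to violating its window
    constraint; every other stream records a missed deadline."""
    chosen = min(streams, key=lambda s: s.slack())
    for s in streams:
        s.recent.append(0 if s is chosen else 1)
    return chosen

streams = [Stream("voice", x=2, y=4), Stream("video", x=3, y=4)]
for _ in range(20):
    schedule_slot(streams)
    assert all(s.slack() >= 0 for s in streams)  # no window-constraint violated
```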

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm, called Virtual Deadline Scheduling (VDS), that provides window-constrained service guarantees to jobs with potentially different request periods while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Even so, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
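
A heavily hedged sketch of the deadline-stretching idea (the actual VDS deadline assignment differs): if only m of every k instances must be serviced, required completions can be spaced roughly k/m request periods apart, yielding "virtual" deadlines that may fall after the corresponding real ones.

```python
import math

def virtual_deadlines(release, period, m, k):
    """Hedged sketch: space the m required completions about k/m request
    periods apart within a window of k instances. Only conveys the
    stretching idea, not VDS's exact formula."""
    spacing = k / m
    return [release + math.ceil((j + 1) * spacing) * period for j in range(m)]

# Service 2 of every 5 instances, request period 10: real deadlines are
# 10, 20, ..., 50; the two required services stretch to 30 and 50.
print(virtual_deadlines(0, 10, 2, 5))  # [30, 50]
```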

Relevance:

10.00%

Publisher:

Abstract:

In this paper we discuss a new type of query in spatial databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places while minimizing the total travelling cost, e.g., what is the shortest travelling plan that visits an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along a trip from A to B? The trip planning query is an extension of the well-known TSP problem and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider the problem of points that lie on the edges of a spatial network (e.g., a road network), where the distance between two points is defined using the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
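
A minimal sketch of one simple heuristic in the spirit of the approximation algorithms studied here: greedily hop to the nearest point of a not-yet-covered category, then finish at e. The data and the nearest-neighbor rule are illustrative; the paper's algorithms and bounds are the authoritative versions.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_trip(s, e, categories):
    """Nearest-neighbor heuristic for TPQ: from the current location,
    visit the closest point of an uncovered category, then end at e."""
    trip, here = [s], s
    remaining = dict(categories)
    while remaining:
        cat, pt = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(here, cp[1]),
        )
        trip.append(pt)
        here = pt
        del remaining[cat]
    trip.append(e)
    return trip, sum(dist(trip[i], trip[i + 1]) for i in range(len(trip) - 1))

shops = {
    "auto":     [(2, 1), (9, 9)],
    "pharmacy": [(4, 2), (1, 8)],
    "bestbuy":  [(5, 5)],
}
trip, cost = greedy_trip(s=(0, 0), e=(10, 10), categories=shops)
print(trip, round(cost, 2))
```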

Relevance:

10.00%

Publisher:

Abstract:

We demonstrate that if two probability distributions D and E of sufficiently small min-entropy have statistical difference ε, then the direct-product distributions D^l and E^l have statistical difference at least roughly ε√l, provided that l is sufficiently small, smaller than roughly ε^{-4/3}. Previously known bounds did not work for few repetitions l, requiring l > ε^{-2}.
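
The effect is easy to observe numerically for small domains. The sketch below computes the statistical (total variation) difference of direct products by brute force; the distributions and ε are illustrative toy choices.

```python
from itertools import product

def tv(p, q):
    """Statistical (total variation) difference of two distributions
    given as dicts over the same finite domain."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

def direct_product(p, l):
    """The l-fold direct product p^l, enumerated explicitly."""
    out = {}
    for xs in product(p, repeat=l):
        pr = 1.0
        for x in xs:
            pr *= p[x]
        out[xs] = pr
    return out

# A coin with bias eps = 0.05 versus a fair coin: the difference of the
# direct products grows with l (on the order of eps * sqrt(l) early on).
D = {0: 0.55, 1: 0.45}
E = {0: 0.50, 1: 0.50}
for l in (1, 2, 4, 8):
    print(l, round(tv(direct_product(D, l), direct_product(E, l)), 4))
```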

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream, and if needed, relays that stream to neighboring clients which may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost. In this paper we analytically derive bounds on the network link cost of our cache-and-relay approach, and we evaluate its performance under assumptions of limited client bandwidth and limited client cache capacity. When client bandwidth is limited, we show that although finding an optimal solution is NP-hard, a simple greedy algorithm performs surprisingly well in that it incurs network link costs that are very close to a theoretical lower bound. When client cache capacity is limited, we show that our cache-and-relay approach can still significantly reduce network link cost. We have evaluated our cache-and-relay approach using simulations over large, synthetic random networks, power-law degree networks, and small-world networks, as well as over large real router-level Internet maps.
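
The following toy Python sketch conveys the cache-and-relay assignment in its simplest form: an arriving client relays from an earlier client whose cached prefix still covers the arrival-time gap, and otherwise opens a fresh stream from the server. The cost model and tie-breaking here are illustrative, not the paper's exact greedy algorithm.

```python
def assign_parents(arrivals, cache_window):
    """For each arrival time, pick an earlier client whose cache (of the
    last cache_window seconds of the stream) can still feed it; None
    means a new stream from the server."""
    parents = []
    for i, t in enumerate(arrivals):
        candidates = [j for j in range(i) if t - arrivals[j] <= cache_window]
        parents.append(max(candidates) if candidates else None)
    return parents

print(assign_parents([0, 3, 4, 11], cache_window=5))  # [None, 0, 1, None]
```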

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, slotted communication protocols have employed guard times to delineate and align slots. These guard times may expand the slot duration significantly, especially when clocks are allowed to drift for a long time to reduce clock synchronization overhead. Recently, a new class of lightweight protocols for statistical estimation in wireless sensor networks has been proposed. This new class requires very short transmission durations (jam signals), so the traditional approach of using guard times would impose significant overhead. We propose a new, more efficient algorithm to align slots. Based on geometrical properties of space, we prove that our approach bounds the slot duration by only a constant factor of what is needed. Furthermore, we show by simulation that this bound is loose and an even smaller slot duration suffices, making our approach even more efficient.
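
For intuition on why guard times are costly for jam signals, here is the standard worst-case accounting (illustrative numbers; the paper's geometric construction is designed to avoid this overhead):

```python
def slot_with_guard(jam_duration, drift_rate, resync_interval):
    """Two clocks drifting apart at up to drift_rate (seconds per second
    each way) for resync_interval seconds can be misaligned by up to
    2 * drift_rate * resync_interval, so a slot is padded by that much
    on each side. Illustrative worst-case sizing only."""
    guard = 2 * drift_rate * resync_interval
    return jam_duration + 2 * guard

# A 100-microsecond jam signal, 50 ppm clocks resynchronized every 10 s:
print(slot_with_guard(100e-6, 50e-6, 10.0))  # 0.0021 s per slot vs. a 0.0001 s signal
```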

Relevance:

10.00%

Publisher:

Abstract:

snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements of these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that builds costs and resource/data constraints, and, through both formalism and specific details of our implementation, concrete examples of how the constraints and sizing information are used in practice.
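
A hypothetical rendering of the sized-types idea in Python: a data type annotated with lower and upper resolution bounds, and a static check that an argument's possible sizes fit an operation's restrictions. The type and function names are invented for illustration and are not STEP's actual syntax.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageType:
    """An Image type annotated with a vector of size attributes:
    lower and upper bounds on width and height."""
    min_w: int
    min_h: int
    max_w: int
    max_h: int

def check_apply(op_input: ImageType, arg: ImageType) -> None:
    """Static check: every image the argument may carry must satisfy the
    operation's minimum/maximum resolution restrictions."""
    if arg.min_w < op_input.min_w or arg.min_h < op_input.min_h:
        raise TypeError("argument may be smaller than the operation accepts")
    if arg.max_w > op_input.max_w or arg.max_h > op_input.max_h:
        raise TypeError("argument may exceed the operation's maximum size")

# A hypothetical detector needing 320x240..4096x4096, fed a camera feed
# whose frames range over 640x480..1920x1080: the check passes.
check_apply(ImageType(320, 240, 4096, 4096), ImageType(640, 480, 1920, 1080))
```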

Relevance:

10.00%

Publisher:

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
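
A sketch of the two-tiered structure only, under the simplifying (and illustrative) assumption that at most one stream peaks above its per-stream reservation at a time; the paper's scheme instead uses three per-stream parameters and analytically bounded frame-drop probabilities.

```python
def two_tier_reservation(stream_rates, q=0.95):
    """Each stream reserves its own q-quantile rate; a shared slack pool,
    sized here by the largest single excursion above a per-stream
    reservation, absorbs peaks (assumes one peak at a time)."""
    def quantile(rates):
        s = sorted(rates)
        return s[int(q * (len(s) - 1))]

    per_stream = [quantile(r) for r in stream_rates]
    shared = max(max(r) - p for r, p in zip(stream_rates, per_stream))
    return sum(per_stream) + shared

# Versus deterministic provisioning at the sum of per-stream peak rates:
rates = [[3, 4, 9, 4], [2, 2, 2, 8], [5, 6, 5, 5]]  # Mbit/s over time
print(two_tier_reservation(rates), sum(max(r) for r in rates))  # 17 vs. 23
```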

Relevance:

10.00%

Publisher:

Abstract:

We present a technique to derive depth lower bounds for quantum circuits. The technique is based on the observation that in circuits without ancillae, only a few input states can set all the control qubits of a Toffoli gate to 1. This can be used to selectively remove large Toffoli gates from a quantum circuit while keeping the cumulative error low. We use the technique to give another proof that parity cannot be computed by constant-depth quantum circuits without ancillae.

Relevance:

10.00%

Publisher:

Abstract:

We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
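
A toy best-response dynamic for one plausible variant of such a game: servers of fixed capacity cost a fixed price shared among tenants in proportion to demand, and players migrate while doing so strictly lowers their own share. The cost-sharing rule and convergence behavior here are illustrative, not the paper's exact games.

```python
def best_response(demands, capacity, price):
    """Players repeatedly move to whichever feasible server minimizes
    their own cost share; if the dynamics stop, no player can improve
    and the assignment is a Nash equilibrium of this toy game."""
    assign = list(range(len(demands)))          # start: one server each

    def share(i, srv):
        # Player i's share of `price` if it joins server srv.
        load = demands[i] + sum(d for j, d in enumerate(demands)
                                if assign[j] == srv and j != i)
        return price * demands[i] / load if load <= capacity else float("inf")

    moved = True
    while moved:
        moved = False
        for i in range(len(demands)):
            options = set(assign) | {max(assign) + 1}   # existing or fresh server
            best = min(options, key=lambda s: share(i, s))
            if share(i, best) < share(i, assign[i]) - 1e-9:
                assign[i], moved = best, True
    return assign

# Three applications with demands 0.4, 0.4, 0.5 on unit-capacity servers:
print(best_response([0.4, 0.4, 0.5], capacity=1.0, price=1.0))
```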

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder (a node that possesses the file) to a set of leechers (nodes interested in downloading the file). An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results of Kumar and Ross [1] to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations which show that our GroupTree heuristic outperforms other heuristics, that it scales well as the number of leechers increases, and that it closely approaches the optimal theoretical bounds.
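
The Kumar and Ross [1] fluid-model lower bound that this paper extends can be stated compactly; the sketch below computes it as usually given (without angels), with illustrative numbers.

```python
def mdt_lower_bound(F, server_up, leecher_ups, leecher_downs):
    """Fluid-model MDT lower bound (as usually stated): the seeder must
    upload the file once, the slowest leecher must download it, and the
    aggregate upload capacity must carry one copy to each of N leechers."""
    N = len(leecher_ups)
    return max(F / server_up,
               F / min(leecher_downs),
               N * F / (server_up + sum(leecher_ups)))

# 1 GB file, 100 Mbit/s seeder, 10 leechers (20 Mbit/s up, 100 Mbit/s down):
print(mdt_lower_bound(8e9, 100e6, [20e6] * 10, [100e6] * 10))  # ~266.7 s
```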

Relevance:

10.00%

Publisher:

Abstract:

We consider a fault model of Boolean gates, both classical and quantum, in which some of the inputs may not be connected to the actual gate hardware. This model is somewhat similar to the stuck-at model, which is very popular in testing Boolean circuits. We consider the problem of detecting such faults; the detection algorithm can query the faulty gate, and its complexity is the number of such queries. This problem is related to determining the sensitivity of Boolean functions. We show how quantum parallelism can be used to detect such faults. Specifically, we show that a quantum algorithm can detect such faults more efficiently than a classical algorithm for a parity gate and an AND gate. We give explicit constructions of quantum detector algorithms and show lower bounds for classical algorithms. We show that the model for detecting such faults is similar to algebraic decision trees and extend some known results from quantum query complexity to prove some of our results.
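
For the parity gate, the classical baseline is easy to sketch: flipping input i changes the output iff input i is actually wired in, so a base query plus one query per input exposes every disconnected input. The fault-model encoding below is illustrative; the paper's quantum detectors need fewer queries.

```python
def make_faulty_parity(n, disconnected):
    """A parity gate whose inputs in `disconnected` are not wired to the
    actual hardware (our stand-in for the fault model above)."""
    connected = [i for i in range(n) if i not in disconnected]
    return lambda x: sum(x[i] for i in connected) % 2

def detect_faults(gate, n):
    """Classical detector: compare the output on the all-zeros input with
    the output after flipping each input bit in turn (n + 1 queries)."""
    base = gate([0] * n)
    faults = []
    for i in range(n):
        x = [0] * n
        x[i] = 1
        if gate(x) == base:          # output did not change: i is cut off
            faults.append(i)
    return faults

g = make_faulty_parity(6, disconnected={2, 5})
print(detect_faults(g, 6))           # [2, 5]
```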