866 results for Lower Bounds
Abstract:
We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum. In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearized notion of loss in this case. We expect that this new way of aggregating lists will find many ranking applications.
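For the disjoint case, one natural way to compute the best combined list in hindsight is a resource-allocation dynamic program over prefix sizes. The sketch below illustrates that idea; the function name and the representation of requests are hypothetical, and the paper's efficient algorithm may differ.

```python
# Hindsight-optimal combined list for K disjoint base lists (sketch).
# hits[k][s] = number of requests answered by the top-s prefix of list k.
# Choose prefix sizes n_1 + ... + n_K = N maximising total hits
# (i.e. minimising misses) via a standard allocation DP in O(K * N^2).

from collections import Counter

def best_combined_list(base_lists, requests, N):
    freq = Counter(requests)
    hits = []
    for lst in base_lists:
        row = [0]
        for item in lst[:N]:
            row.append(row[-1] + freq[item])
        row += [row[-1]] * (N + 1 - len(row))  # pad short prefixes
        hits.append(row)
    # dp[b]: best coverage achievable with total budget b so far.
    dp = [0] * (N + 1)
    for row in hits:
        dp = [max(dp[b - s] + row[s] for s in range(b + 1))
              for b in range(N + 1)]
    return len(requests) - dp[N]  # misses of the best combined list

# Example: two disjoint lists, budget N = 3; "q" is never covered.
misses = best_combined_list([["a", "b", "c"], ["x", "y", "z"]],
                            ["a", "x", "x", "b", "q"], 3)  # -> 1
```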
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that satisfy at least the simple constraints; this is how the solution space is pruned. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that has not yet been assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The algorithm also incorporates a check of whether an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
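The role of the weighted acyclic graph can be illustrated with a small sketch. Assuming the simple constraints take the difference form x_j >= x_i + w (an assumption made for illustration; the paper's class of simple constraints may be broader), lower bounds for not-yet-assigned variables follow from longest-path propagation through the DAG. The function below is hypothetical, not the paper's procedure.

```python
# Propagate lower bounds over a DAG of simple constraints (sketch).
# Each edge (i, j, w) encodes the assumed constraint x_j >= x_i + w.

from graphlib import TopologicalSorter

def propagate_lower_bounds(num_vars, edges, base_lb):
    """base_lb[i] is an initial lower bound for x_i (e.g. from x_i >= c)."""
    succ = {i: [] for i in range(num_vars)}
    preds = {i: set() for i in range(num_vars)}
    for i, j, w in edges:
        succ[i].append((j, w))
        preds[j].add(i)  # i must be processed before j
    lb = list(base_lb)
    for i in TopologicalSorter(preds).static_order():
        for j, w in succ[i]:
            lb[j] = max(lb[j], lb[i] + w)  # tighten via x_j >= x_i + w
    return lb

# x0 >= 1, x1 >= x0 + 2, x2 >= x1  =>  lower bounds [1, 3, 3]
print(propagate_lower_bounds(3, [(0, 1, 2), (1, 2, 0)], [1, 0, 0]))
```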
Abstract:
Effectiveness evaluation of aerospace fault-tolerant computing systems used in a phased-mission environment is difficult because of the interaction of the system's several degraded performance levels with the multiple objectives of the mission and the use environment. Part I uses an approach based on multiobjective phased-mission analysis to evaluate the effectiveness of a distributed avionics architecture used in a transport aircraft. Part II views the computing system as a multistate s-coherent structure. Lower bounds on the probabilities of accomplishing various levels of performance are evaluated.
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolation in the present high-order elements, and the strain smoothing technique is used for integration of the governing equations based on smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolation and by the boundary nodes, are discussed through numerical examples. The basis function is found to have a significant influence on the computational accuracy and on the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. This new type of high-order element performs well when quadratic basis functions are used in the mesh-free interpolation, and the present elements prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolation and obey the Weakened Weak (W2) formulation introduced in [3, 5].
Abstract:
We study diagonal estimates for the Bergman kernels of certain model domains in C^2 near boundary points that are of infinite type. To do so, we need a mild structural condition on the defining functions of interest that facilitates optimal upper and lower bounds. Unlike earlier studies of this sort, this condition allows us to make estimates for non-convex pseudoconvex domains as well. The condition quantifies, in some sense, how flat a domain is at an infinite-type boundary point. In this scheme of quantification, the model domains considered below range, roughly speaking, from being "mildly infinite-type" to very flat at the infinite-type points.
Abstract:
We consider single-source, single-sink (ss-ss) multi-hop relay networks with slow-fading Rayleigh links. This two-part paper aims at giving explicit protocols and codes to achieve the optimal diversity-multiplexing tradeoff (DMT) of two classes of multi-hop networks: K-parallel-path (KPP) networks and Layered networks. While single-antenna KPP networks were the focus of the first part, we consider layered and multi-antenna networks in this second part. We prove that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks under the half-duplex constraint. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For the multiple-antenna case, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. Along the way, we compute the DMT of parallel MIMO channels in terms of the DMT of the component channel. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable using an amplify-and-forward (AF) protocol. Explicit short-block-length codes are provided for all the proposed protocols. Two key implications of the results in the two-part paper are that the half-duplex constraint does not necessarily entail rate loss by a factor of two, as previously believed, and that simple AF protocols are often sufficient to attain the best possible DMT.
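The linear tradeoff claimed here can be written out explicitly; the following is the standard linear DMT expression implied by the abstract, included as a reading aid.

```latex
% Linear diversity-multiplexing tradeoff between the maximum diversity
% d_max (at multiplexing gain r = 0) and the maximum multiplexing gain 1:
d(r) = d_{\max}\,(1 - r), \qquad 0 \le r \le 1.
```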
Abstract:
We consider single-source, single-sink multi-hop relay networks with slow-fading Rayleigh links and single-antenna relay nodes operating under the half-duplex constraint. While two-hop relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this two-part paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and Layered networks. In the first part, we initially consider KPP networks, which can be viewed as the union of K node-disjoint parallel paths, each of length > 1. The results are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the optimal DMT of KPP(D) networks with K >= 4 and KPP(I) networks with K >= 3. Along the way, we derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computations for various protocols. As a special case, the DMT of the two-hop relay network without a direct link is obtained. Two key implications of the results in the two-part paper are that the half-duplex constraint does not necessarily entail rate loss by a factor of two, as previously believed, and that simple AF protocols are often sufficient to attain the best possible DMT.
Abstract:
We revisit four generations within the context of supersymmetry. We compute the perturbativity limits for the fourth generation Yukawa couplings and show that if the masses of the fourth generation lie within reasonable limits of their present experimental lower bounds, it is possible to have perturbativity only up to scales around 1000 TeV. Such low scales are ideally suited to incorporate gauge mediated supersymmetry breaking, where the mediation scale can be as low as 10-20 TeV. The minimal messenger model, however, is highly constrained. While lack of electroweak symmetry breaking rules out a large part of the parameter space, a small region exists where the fourth generation stau is tachyonic. General gauge mediation, with its broader set of boundary conditions, is better suited to accommodate the fourth generation.
Abstract:
For p x n complex orthogonal designs in k variables, where p is the number of channel uses and n is the number of transmit antennas, the maximal rate of the design tends asymptotically to 1/2 as n increases. But for such maximal-rate codes, the decoding delay p increases exponentially. To control the delay, if we impose the restriction p = n, i.e., consider only square designs, then the rate decreases exponentially as n increases. This necessitates the study of the maximal rate of designs with restrictions of the form p = n+1, p = n+2, p = n+3, etc. In this paper, we study the maximal rate of complex orthogonal designs under the restrictions p = n+1 and p = n+2, and derive upper and lower bounds for the maximal rate in both cases. Also, for the case p = n+1, we show that if the orthogonal design admits only the variables, their negatives, multiples of these by the square root of -1, and zeros as the entries of the matrix (other complex linear combinations are not allowed), then the maximal rate always equals the lower bound.
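As a concrete illustration of a square complex orthogonal design (the p = n case discussed above), the familiar 2 x 2 Alamouti design in k = 2 variables achieves rate k/p = 1; this example is standard background, not a construction from the paper.

```latex
% Alamouti design: p = n = 2 channel uses/antennas, k = 2 variables,
% rate k/p = 1. Columns are orthogonal for every choice of x_1, x_2:
G(x_1, x_2) =
\begin{pmatrix}
  x_1        & x_2 \\
 -x_2^{\ast} & x_1^{\ast}
\end{pmatrix},
\qquad
G^{\mathsf{H}} G = \left(|x_1|^2 + |x_2|^2\right) I_2 .
```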
Abstract:
The general method developed earlier by the writers for obtaining valid lower bound solutions for slabs under uniformly distributed load and supported along all edges is extended to slabs with a free edge. Lower bound solutions with the normal moment criterion are presented for six cases of orthotropically reinforced slabs, with one of the short edges free and the other three edges being any combination of fixed and simply supported conditions. The expressions for the moment field and collapse load are given for each slab. The lower bounds are compared with the corresponding upper bound values obtained from yield line theory with simple straight yield line modes of failure. They are also compared with Nielsen’s solutions available for two cases with isotropic reinforcement.
Abstract:
It is well known that n-length stabilizer quantum error correcting codes (QECCs) can be obtained via n-length classical error correcting codes (CECCs) over GF(4) that are additive and self-orthogonal with respect to the trace Hermitian inner product. But most CECCs have been studied with respect to the Euclidean inner product. In this paper, it is shown that n-length stabilizer QECCs can be constructed via n-length linear CECCs over GF(2) that are self-orthogonal with respect to the Euclidean inner product. This facilitates the use of the widely studied self-orthogonal CECCs to construct stabilizer QECCs. Moreover, classical binary self-orthogonal cyclic codes have been used to obtain stabilizer QECCs with guaranteed quantum error correcting capability. This is facilitated by the facts that (i) self-orthogonal binary cyclic codes are easily identified using the transform approach and (ii) for such codes, lower bounds on the minimum Hamming distance are known. Several explicit codes are constructed, including two pure MDS QECCs.
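Euclidean self-orthogonality of a binary code is the concrete hypothesis in this construction: every pair of generator rows must have even overlap, i.e. G G^T = 0 over GF(2). The snippet below is a minimal generic checker, not code from the paper.

```python
# Check Euclidean self-orthogonality of a binary linear code (sketch):
# the code C satisfies C ⊆ C^⊥ iff G @ G.T == 0 over GF(2).

import numpy as np

def is_self_orthogonal(G):
    G = np.asarray(G) % 2
    return not np.any((G @ G.T) % 2)

# Generator of the [7,3] binary simplex code: all rows have weight 4
# and pairwise even overlap, so the code is self-orthogonal.
G = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(is_self_orthogonal(G))  # True
```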
Abstract:
An edge dominating set for a graph G is a set D of edges such that each edge of G is in D or adjacent to at least one edge in D. This work studies deterministic distributed approximation algorithms for finding minimum-size edge dominating sets. The focus is on anonymous port-numbered networks: there are no unique identifiers, but a node of degree d can refer to its neighbours by integers 1, 2, ..., d. The present work shows that in the port-numbering model, edge dominating sets can be approximated as follows: in d-regular graphs, to within 4 − 6/(d + 1) for an odd d and to within 4 − 2/d for an even d; and in graphs with maximum degree Δ, to within 4 − 2/(Δ − 1) for an odd Δ and to within 4 − 2/Δ for an even Δ. These approximation ratios are tight for all values of d and Δ: there are matching lower bounds.
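As background for these ratios, a classical centralised fact is that any maximal matching is an edge dominating set whose size is at most twice the minimum. The greedy sketch below illustrates only this baseline, not the deterministic port-numbering algorithms of the paper.

```python
# Greedy maximal matching as an edge dominating set (classical baseline).
# Every edge shares an endpoint with some matched edge, so the matching
# dominates all edges; its size is at most twice the minimum EDS.

def greedy_edge_dominating_set(edges):
    matched = set()   # vertices already covered by the matching
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# 4-cycle a-b-c-d: greedy picks (a, b), then (c, d).
print(greedy_edge_dominating_set([("a", "b"), ("b", "c"),
                                  ("c", "d"), ("d", "a")]))
```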
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most out of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
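For reference, a max-min linear program in this line of work can be written in the following standard form (recalled here from the general literature; the exact normalisation used in the thesis is an assumption):

```latex
% Max-min linear program: maximise the worst of several linear
% objectives subject to linear packing constraints.
\begin{aligned}
\text{maximise}\quad & \min_{k} \; c_k^{\mathsf{T}} x \\
\text{subject to}\quad & A x \le \mathbf{1}, \\
                       & x \ge \mathbf{0},
\end{aligned}
% where A and each c_k are assumed to have nonnegative entries.
```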
Abstract:
This work is a case study of applying nonparametric statistical methods to corpus data. We show how to use ideas from permutation testing to answer linguistic questions related to morphological productivity and type richness. In particular, we study the use of the suffixes -ity and -ness in the 17th-century part of the Corpus of Early English Correspondence within the framework of historical sociolinguistics. Our hypothesis is that the productivity of -ity, as measured by type counts, is significantly low in letters written by women. To test such hypotheses, and to facilitate exploratory data analysis, we take the approach of computing accumulation curves for types and hapax legomena. We have developed an open source computer program which uses Monte Carlo sampling to compute the upper and lower bounds of these curves for one or more levels of statistical significance. By comparing the type accumulation from women’s letters with the bounds, we are able to confirm our hypothesis.
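The bound computation described here can be sketched generically: repeatedly shuffle the corpus tokens, record the running number of distinct types after each token, and take per-position quantiles across permutations as significance bounds. The snippet below is an illustrative reimplementation of that idea, not the authors' open source program.

```python
# Monte Carlo bounds for a type accumulation curve (sketch).
# Shuffle tokens, track distinct types seen after each token, and take
# per-position quantiles across permutations as upper/lower bounds.

import random

def accumulation_bounds(tokens, runs=1000, alpha=0.05):
    n = len(tokens)
    curves = []
    for _ in range(runs):
        perm = tokens[:]
        random.shuffle(perm)
        seen, curve = set(), []
        for tok in perm:
            seen.add(tok)
            curve.append(len(seen))
        curves.append(curve)
    lo, hi = [], []
    for i in range(n):
        col = sorted(c[i] for c in curves)
        lo.append(col[int(alpha / 2 * (runs - 1))])
        hi.append(col[int((1 - alpha / 2) * (runs - 1))])
    return lo, hi

# An observed curve falling below `lo` indicates significantly low
# type richness at that significance level.
lo, hi = accumulation_bounds("the cat sat on the mat the end".split())
```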