963 results for Running time


Relevance:

60.00%

Publisher:

Abstract:

Content Distribution Networks (CDNs) are widely used to distribute data to a large number of users. Traditionally, content is replicated across a number of surrogate servers, leading to high operational costs. In this context, Peer-to-Peer (P2P) CDNs have emerged as a viable alternative. An issue of concern in P2P networks is that of free riders, i.e., selfish peers who download files and leave without uploading anything in return. Free riding must be discouraged. In this paper, we propose a criterion, the Give-and-Take (G&T) criterion, that disallows free riders. Incorporating the G&T criterion in our model, we study a problem that arises naturally when a new peer enters the system: viz., the problem of downloading a 'universe' of segments, scattered among other peers, at low cost. We analyse this hard problem, and characterize the optimal download cost under the G&T criterion. We propose an optimal algorithm, and also a sub-optimal algorithm that is nearly optimal but runs much more quickly, providing an attractive balance between running time and performance. Finally, we compare the performance of our algorithms with that of a few existing P2P downloading strategies in use. We also study, for various existing and proposed algorithms, the computation time needed to prescribe the initial segment- and peer-selection strategy for a newly arrived peer, and quantify the cost-computation time trade-offs.

Relevance:

60.00%

Publisher:

Abstract:

We address the parameterized complexity of Max q-Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^{O(q)} algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #α(G))^{O(1)}, where #α(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^{l+o(l)} n^{O(1)} T_α, where T_α is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets, as is the case for split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_α is polynomial in n), since q <= l in all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
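As a point of reference for the parameterization by l, a naive baseline (not the paper's FPT algorithms) simply tries every l-vertex subset and checks q-colorability by backtracking. A minimal Python sketch, with illustrative function names:

```python
from itertools import combinations

def is_q_colorable(adj, verts, q):
    """Backtracking q-colouring of the subgraph induced by verts.
    adj maps each vertex to its set of neighbours."""
    verts = list(verts)
    colour = {}

    def assign(i):
        if i == len(verts):
            return True
        v = verts[i]
        for c in range(q):
            # Only neighbours inside the chosen subset carry a colour.
            if all(colour.get(u) != c for u in adj[v] if u in colour):
                colour[v] = c
                if assign(i + 1):
                    return True
                del colour[v]
        return False

    return assign(0)

def naive_l_vertex_q_colorable(adj, q, l):
    """Try every l-vertex subset; return one inducing a q-colourable
    subgraph, or None.  Exponential in n, unlike the FPT algorithms."""
    for subset in combinations(adj, l):
        if is_q_colorable(adj, subset, q):
            return set(subset)
    return None
```

On a triangle, for instance, a 2-colorable induced subgraph of two vertices exists, but none of three.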

Relevance:

60.00%

Publisher:

Abstract:

The problem of finding an optimal vertex cover in a graph is a classic NP-complete problem, and is a special case of the hitting set question. On the other hand, the hitting set problem, when asked in the context of induced geometric objects, often turns out to be exactly the vertex cover problem on restricted classes of graphs. In this work we explore a particular instance of such a phenomenon. We consider the problem of hitting all axis-parallel slabs induced by a point set P, and show that it is equivalent to the problem of finding a vertex cover on a graph whose edge set is the union of two Hamiltonian paths. We show the latter problem to be NP-complete, and also give an algorithm to find a vertex cover of size at most k, on graphs of maximum degree four, whose running time is 1.2637^k n^{O(1)}.
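The 1.2637^k n^{O(1)} algorithm is specialized to maximum degree four; the idea of parameterized branching is conveyed by the textbook 2^k bounded search tree for vertex cover, which branches on the two endpoints of an uncovered edge. A minimal sketch (not the authors' algorithm):

```python
def vertex_cover_at_most_k(edges, k):
    """Classic 2^k bounded search tree: any cover must contain an endpoint
    of every edge, so branch on the two endpoints of the first uncovered
    edge.  Returns a cover of size <= k, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for w in (u, v):
        remaining = [e for e in edges if w not in e]
        sub = vertex_cover_at_most_k(remaining, k - 1)
        if sub is not None:
            return sub | {w}
    return None
```

Smarter case analysis on low-degree vertices is what pushes the branching factor below 2, down to constants such as 1.2637.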

Relevance:

60.00%

Publisher:

Abstract:

In this paper we consider polynomial representability of functions defined over Z_{p^n}, where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that (i) answers the decision problem: to determine whether a given function over Z_{p^n} is polynomially representable or not, and (ii) finds the polynomial if it is polynomially representable. The previous characterizations given by Kempner (Trans. Am. Math. Soc. 22(2):240-266, 1921) and Carlitz (Acta Arith. 9(1):67-78, 1964) are existential in nature and only lead to an exhaustive search method, i.e., an algorithm with complexity exponential in the size of the input. Our characterization leads to an algorithm whose running time is linear in the size of the input. We also extend our result to the multivariate case.
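For intuition, the exhaustive-search method that the earlier characterizations lead to can be sketched directly for tiny moduli: enumerate candidate coefficient vectors and compare each against the function's value table. This is the exponential baseline the abstract contrasts with, not the paper's linear-time algorithm, and the names are illustrative:

```python
from itertools import product

def poly_eval(coeffs, x, m):
    """Evaluate c0 + c1*x + c2*x^2 + ... modulo m (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % m
    return acc

def find_representing_poly(table, m, max_deg):
    """Exhaustively search for a polynomial of degree <= max_deg that
    agrees with table on all of Z_m; return its coefficients or None.
    Feasible only for tiny m -- this is the exponential baseline."""
    for coeffs in product(range(m), repeat=max_deg + 1):
        if all(poly_eval(coeffs, x, m) == table[x] for x in range(m)):
            return coeffs
    return None
```

Over Z_4, for example, x² is trivially representable, while the indicator function of 0 is not: any polynomial with coefficients in Z_4 satisfies f(x+2) ≡ f(x) (mod 2), which the table [1, 0, 0, 0] violates at x = 0.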

Relevance:

60.00%

Publisher:

Abstract:

In this work, we study the well-known r-DIMENSIONAL k-MATCHING ((r,k)-DM) and r-SET k-PACKING ((r,k)-SP) problems. Given a universe U := U_1 ∪ ... ∪ U_r and an r-uniform family F ⊆ U_1 × ... × U_r, the (r,k)-DM problem asks if F admits a collection of k mutually disjoint sets. Given a universe U and an r-uniform family F ⊆ 2^U, the (r,k)-SP problem asks if F admits a collection of k mutually disjoint sets. We employ techniques based on dynamic programming and representative families. This leads to a deterministic algorithm with running time O(2.851^{(r-1)k} · |F| · n log² n · log W) for the weighted version of (r,k)-DM, where W is the maximum weight in the input, and a deterministic algorithm with running time O(2.851^{(r-0.5501)k} · |F| · n log² n · log W) for the weighted version of (r,k)-SP. Thus, we significantly improve the previous best known deterministic running times for (r,k)-DM and (r,k)-SP and the previous best known running times for their weighted versions. We rely on structural properties of (r,k)-DM and (r,k)-SP to develop algorithms that are faster than those obtainable by a standard use of representative sets. Incorporating the principle of iterative expansion, we obtain a better algorithm for (3,k)-DM, running in time O(2.004^{3k} · |F| · n log² n). We believe that this algorithm demonstrates an interesting application of representative families in conjunction with more traditional techniques. Furthermore, we present kernels of size O(e^r r (k-1)^r log W) for the weighted versions of (r,k)-DM and (r,k)-SP, improving the previous best known kernels of size O(r! r (k-1)^r log W) for these problems.
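To make the shared problem statement concrete, a naive disjoint-packing check (exponential in the worst case, and nothing like the representative-families machinery) can be sketched as:

```python
def disjoint_packing(sets, k, used=frozenset()):
    """Naive branching: do `sets` contain k mutually disjoint members?
    A baseline for the (r,k)-DM / (r,k)-SP feasibility question only."""
    if k == 0:
        return True
    for i, s in enumerate(sets):
        if used.isdisjoint(s):
            # Take s, then look for k-1 further disjoint sets to its right.
            if disjoint_packing(sets[i + 1:], k - 1, used | frozenset(s)):
                return True
    return False
```

For instance, the family {1,2,3}, {3,4,5}, {6,7,8}, {2,4,6} packs two mutually disjoint triples but not three.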

Relevance:

60.00%

Publisher:

Abstract:

The melting process of nickel nanowires is simulated using molecular dynamics with the quantum Sutton-Chen many-body force field. The wires studied were approximately circular in cross-section and periodic boundary conditions were applied along their length; the atoms were arranged initially in a face-centred cubic structure with the [0 0 1] direction parallel to the long axis of the wire. The size effects of the nanowires on the melting temperature are investigated. We find that, in the nanoscale regime, the melting temperatures of Ni nanowires are much lower than that of the bulk and vary linearly with the reciprocal of the nanowire diameter. When a nanowire is heated above the melting temperature, a neck begins to form and its diameter decreases rapidly with the equilibrated running time. Finally, the nanowire breaks, leading to the formation of spherical clusters. (C) 2004 Elsevier B.V. All rights reserved.
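The reported linear relationship between melting temperature and the reciprocal of the diameter amounts to fitting T_m(d) = a + b/d by least squares, with a approximating the bulk melting temperature. A minimal sketch on synthetic numbers (not the paper's data):

```python
def fit_melting_vs_inverse_diameter(diameters, temps):
    """Ordinary least-squares fit of T_m(d) = a + b / d.
    a estimates the bulk melting temperature; b < 0 captures the
    size-induced depression of T_m."""
    xs = [1.0 / d for d in diameters]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(temps) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, temps)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b
```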

Relevance:

60.00%

Publisher:

Abstract:

Theoretical and experimental studies of a kilowatt-class laser-diode-array-pumped solid-state heat capacity laser are reported, using Nd:YAG heat capacity lasers with a single slab and with two slabs in series. The laser output characteristics over a given operating time were calculated with a theoretical model of the heat capacity laser and verified experimentally. The crystal slabs used in the experiments all measured 59 mm × 40 mm × 4.5 mm; the average pump power was about 5.6 kW for the single slab and 11.2 kW for the two slabs in series, at a repetition rate of 1 kHz and a duty cycle of 20%. The fluctuation of the output pulse energy over a 1 s operating time was observed; with the single slab, the maximum single-pulse output energy was 1.3 J, and after 1 s the single-pulse output energy …

Relevance:

60.00%

Publisher:

Abstract:

A kilowatt diode-pumped solid state heat capacity laser is fabricated with a double-slab Nd:YAG. Using a theoretical model of the heat capacity laser's output characteristics, the relationships between output power, temperature and time are obtained. Each slab is 59 × 40 × 4.5 mm³ in size. The average pump power is 11.2 kW, the repetition rate is 1 kHz, and the duty cycle 20%. During the 1 s running time, the output energy of the laser fluctuates, with a maximal output energy of 2.06 J and a maximal average output power of 2.06 kW. At the end of the second, the output energy has declined to about 50% of its initial value. The thermal effects can be mitigated by cooling one slab with water. The experimental results are consistent with the calculated data.

Relevance:

60.00%

Publisher:

Abstract:

This work studies a technology for removing ammonia from landfill leachate by the physico-chemical process of air stripping, and characterizes the leachate by fractionation with MF and UF membranes. The parameters analysed in the air-stripping process were pH, air flow rate and operating time. A removal of 93.5% of the ammoniacal nitrogen was achieved in an operating time of 6 hours, with the pH adjusted to 11 and an air flow rate of 100 L/h. After ammoniacal nitrogen removal, the leachates were subjected to fractionation with microfiltration (MF) and ultrafiltration (UF) membranes, and ammonia removal, conductivity, COD, DOC, chloride and pH were investigated. Practically constant results were obtained as the leachate permeated the MF and UF membranes. In addition, toxicity tests and biological treatability assays were carried out with samples of raw leachate, treated leachate (low ammonia concentration) and leachates fractionated with MF and UF membranes. The biological treatability assays showed no significant removal of organic matter, and in the toxicity tests with Danio rerio, although toxicity decreased over the course of the experiments, the raw leachate, the ammonia-stripped leachate and the MF- and UF-fractionated leachates all remained highly toxic.

Relevance:

60.00%

Publisher:

Abstract:

Argon gas, with its simple atomic structure and favorable arc stability at low input power, was used as the propellant. The thruster, with a regeneratively cooled nozzle, was tested in a vacuum system capable of keeping the chamber pressure at about 10 Pa at a propellant feed rate of 5 slm. Arc current, arc voltage, thrust, nozzle temperature and propellant feed rate were measured in situ simultaneously. The effects of working parameters such as the propellant feed rate and arc current on thruster performance, mainly the produced thrust, specific impulse and thrust efficiency, were examined. The variation of the arc volt-ampere characteristics with running time and the effect of nozzle temperature on thruster behaviour are discussed.
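The figures of merit mentioned follow from the standard definitions: specific impulse Isp = F/(ṁ g₀) and thrust efficiency η = F²/(2 ṁ P). A minimal sketch with illustrative (not measured) numbers:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thruster_performance(thrust_n, mdot_kg_s, power_w):
    """Standard figures of merit for an electric thruster:
      specific impulse  Isp = F / (mdot * g0)       [s]
      thrust efficiency eta = F^2 / (2 * mdot * P)  [dimensionless]"""
    isp = thrust_n / (mdot_kg_s * G0)
    eta = thrust_n ** 2 / (2.0 * mdot_kg_s * power_w)
    return isp, eta
```

For example, 0.2 N of thrust at a mass flow of 2×10⁻⁵ kg/s and 2 kW of input power corresponds to an Isp of roughly 1020 s and an efficiency of 0.5.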

Relevance:

60.00%

Publisher:

Abstract:

To address the problem of determining the depth of asynchronous FIFOs in very-large-scale integrated circuit and system-on-chip design, a dynamic-parameter model of the FIFO is proposed based on its run-time behaviour; the model includes the FIFO saturation level, the data transfer rates at the write and read ports, and the overflow/underflow frequency. On the basis of this model, the relationship between the depth of an asynchronous FIFO and its dynamic parameters is analysed, and functional simulation is used to determine the FIFO depth required for data transfer between asynchronous modules in a system-on-chip. Analysis of typical examples shows that this method obtains the minimum FIFO depth while guaranteeing the system's data-communication performance, optimizing the use of system resources.
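The functional-simulation idea can be sketched in miniature: for a given pair of write/read schedules, the peak occupancy reached in a cycle-by-cycle simulation is the minimum depth that avoids overflow. This toy model assumes writes precede reads within a cycle and that reads of an empty FIFO stall, which is a simplification of real asynchronous clock-domain behaviour:

```python
def min_fifo_depth(write_schedule, read_schedule):
    """Cycle-accurate occupancy simulation.  Each list entry is the number
    of words written/read in that cycle; writes are applied before reads,
    and reading an empty FIFO stalls.  The peak occupancy reached is the
    minimum depth that avoids overflow for this traffic pattern."""
    occupancy = peak = 0
    for w, r in zip(write_schedule, read_schedule):
        occupancy += w
        occupancy = max(occupancy - r, 0)
        peak = max(peak, occupancy)
    return peak
```

A burst of 8 writes at one word per cycle, drained at one word every two cycles, needs a depth of 4.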

Relevance:

60.00%

Publisher:

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
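The flavour of such an analysis can be illustrated with a simple slotted-contention model (an assumption for illustration, not necessarily DIP's exact channel model): if each of n neighbours transmits in a slot independently with probability p, a slot is idle with probability (1-p)^n, so n can be recovered by inverting the observed idle fraction:

```python
import math
import random

def estimate_neighborhood_size(idle_fraction, p):
    """If each of n neighbours transmits in a slot independently with
    probability p, a slot is idle with probability (1 - p)^n, so
    n = ln(idle_fraction) / ln(1 - p)."""
    return math.log(idle_fraction) / math.log(1.0 - p)

def simulate_idle_fraction(n, p, slots, rng):
    """Fraction of slots in which none of the n neighbours transmits."""
    idle = sum(all(rng.random() >= p for _ in range(n)) for _ in range(slots))
    return idle / slots
```

No identifiers are exchanged: a node only needs to observe how often the channel is idle.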

Relevance:

60.00%

Publisher:

Abstract:

The problem of discovering frequent poly-regions (i.e. regions of high occurrence of a set of items or patterns of a given alphabet) in a sequence is studied, and three efficient approaches are proposed to solve it. The first one is entropy-based and applies a recursive segmentation technique that produces a set of candidate segments which may potentially lead to a poly-region. The key idea of the second approach is the use of a set of sliding windows over the sequence. Each sliding window covers a sequence segment and keeps a set of statistics that mainly include the number of occurrences of each item or pattern in that segment. Combining these statistics efficiently yields the complete set of poly-regions in the given sequence. The third approach applies a technique based on the majority vote, achieving linear running time with a minimal number of false negatives. After identifying the poly-regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a poly-region). An efficient algorithm for mining frequent arrangements of intervals is applied to the converted sequence to discover frequently occurring arrangements of poly-regions in different parts of DNA, including coding regions. The proposed algorithms are tested on various DNA sequences producing results of significant biological meaning.
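The second, sliding-window approach can be sketched for the single-window, single-item-set case: maintain the in-window occurrence count incrementally and report the window positions whose count reaches a threshold (function names are illustrative):

```python
from collections import deque

def poly_region_starts(seq, items, window, min_count):
    """Slide a fixed-size window over seq, maintaining the count of items
    of interest incrementally; return the start indices of windows holding
    at least min_count such items."""
    items = set(items)
    hits = deque()   # in-window positions of items of interest
    starts = []
    count = 0
    for i, ch in enumerate(seq):
        if ch in items:
            hits.append(i)
            count += 1
        while hits and hits[0] <= i - window:   # expire old positions
            hits.popleft()
            count -= 1
        if i >= window - 1 and count >= min_count:
            starts.append(i - window + 1)
    return starts
```

For example, in "AATAAGGGAA" the windows of length 4 starting at positions 0 and 1 each contain at least three A's, marking an A-rich poly-region.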

Relevance:

60.00%

Publisher:

Abstract:

Motivated by the goal of accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programmes directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.