Abstract:

One- and two-dimensional cellular automata which are known to be fault-tolerant are very complex. On the other hand, only very simple cellular automata have actually been proven to lack fault-tolerance, i.e., to be mixing. The latter either have large noise probability ε or belong to the small family of two-state nearest-neighbor monotonic rules which includes local majority voting. For a certain simple automaton L called the soldiers rule, this question has intrigued researchers for the last two decades, since L is clearly more robust than local voting: in the absence of noise, L eliminates any finite island of perturbation from an initial configuration of all 0's or all 1's. The same holds for K, a 4-state monotonic variant of L called two-line voting. We will prove that the probabilistic cellular automata Kε and Lε asymptotically lose all information about their initial state when subject to small, strongly biased noise. The mixing property trivially implies that the systems are ergodic. The finite-time information-retaining quality of a mixing system can be represented by its relaxation time Relax(⋅), which measures the time before the onset of significant information loss. This is known to grow as (1/ε)^c for noisy local voting. The impressive error-correction ability of L has prompted some researchers to conjecture that Relax(Lε) = 2^(c/ε). We prove the tight bound 2^(c1 log²(1/ε)) < Relax(Lε) < 2^(c2 log²(1/ε)) for a biased error model. The same holds for Kε. Moreover, the lower bound is independent of the bias assumption. The strong bias assumption makes it possible to apply sparsity/renormalization techniques, the main tools of our investigation, used earlier in the opposite context of proving fault-tolerance.
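
The simplest rule in the family discussed above, 1D nearest-neighbor majority voting under unbiased noise ε, can be simulated in a few lines. The sketch below is only a toy illustration of how a noisy monotone rule forgets an all-1's initial configuration; the soldiers rule L and two-line voting K are not specified here, and all function names and parameters are illustrative.

    import random

    def noisy_majority_step(config, eps):
        """One synchronous step of 1D nearest-neighbor majority voting,
        followed by an independent bit flip with probability eps (the noise)."""
        n = len(config)
        new = []
        for i in range(n):
            left, mid, right = config[(i - 1) % n], config[i], config[(i + 1) % n]
            bit = 1 if left + mid + right >= 2 else 0   # majority of the three neighbors
            if random.random() < eps:                   # unbiased noise: flip with prob. eps
                bit ^= 1
            new.append(bit)
        return new

    def remembered_fraction(n=200, eps=0.05, steps=500):
        """Start from all 1's and report how much of the initial bias survives;
        for a mixing rule this drifts toward 1/2 as the initial state is forgotten."""
        config = [1] * n
        for _ in range(steps):
            config = noisy_majority_step(config, eps)
        return sum(config) / n

    if __name__ == "__main__":
        random.seed(0)
        print(remembered_fraction())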

Abstract:

Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales (self-similarity). In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
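
The mechanism can be sketched as the superposition of ON/OFF sources whose ON periods (file transfers at unit rate) are drawn from a heavy-tailed Pareto distribution, examined through the variance of the aggregate at growing aggregation levels. This is a toy stand-in for the paper's transport-level TCP simulations; the distributions, parameters, and function names are assumptions for illustration.

    import random

    def pareto(alpha=1.2, k=10):
        """Heavy-tailed sample: P[X > x] = (k/x)^alpha for x >= k; alpha < 2 gives
        the infinite-variance regime the paper links to self-similarity."""
        return k / ((1.0 - random.random()) ** (1.0 / alpha))

    def on_off_source(slots, alpha=1.2):
        """One source alternating heavy-tailed ON periods (a file transfer at
        1 unit per slot) with exponentially distributed OFF (idle) periods."""
        load, t, on = [0] * slots, 0, True
        while t < slots:
            length = max(1, int(pareto(alpha))) if on else max(1, int(random.expovariate(0.1)))
            if on:
                for s in range(t, min(t + length, slots)):
                    load[s] = 1
            t += length
            on = not on
        return load

    def aggregate(sources, slots):
        """Superpose many ON/OFF sources into one per-slot traffic series."""
        total = [0] * slots
        for _ in range(sources):
            for s, x in enumerate(on_off_source(slots)):
                total[s] += x
        return total

    def block_variance(series, m):
        """Variance of block means at aggregation level m; for self-similar
        traffic this decays markedly slower than 1/m."""
        blocks = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
        mu = sum(blocks) / len(blocks)
        return sum((b - mu) ** 2 for b in blocks) / len(blocks)

    if __name__ == "__main__":
        random.seed(1)
        traffic = aggregate(sources=50, slots=20000)
        for m in (1, 10, 100, 1000):
            print(m, round(block_variance(traffic, m), 3))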

Abstract:

Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely-coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation, and is contrasted to other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
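
The paper's protocol is not given as pseudocode here; one plausible reading of "maximize the chances of finding a node that would satisfy incoming real-time tasks" is a best-fit style placement that keeps lightly loaded nodes in reserve, contrasted below with plain load balancing. The function names, capacity model, and demand units are hypothetical.

    def pick_node_load_profiling(node_loads, demand, capacity=1.0):
        """Among nodes that can still accept `demand` without exceeding `capacity`,
        take the most loaded one, so lightly loaded nodes remain available for
        future, more demanding real-time tasks (keeps a diverse availability profile)."""
        feasible = [(load, i) for i, load in enumerate(node_loads) if load + demand <= capacity]
        if not feasible:
            return None               # no node can meet this task's needs
        return max(feasible)[1]

    def pick_node_load_balancing(node_loads, demand, capacity=1.0):
        """Classic load balancing: always take the least loaded feasible node."""
        feasible = [(load, i) for i, load in enumerate(node_loads) if load + demand <= capacity]
        if not feasible:
            return None
        return min(feasible)[1]

    print(pick_node_load_profiling([0.2, 0.5, 0.7], 0.3))   # -> 2 (most loaded feasible node)
    print(pick_node_load_balancing([0.2, 0.5, 0.7], 0.3))   # -> 0 (least loaded node)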

Abstract:

This paper addresses the problem of analyzing the performance of WWW servers. The Web has experienced phenomenal growth and has become the most popular Internet application. As a consequence of its popularity, the Internet has suffered from various performance problems, such as network congestion and overloaded servers. These days, it is not uncommon to find servers refusing connections because they are overloaded. Performance has always been a key issue in the design and operation of on-line systems. With regard to the Internet, performance is also critical, because users want fast and easy access to all objects (i.e., documents, pictures, audio, and video) available on the net. Thus, it is important to understand WWW performance issues. This paper focuses on the performance analysis of a Web server. Using a synthetic benchmark (WebStone), we analyze three different Web server software packages running on top of a Windows NT platform and performing typical WWW tasks.
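
For flavor, a toy WebStone-like load generator: concurrent clients repeatedly fetch one object and the run reports aggregate throughput and mean latency. The URL, client count, and metrics below are illustrative assumptions, not the benchmark's actual workload mix.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def fetch(url):
        """Fetch one object and return (bytes received, per-request latency in seconds)."""
        start = time.time()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        return len(body), time.time() - start

    def run_load(url, clients=8, requests=200):
        """Drive the server with `clients` concurrent workers and report
        aggregate throughput and mean per-request latency."""
        start = time.time()
        with ThreadPoolExecutor(max_workers=clients) as pool:
            results = list(pool.map(fetch, [url] * requests))
        wall = time.time() - start
        total_bytes = sum(b for b, _ in results)
        mean_latency = sum(t for _, t in results) / len(results)
        print(f"throughput: {total_bytes / wall:.0f} bytes/s, "
              f"mean latency: {mean_latency * 1000:.1f} ms")

    if __name__ == "__main__":
        run_load("http://localhost:8080/index.html")   # hypothetical local test server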

Abstract:

We define a unification problem ^UP with the property that, given a pure lambda-term M, we can derive an instance Gamma(M) of ^UP from M such that Gamma(M) has a solution if and only if M is beta-strongly normalizable. There is a type discipline for pure lambda-terms that characterizes beta-strong normalization; this is the system of intersection types (without a "top" type that can be assigned to every lambda-term). In this report, we use a lean version LAMBDA of the usual system of intersection types. Hence, ^UP is also an appropriate unification problem to characterize typability of lambda-terms in LAMBDA. It also follows that ^UP is an undecidable problem, which can in turn be related to semi-unification and second-order unification (both known to be undecidable).
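
For contrast with the unification-based characterization above, a naive bounded reduction check on pure lambda-terms is sketched below. Halting of leftmost reduction within a step bound only witnesses the existence of a normal form, not beta-strong normalization (which is undecidable); the term encoding and the bound are illustrative assumptions.

    # Pure lambda-terms encoded as nested tuples:
    #   ('var', x)        a variable
    #   ('lam', x, body)  a lambda-abstraction
    #   ('app', f, a)     an application

    def subst(term, x, val):
        """Substitute val for x in term (no capture-avoiding renaming; adequate
        for the closed example terms below)."""
        kind = term[0]
        if kind == 'var':
            return val if term[1] == x else term
        if kind == 'lam':
            return term if term[1] == x else ('lam', term[1], subst(term[2], x, val))
        return ('app', subst(term[1], x, val), subst(term[2], x, val))

    def step(term):
        """Perform one leftmost beta step, or return None if term is in normal form."""
        kind = term[0]
        if kind == 'app':
            f, a = term[1], term[2]
            if f[0] == 'lam':
                return subst(f[2], f[1], a)
            left = step(f)
            if left is not None:
                return ('app', left, a)
            right = step(a)
            if right is not None:
                return ('app', f, right)
        elif kind == 'lam':
            body = step(term[2])
            if body is not None:
                return ('lam', term[1], body)
        return None

    def halts_within(term, bound=1000):
        """Does leftmost reduction reach a normal form within `bound` steps?
        Only a crude semi-decision, not a strong-normalization test."""
        for _ in range(bound):
            nxt = step(term)
            if nxt is None:
                return True
            term = nxt
        return False

    omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
    print(halts_within(('app', omega, omega)))                              # False: Omega loops
    print(halts_within(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))))  # True: reduces to y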

Abstract:

We generalize the well-known pebble game to infinite dags, and we use this generalization to give new and shorter proofs of results in different areas of computer science (as diverse as "logic of programs" and "formal language theory"). Our applications here include a proof of a theorem due to Salomaa, asserting the existence of a context-free language with infinite index, and a proof of a theorem due to Tiuryn and Erimbetov, asserting that unbounded memory increases the power of logics of programs. The original proofs by Salomaa, Tiuryn, and Erimbetov are fairly technical. The proofs by Tiuryn and Erimbetov also involve advanced techniques of model theory, namely, back-and-forth constructions based on a variant of Ehrenfeucht-Fraïssé games. By contrast, our proofs are not only shorter, but also elementary. All we need is essentially finite induction and, in the case of the Tiuryn-Erimbetov result, the compactness and completeness of first-order logic.
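
The rules of the classical finite black-pebble game are easy to state in code: a pebble may be placed on a node only if all its predecessors are pebbled, any pebble may be removed, and the cost of a strategy is the peak number of pebbles on the dag. The sketch below merely replays and checks a strategy on a tiny example dag; it does not reproduce the infinite-dag generalization used in the paper.

    def peak_pebbles(dag, moves):
        """Replay a black-pebble strategy on a finite dag and return the peak
        number of pebbles in use; raises if a placement violates the rule.
        dag: dict node -> list of predecessors; moves: ('place', v) or ('remove', v)."""
        pebbled, peak = set(), 0
        for op, v in moves:
            if op == 'place':
                if not all(p in pebbled for p in dag[v]):
                    raise ValueError(f"illegal move: a predecessor of {v} is unpebbled")
                pebbled.add(v)
            else:
                pebbled.discard(v)
            peak = max(peak, len(pebbled))
        return peak

    # A four-node "diamond": s feeds a and b, which both feed t.
    diamond = {'s': [], 'a': ['s'], 'b': ['s'], 't': ['a', 'b']}
    strategy = [('place', 's'), ('place', 'a'), ('place', 'b'),
                ('remove', 's'), ('place', 't')]
    print(peak_pebbles(diamond, strategy))   # 3 pebbles suffice for this dag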

Abstract:

If every lambda-abstraction in a lambda-term M binds at most one variable occurrence, then M is said to be "linear". Many questions about linear lambda-terms are relatively easy to answer, e.g. they are all beta-strongly normalizing and all are simply typable. We extend the syntax of the standard lambda-calculus L to a non-standard lambda-calculus L^ satisfying a linearity condition generalizing the notion in the standard case. Specifically, in L^ a subterm Q of a term M can be applied to several subterms R1,...,Rk in parallel, which we write as (Q . R1 ∧ ... ∧ Rk). The appropriate notion of beta-reduction beta^ for the calculus L^ is such that, if Q is the lambda-abstraction (λx.P) with m ≥ 0 bound occurrences of x, the reduction can be carried out provided k = max(m,1). Every M in L^ is thus beta^-SN. We relate standard beta-reduction and non-standard beta^-reduction in several different ways, and draw several consequences, e.g. a new simple proof of the fact that a standard term M is beta-SN iff M can be assigned a so-called "intersection" type ("top" type disallowed).
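
The linearity condition on standard terms is directly checkable. The sketch below uses an illustrative tuple encoding of terms; it covers only the standard calculus L, not the non-standard calculus L^.

    # Standard lambda-terms as nested tuples: ('var', x), ('lam', x, body), ('app', f, a)

    def occurrences(term, x):
        """Count the free occurrences of variable x in term."""
        kind = term[0]
        if kind == 'var':
            return 1 if term[1] == x else 0
        if kind == 'lam':
            return 0 if term[1] == x else occurrences(term[2], x)
        return occurrences(term[1], x) + occurrences(term[2], x)

    def is_linear(term):
        """True if every lambda-abstraction binds at most one occurrence of its variable."""
        kind = term[0]
        if kind == 'var':
            return True
        if kind == 'lam':
            return occurrences(term[2], term[1]) <= 1 and is_linear(term[2])
        return is_linear(term[1]) and is_linear(term[2])

    identity = ('lam', 'x', ('var', 'x'))                     # binds x once: linear
    dup = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # binds x twice: not linear
    print(is_linear(identity), is_linear(dup))                 # True False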

Abstract:

System F is the well-known polymorphically-typed λ-calculus with universal quantifiers ("∀"). F+η is System F extended with the eta rule, which says that if term M can be given type τ and M η-reduces to N, then N can also be given the type τ. Adding the eta rule to System F is equivalent to adding the subsumption rule using the subtyping ("containment") relation that Mitchell defined and axiomatized [Mit88]. The subsumption rule says that if M can be given type τ and τ is a subtype of type σ, then M can be given type σ. Mitchell's subtyping relation involves no extensions to the syntax of types, i.e., no bounded polymorphism and no supertype of all types, and is thus unrelated to the system F≤ ("F-sub"). Typability for F+η is the problem of determining for any term M whether there is any type τ that can be given to it using the type inference rules of F+η. Typability has been proven undecidable for System F [Wel94] (without the eta rule), but the decidability of typability has been an open problem for F+η. Mitchell's subtyping relation has recently been proven undecidable [TU95, Wel95b], implying the undecidability of "type checking" for F+η. This paper reduces the problem of subtyping to the problem of typability for F+η, thus proving the undecidability of typability. The proof methods are similar in outline to those used to prove the undecidability of typability for System F, but the fine details differ greatly.
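
The eta rule mentioned above acts on terms as the contraction λx.(M x) → M when x is not free in M. The sketch below shows that single contraction step on an illustrative tuple encoding of terms; the encoding and names are assumptions, not from the paper.

    # Terms as nested tuples: ('var', x), ('lam', x, body), ('app', f, a)

    def free_in(x, term):
        """Is x a free variable of term?"""
        kind = term[0]
        if kind == 'var':
            return term[1] == x
        if kind == 'lam':
            return term[1] != x and free_in(x, term[2])
        return free_in(x, term[1]) or free_in(x, term[2])

    def eta_step(term):
        """One top-level eta contraction:  lam x. (M x)  ->  M,  if x is not free in M."""
        if (term[0] == 'lam' and term[2][0] == 'app'
                and term[2][2] == ('var', term[1])
                and not free_in(term[1], term[2][1])):
            return term[2][1]
        return term

    print(eta_step(('lam', 'x', ('app', ('var', 'f'), ('var', 'x')))))   # -> ('var', 'f')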