293 results for Cataloging of technical reports.


Relevance:

100.00%

Abstract:

A system is described that tracks moving objects in a video dataset so as to extract a representation of the objects' 3D trajectories. The system then finds hierarchical clusters of similar trajectories in the video dataset. Objects' motion trajectories are extracted via an extended Kalman filter (EKF) formulation that provides each object's 3D trajectory up to a constant factor. To increase accuracy when occlusions occur, multiple tracking hypotheses are followed. For trajectory-based clustering and retrieval, a modified version of edit distance, called the longest common subsequence (LCSS), is employed. Similarities are computed between projections of trajectories onto the coordinate axes. Trajectories are then grouped according to these similarities, using an agglomerative clustering algorithm. To check the validity of the approach, experiments were performed using real data.
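
For illustration, the following is a minimal sketch of an LCSS-style similarity between two 1D trajectory projections, in the spirit of the measure named above; the matching threshold eps and the use of a single coordinate projection per trajectory are assumptions for illustration, not details taken from the report.

```python
# Minimal sketch: LCSS similarity between two 1D trajectory projections.
# Two samples "match" if they differ by less than eps (an assumed threshold).
def lcss(a, b, eps=0.5):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCSS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(a, b, eps=0.5):
    # Normalize by the shorter trajectory, as is common for LCSS measures.
    return lcss(a, b, eps) / min(len(a), len(b))

# Example: two similar trajectories (x-axis projections) and one outlier.
t1 = [0.0, 1.0, 2.1, 3.0, 4.2]
t2 = [0.1, 1.2, 2.0, 3.3, 4.0]
t3 = [9.0, 7.5, 6.0, 4.5, 3.0]
print(similarity(t1, t2))  # close to 1.0
print(similarity(t1, t3))  # much smaller
```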

Relevance:

100.00%

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate time series of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial-of-service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.
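
For background, the core computation of a PCA-based subspace detector can be sketched in a few lines; the shape of the traffic matrix, the choice of k, and the mean-plus-3-sigma threshold below are illustrative assumptions (the literature uses a Q-statistic threshold), not parameters from this paper.

```python
import numpy as np

def subspace_anomalies(X, k=2, threshold=None):
    """Flag anomalous time bins in a (time x links) traffic matrix X.

    Each time bin is projected onto the residual subspace (everything
    orthogonal to the top-k principal components of the link traffic);
    bins with unusually large residual energy are flagged."""
    Xc = X - X.mean(axis=0)                  # center each link's series
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                             # top-k principal directions
    residual = Xc - Xc @ P @ P.T             # traffic unexplained by the normal subspace
    spe = (residual ** 2).sum(axis=1)        # squared prediction error per bin
    if threshold is None:
        threshold = spe.mean() + 3 * spe.std()   # crude stand-in for a Q-statistic
    return np.where(spe > threshold)[0]

# Synthetic example: a shared diurnal pattern across 20 links, plus noise,
# with a small volume anomaly injected on three links at time bin 150.
rng = np.random.default_rng(0)
t = np.arange(200)
pattern = 1 + np.sin(2 * np.pi * t / 50)
loads = rng.uniform(20, 60, size=20)
X = np.outer(pattern, loads) + rng.normal(0, 5, size=(200, 20))
X[150, :3] += 30
print(subspace_anomalies(X))                 # should flag bin 150
```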

Relevance:

100.00%

Abstract:

The Science of Network Service Composition has clearly emerged as one of the grand themes driving many of our research questions in the networking field today [NeXtworking 2003]. This driving force stems from the rise of sophisticated applications and new networking paradigms. By "service composition" we mean that the performance and correctness properties local to the various constituent components of a service can be readily composed into global (end-to-end) properties without re-analyzing any of the constituent components, either in isolation or as part of the whole composite service. The set of laws that would govern such composition is what will constitute that new science of composition. The combined heterogeneity and dynamic, open nature of network systems makes composition quite challenging, and thus programming network services has been largely inaccessible to the average user. We identify (and outline) a research agenda in which we aim to develop a specification language that is expressive enough to describe the different components of a network service, and that includes type hierarchies, inspired by the type systems of general-purpose programming languages, which enable the safe composition of software components. We envision this new science of composition to be built upon several theories (e.g., control theory, game theory, network calculus, percolation theory, economics, queuing theory). In essence, different theories may provide different languages by which certain properties of system components can be expressed and composed into larger systems. We then seek to lift these lower-level specifications to a higher level by abstracting away details that are irrelevant for safe composition at the higher level, thus making theories scalable and useful to the average user. In this paper we focus on services built upon an overlay management architecture, and we use control theory and QoS theory as example theories from which we lift up compositional specifications.
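
To make the general idea of type-guided safe composition concrete, here is a minimal sketch of my own (an illustration only, not the specification language this paper proposes): each component advertises the properties it requires and guarantees, and composition is rejected unless one stage's guarantees satisfy the next stage's requirements.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    requires: frozenset   # properties the component assumes of its input
    provides: frozenset   # properties it guarantees of its output

def compose(first: Component, second: Component) -> Component:
    """Compose two components, checking property compatibility.

    The composite's end-to-end guarantees are derived from the parts
    without re-analyzing their internals, which is the point of a
    compositional rule."""
    missing = second.requires - first.provides
    if missing:
        raise TypeError(f"cannot compose: {second.name} needs {set(missing)}")
    return Component(f"{first.name};{second.name}", first.requires, second.provides)

# Hypothetical properties for an overlay service pipeline.
shaper = Component("shaper", frozenset({"connected"}), frozenset({"bounded-rate"}))
scheduler = Component("scheduler", frozenset({"bounded-rate"}), frozenset({"bounded-delay"}))
pipeline = compose(shaper, scheduler)   # ok: bounded-rate flows through
print(pipeline.provides)                # {'bounded-delay'}
compose(scheduler, shaper)              # raises TypeError: shaper needs {'connected'}
```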

Relevance:

100.00%

Abstract:

We demonstrate that if two probability distributions D and E of sufficiently small min-entropy have statistical difference ε, then the direct-product distributions D^l and E^l have statistical difference at least roughly ε√l, provided that l is sufficiently small, smaller than roughly 1/ε^{4/3}. Previously known bounds did not work for few repetitions l, requiring l > 1/ε^2.
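
The roughly ε√l growth can be checked numerically on small examples; the sketch below uses a pair of two-point (biased-coin) distributions chosen for illustration, not taken from the report, and computes the statistical difference of their direct products by enumeration.

```python
from itertools import product

def stat_diff(D, E):
    """Statistical (total variation) difference between two distributions,
    given as dicts mapping outcomes to probabilities."""
    support = set(D) | set(E)
    return 0.5 * sum(abs(D.get(x, 0.0) - E.get(x, 0.0)) for x in support)

def direct_product(D, l):
    """D^l: the distribution of l independent samples from D."""
    out = {}
    for xs in product(D, repeat=l):
        p = 1.0
        for x in xs:
            p *= D[x]
        out[xs] = p
    return out

eps = 0.05
D = {0: 0.5 + eps, 1: 0.5 - eps}
E = {0: 0.5, 1: 0.5}              # stat_diff(D, E) == eps
for l in (1, 2, 4, 8, 16):
    print(l, stat_diff(direct_product(D, l), direct_product(E, l)))
    # grows roughly like eps * sqrt(l) in this range of l
```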

Relevance:

100.00%

Abstract:

Two polymorphic types σ and τ are said to be bicoercible if there is a coercion from σ to τ and conversely. We give a complete equational axiomatization of bicoercible types and prove that the relation of bicoercibility is decidable.

Relevance:

100.00%

Abstract:

This is an addendum to our technical report BUCS TR-94-014 of December 19, 1994. It clarifies some statements, adds information on some related research, includes a comparison with research by de Groote, and fixes two minor mistakes in a proof.

Relevance:

100.00%

Abstract:

There are now several proofs of the stability of Toom's example of a two-dimensional stable cellular automaton, and of its application to fault-tolerant computation. Simon and Berman simplified and strengthened Toom's original proof; the present report is a simplified exposition of their proof.
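
For readers unfamiliar with the construction, Toom's rule itself is simple to state and simulate: each cell is replaced by the majority of itself and its north and east neighbors. The sketch below (with an assumed toroidal grid and optional noise parameter) illustrates how a finite island of errors is eroded; it is an illustration, not part of the proof.

```python
import numpy as np

def toom_step(grid, noise=0.0, rng=None):
    """One synchronous step of Toom's NEC rule on a toroidal 0/1 grid:
    each cell becomes the majority of itself, its north neighbor, and its
    east neighbor; with probability `noise` the result is flipped."""
    north = np.roll(grid, 1, axis=0)     # cell above (row i-1)
    east = np.roll(grid, -1, axis=1)     # cell to the right (column j+1)
    new = (grid + north + east >= 2).astype(int)   # majority of the three
    if noise > 0 and rng is not None:
        flips = rng.random(grid.shape) < noise
        new = np.where(flips, 1 - new, new)
    return new

# Start from all zeros with a square island of ones; without noise the
# island is eroded from its north-east corner and disappears in time
# linear in its diameter.
grid = np.zeros((32, 32), dtype=int)
grid[10:20, 10:20] = 1
for _ in range(40):
    grid = toom_step(grid)
print(grid.sum())   # 0: the perturbation has been eliminated
```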

Relevance:

100.00%

Abstract:

The explosion of WWW traffic necessitates an accurate picture of WWW use, and in particular requires a good understanding of client requests for WWW documents. To address this need, we have collected traces of actual executions of NCSA Mosaic, reflecting over half a million user requests for WWW documents. In this paper we describe the methods we used to collect our traces, and the formats of the collected data. Next, we present a descriptive statistical summary of the traces we collected, which identifies a number of trends and reference patterns in WWW use. In particular, we show that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, the distribution of user requests for documents, and the number of references to documents as a function of their overall rank in popularity (Zipf's law). Finally, we show how the power-law distributions derived from our traces can be used to guide system designers interested in caching WWW documents.
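A Zipf-style check of the kind described above is a small computation: rank documents by request count and fit a line in log-log space. The sketch below is an illustration of that general technique, not the analysis code of this paper; the synthetic workload stands in for a list of per-request document names from a trace.

```python
from collections import Counter
import numpy as np

def zipf_exponent(requests):
    """Fit log(count) = a - b*log(rank) over the rank-frequency table of a
    request stream and return the slope b; b near 1 is the classic
    Zipf's-law shape."""
    counts = np.array(sorted(Counter(requests).values(), reverse=True), float)
    ranks = np.arange(1, len(counts) + 1)
    b, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -b

# Synthetic workload: 100,000 requests over 1,000 documents whose
# popularity follows a 1/rank law.
rng = np.random.default_rng(1)
p = 1.0 / np.arange(1, 1001)
p /= p.sum()
requests = rng.choice(1000, size=100_000, p=p)
print(zipf_exponent(requests))   # close to 1, the Zipf's-law exponent
```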

Relevance:

100.00%

Abstract:

Consider a network of processors (sites) in which each site x has a finite set N(x) of neighbors. There is a transition function f that, for each site x, computes the next state ξ(x) from the states of the sites in N(x). But these transitions (updates) are applied in arbitrary order, one or many at a time. If the state of site x at time t is η(x, t), then let us define the sequence ζ(x, 0), ζ(x, 1), ... by taking the sequence η(x, 0), η(x, 1), ... and deleting each repetition, i.e., each element equal to the preceding one. The function f is said to have invariant histories if the sequence ζ(x, i) (while it lasts, in case it is finite) depends only on the initial configuration, not on the order of updates. This paper shows that though the invariant-history property is typically undecidable, there is a useful simple sufficient condition, called commutativity: for any configuration and any pair x, y of neighbors, if updating would change both ξ(x) and ξ(y), then the result of updating first x and then y is the same as the result of doing this in the reverse order. This fact is derivable from known results on the confluence of term-rewriting systems, but the self-contained proof given here may still be worthwhile.
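
Order-independence of histories is easy to observe experimentally. The sketch below replays random update orders of a toy rule on a small ring and compares the resulting deleted-repetition histories; the ring topology and the spreading rule are illustrative assumptions of mine, not examples from the paper.

```python
import random

# Toy "spreading" rule on a 6-site ring: a site becomes 1 once it or either
# neighbor is 1. Updates are monotone (0 -> 1 only), so the rule satisfies
# the commutativity condition, and its deleted-repetition histories should
# not depend on the update order.
def update(state, x):
    n = len(state)
    new = max(state[x], state[(x - 1) % n], state[(x + 1) % n])
    return state[:x] + [new] + state[x + 1:]

def history(initial, order):
    """Per-site state sequences with repetitions deleted (the zeta sequences)."""
    state = list(initial)
    seqs = [[s] for s in state]
    for x in order:
        state = update(state, x)
        if state[x] != seqs[x][-1]:
            seqs[x].append(state[x])
    return seqs

initial = [0, 0, 1, 0, 0, 0]
base = history(initial, list(range(6)) * 10)     # round-robin updates
for _ in range(100):
    # Ten random full passes, so every site is updated to convergence.
    order = [x for _ in range(10) for x in random.sample(range(6), 6)]
    assert history(initial, order) == base       # same zeta sequence at every site
print("histories agree across 100 random update orders")
```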

Relevance:

100.00%

Abstract:

Mitchell defined and axiomatized a subtyping relationship (also known as containment, coercibility, or subsumption) over the types of System F (with "→" and "∀"). This subtyping relationship is quite simple and does not involve bounded quantification. Tiuryn and Urzyczyn quite recently proved this subtyping relationship to be undecidable. This paper supplies a new undecidability proof for this subtyping relationship. First, a new syntax-directed axiomatization of the subtyping relationship is defined. Then, this axiomatization is used to prove a reduction from the undecidable problem of semi-unification to subtyping. The undecidability of subtyping implies the undecidability of type checking for System F extended with Mitchell's subtyping, also known as "F plus eta".
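For orientation, two representative containment axioms as they are standardly presented in the literature on Mitchell's system are instantiation and distribution of the quantifier over the arrow; they are shown here for context only, and the syntax-directed axiomatization developed in this paper differs in form.

```latex
% Instantiation: a universal type is contained in each of its instances.
\forall t.\,\sigma \;\leq\; \sigma[\tau/t]
% Distribution of the quantifier over the arrow type.
\forall t.\,(\sigma \to \tau) \;\leq\; (\forall t.\,\sigma) \to (\forall t.\,\tau)
```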

Relevance:

100.00%

Abstract:

The World Wide Web (WWW or Web) is growing rapidly on the Internet. Web users want fast response time and easy access to an enormous variety of information across the world. Thus, performance is becoming a central issue in the Web. Fractals have been used to study fluctuating phenomena in many different disciplines, from the distribution of galaxies in astronomy to complex physiological control systems. The Web is also a complex, irregular, and random system. In this paper, we look at document reference patterns at Internet Web servers and use fractal-based models to understand aspects (e.g., caching schemes) that affect Web performance.
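
One standard fractal diagnostic that can be applied to a reference stream (an illustration of the general technique, not necessarily one of the models used in this paper) is a box-counting estimate over the set of instants at which a document is referenced:

```python
import numpy as np

def box_counting_dimension(ref_times, total_time, scales=(3, 9, 27, 81, 243)):
    """Estimate a box-counting dimension of a set of reference instants.

    For each scale, divide [0, total_time) into that many equal boxes and
    count the boxes containing at least one reference; the dimension is
    the slope of log(count) versus log(scale)."""
    counts = []
    for s in scales:
        boxes = np.unique((np.asarray(ref_times) * s / total_time).astype(int))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Example: a bursty, Cantor-like reference pattern versus a uniform one.
cantor = [t / 3**5 for t in range(3**5) if '1' not in np.base_repr(t, 3)]
print(box_counting_dimension(cantor, 1.0))   # near log(2)/log(3) ~ 0.63
uniform = np.linspace(0, 1, 200, endpoint=False)
print(box_counting_dimension(uniform, 1.0))  # close to 1
```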

Relevance:

100.00%

Abstract:

The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
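
As a rough illustration of the client-based logging idea, here is a sketch under assumptions of my own (the record layout and a per-page LSN scheme are hypothetical); the thesis's actual algorithms additionally handle fine-granularity locking, multiple clients updating one page, and complex crashes.

```python
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    lsn: int              # client-local log sequence number
    page_id: int
    offset: int
    after_image: bytes    # redo information for this update

@dataclass
class ClientLog:
    """Write-ahead log kept on the client's local disk (modeled in memory)."""
    records: list = field(default_factory=list)
    next_lsn: int = 0

    def append(self, page_id, offset, after_image):
        rec = LogRecord(self.next_lsn, page_id, offset, after_image)
        self.records.append(rec)
        self.next_lsn += 1
        return rec.lsn

def redo(pages, page_lsns, log):
    """Replay a client's log against the server's pages after a crash.

    Only records newer than a page's persistent LSN are applied, so
    updates already on the disk version of the page are not redone."""
    for rec in log.records:
        if rec.lsn > page_lsns.get(rec.page_id, -1):
            page = bytearray(pages[rec.page_id])
            page[rec.offset:rec.offset + len(rec.after_image)] = rec.after_image
            pages[rec.page_id] = bytes(page)
            page_lsns[rec.page_id] = rec.lsn
```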

Relevance:

100.00%

Abstract:

One- and two-dimensional cellular automata which are known to be fault-tolerant are very complex. On the other hand, only very simple cellular automata have actually been proven to lack fault tolerance, i.e., to be mixing. The latter either have large noise probability ε or belong to the small family of two-state nearest-neighbor monotonic rules, which includes local majority voting. For a certain simple automaton L called the soldiers rule, this problem has intrigued researchers for the last two decades, since L is clearly more robust than local voting: in the absence of noise, L eliminates any finite island of perturbation from an initial configuration of all 0's or all 1's. The same holds for K, a 4-state monotonic variant of L called two-line voting. We prove that the probabilistic cellular automata Kε and Lε asymptotically lose all information about their initial state when subject to small, strongly biased noise. The mixing property trivially implies that the systems are ergodic. The finite-time information-retaining quality of a mixing system can be represented by its relaxation time Relax(⋅), which measures the time before the onset of significant information loss. This is known to grow as (1/ε)^c for noisy local voting. The impressive error-correction ability of L has prompted some researchers to conjecture that Relax(Lε) = 2^(c/ε). We prove the tight bound 2^(c₁ log²(1/ε)) < Relax(Lε) < 2^(c₂ log²(1/ε)) for a biased error model; the same holds for Kε. Moreover, the lower bound is independent of the bias assumption. The strong-bias assumption makes it possible to apply sparsity/renormalization techniques, the main tools of our investigation, used earlier in the opposite context of proving fault tolerance.

Relevance:

100.00%

Abstract:

This paper addresses the problem of analyzing the performance of WWW servers. The Web has experienced phenomenal growth and has become the most popular Internet application. As a consequence of its popularity, the Internet has suffered from various performance problems, such as network congestion and overloaded servers. These days, it is not uncommon to find servers refusing connections because they are overloaded. Performance has always been a key issue in the design and operation of on-line systems. With regard to the Internet, performance is also critical, because users want fast and easy access to all objects (i.e., documents, pictures, audio, and video) available on the net. Thus, it is important to understand WWW performance issues. This paper focuses on the performance analysis of a Web server. Using a synthetic benchmark (WebStone), we analyze three different Web server software packages running on top of a Windows NT platform and performing typical WWW tasks.

Relevance:

100.00%

Abstract:

If every lambda-abstraction in a lambda-term M binds at most one variable occurrence, then M is said to be "linear". Many questions about linear lambda-terms are relatively easy to answer, e.g., they are all beta-strongly normalizing and all simply typable. We extend the syntax of the standard lambda-calculus L to a non-standard lambda-calculus L^ satisfying a linearity condition that generalizes the notion in the standard case. Specifically, in L^ a subterm Q of a term M can be applied to several subterms R1, ..., Rk in parallel, which we write as (Q . R1 ∧ ... ∧ Rk). The appropriate notion of beta-reduction beta^ for the calculus L^ is such that, if Q is the lambda-abstraction (λx.P) with m ≥ 0 bound occurrences of x, the reduction can be carried out provided k = max(m, 1). Every M in L^ is thus beta^-SN. We relate standard beta-reduction and non-standard beta^-reduction in several different ways, and draw several consequences, e.g., a new simple proof of the fact that a standard term M is beta-SN iff M can be assigned a so-called "intersection" type ("top" type disallowed).
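
To make the opening definition concrete, here is a minimal sketch of my own, using a standard nested-tuple encoding of lambda-terms, that checks whether every abstraction in a term binds at most one occurrence of its variable:

```python
# Terms: a variable is a string; ("lam", x, body) is an abstraction;
# ("app", f, a) is an application.
def occurrences(term, x):
    """Count free occurrences of variable x in term."""
    if isinstance(term, str):
        return 1 if term == x else 0
    if term[0] == "lam":
        return 0 if term[1] == x else occurrences(term[2], x)
    return occurrences(term[1], x) + occurrences(term[2], x)

def is_linear(term):
    """True iff every lambda-abstraction binds at most one occurrence."""
    if isinstance(term, str):
        return True
    if term[0] == "lam":
        return occurrences(term[2], term[1]) <= 1 and is_linear(term[2])
    return is_linear(term[1]) and is_linear(term[2])

identity = ("lam", "x", "x")                   # linear
dup = ("lam", "x", ("app", "x", "x"))          # binds two occurrences
print(is_linear(identity), is_linear(dup))     # True False
```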