937 results for Data dissemination and sharing


Relevance:

100.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
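
To make the combinatorial structure concrete, the sketch below computes the recovery probability of a symmetric allocation under one simple instance of the access model: a unit-size object, a budget T spread evenly over m of n nodes, and a collector that reads a uniformly random r-subset of the nodes, with MDS coding so that any accessed set holding at least one unit of data suffices. The parameters n, r, m, and T are illustrative choices, not values from the thesis, which treats more general allocation and access models.

```python
from math import comb, ceil

def recovery_prob(n, r, m, T):
    """Probability that a collector reading a uniformly random r-subset of n
    nodes recovers a unit-size object when a budget T is spread evenly over
    m nodes (each nonempty node stores T/m). With MDS coding, recovery needs
    at least ceil(m/T) of the m nonempty nodes."""
    k_min = ceil(m / T)                     # nonempty nodes needed: k*(T/m) >= 1
    if k_min > min(m, r):
        return 0.0
    total = comb(n, r)
    hits = sum(comb(m, k) * comb(n - m, r - k)
               for k in range(k_min, min(m, r) + 1))
    return hits / total

# Example: budget T = 3 over n = 10 nodes, collector reads r = 4 nodes.
for m in (3, 5, 10):                        # sparse, intermediate, maximal spreading
    print(m, round(recovery_prob(10, 4, m, 3), 4))
```

In this example, maximal spreading (m = 10) gives certain recovery, while the intermediate spreading m = 5 does worse than m = 3, hinting at the nonintuitive structure of the optimum noted above.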

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
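
As an illustration of the two delay metrics, the following Monte Carlo sketch assumes exponentially distributed contact times between the collector and each storage node, which is a common mobility abstraction and not necessarily the model used in the thesis; all parameter values are made up for the example.

```python
import random
from math import ceil

def recovery_delays(m, T, rate=1.0, trials=20000):
    """Monte Carlo sample of the recovery delay when a budget T is spread
    evenly over m mobile nodes and the collector meets each node after an
    independent Exp(rate) contact time (illustrative mobility model).
    With MDS coding, recovery completes at the k_min-th useful contact,
    where k_min = ceil(m / T)."""
    k_min = ceil(m / T)
    delays = []
    for _ in range(trials):
        contacts = sorted(random.expovariate(rate) for _ in range(m))
        delays.append(contacts[k_min - 1])  # time of the k_min-th contact
    return delays

deadline = 1.0
for m in (3, 5, 10):
    d = recovery_delays(m, T=3)
    p_by_deadline = sum(t <= deadline for t in d) / len(d)
    mean_delay = sum(d) / len(d)
    print(m, round(p_by_deadline, 3), round(mean_delay, 3))
```

Comparing the two printed columns for different m shows how the allocation that maximizes the deadline probability need not minimize the expected delay.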

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models with a limited number of erasures per coding window, a limited number of erasures per sliding window, or erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
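
A minimal sketch of the i.i.d. erasure setting, under simplifying assumptions that are not the thesis's actual code constructions or bounds: each message must be decoded within d packets of its creation, each unit-capacity packet is shared evenly by the d messages whose windows contain it, and the per-message symbols are MDS-coded, so a message of size s (in packet units) is decodable once ceil(s*d) of its d packets arrive.

```python
from math import comb, ceil

def decode_prob(d, s, p):
    """Decoding probability for one message under an illustrative time-invariant
    intrasession scheme: decoding delay of d packets, each packet split evenly
    among the d active messages, packets erased i.i.d. with probability p.
    The message (size s, 0 < s <= 1, in packet units) is decodable iff at least
    ceil(s*d) of its d packets arrive."""
    k_min = ceil(s * d)
    return sum(comb(d, k) * (1 - p) ** k * p ** (d - k)
               for k in range(k_min, d + 1))

# Example: delay of d = 8 packets, erasure probability p = 0.1
for s in (0.5, 0.7, 0.9):
    print(s, round(decode_prob(8, s, 0.1), 4))
```

The printout shows the basic trade-off studied in the thesis: pushing the message size toward the link capacity sharply reduces the probability of decoding by the deadline.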

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
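
For reference, the baseline's cache replacement policy is plain LRU; the sketch below is a generic implementation of that baseline, not the paper's simulator code or the proposed backpressure/VIP policy.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used content store, as used by the baseline
    policy (shortest-path forwarding + LRU replacement) in the comparison."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()           # content name -> data, ordered by recency

    def get(self, name):
        if name not in self.store:
            return None                      # cache miss: forward the Interest upstream
        self.store.move_to_end(name)         # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used item
```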

Relevance:

100.00%

Publisher:

Abstract:

Commercially available software packages for IBM PC-compatibles are evaluated for use in data acquisition and processing work. Moss Landing Marine Laboratories (MLML) has acquired computers since 1978 for shipboard data acquisition (i.e., CTD, radiometric, etc.) and data processing. Hewlett-Packard desktops were used first, followed by a transition to DEC VAXstations, with software developed mostly by the author and others at MLML (Broenkow and Reaves, 1993; Feinholz and Broenkow, 1993; Broenkow et al., 1993). IBM PCs were at first very slow and limited in available software, so they were not used in the early days. Improved technology, such as higher-speed microprocessors and a wide range of commercially available software, makes the use of PCs more reasonable today. MLML is making a transition toward using PCs for data acquisition and processing. The advantages are portability and the availability of outside support.

Relevance:

100.00%

Publisher:

Abstract:

Defining types of seafloor substrate and relating them to the distribution of fish and invertebrates is an important but difficult goal. An examination of the processing steps of a commercial acoustic-analysis software program, as well as the data values produced by its proprietary first-echo measurements, revealed potential benefits and drawbacks for distinguishing acoustically distinct seafloor substrates. The positive aspects were convenient processing steps such as gain adjustment, accurate bottom picking, ease of excluding bad data, and the ability to average across successive pings in order to increase the signal-to-noise ratio. A noteworthy drawback in the processing was the potential for accidental inclusion of a second echo as if it were part of the first echo. Detailed examination of the echogram measurements quantified the amount of collinearity, revealed the lack of standardization (subtraction of the mean, division by the standard deviation) before principal components analysis (PCA), and showed correlations of individual echogram measurements with depth and seafloor slope. Despite the facility of the software, these previously unknown processing pitfalls and echogram measurement characteristics may have created data artifacts that generated user-derived substrate classifications rather than actual seafloor substrate types.
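
The missing standardization step is straightforward to apply before PCA. The sketch below z-scores each measurement column and then runs PCA via an SVD; the function and the synthetic data are generic illustrations, not the commercial software's processing chain.

```python
import numpy as np

def standardized_pca(X, n_components=2):
    """Z-score each measurement (column) before PCA so that high-variance
    measurements do not dominate the components. X is an
    (n_pings, n_measurements) array; returns component scores and loadings."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)
    Z = (X - mu) / sigma                        # subtract mean, divide by std dev
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components].T
    return scores, loadings

# Example with synthetic data standing in for echogram measurements
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6)) * [1, 10, 100, 1, 5, 50]   # very different scales
scores, loadings = standardized_pca(X)
print(scores.shape, loadings.shape)
```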

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a new, compact derivation of state-space formulae for the so-called discretisation-based solution of the H∞ sampled-data control problem. Our approach is based on the established technique of continuous-time lifting, which is used to isometrically map the continuous-time, linear, periodically time-varying, sampled-data problem to a discrete-time, linear, time-invariant problem. State-space formulae are derived for the equivalent discrete-time problem by solving a set of two-point boundary-value problems. The formulae accommodate a direct feed-through term from the disturbance inputs to the controlled outputs of the original plant and are simple, requiring the computation of only a single matrix exponential. It is also shown that the resultant formulae can easily be restructured to give a numerically robust algorithm for computing the state-space matrices. © 1997 Elsevier Science Ltd. All rights reserved.
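
To convey the flavour of obtaining discretised state-space matrices from a single matrix exponential, the sketch below applies the standard block-matrix (Van Loan) construction for zero-order-hold discretisation; it is only an illustration of that idea, not the paper's H∞ sampled-data formulae.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, h):
    """Zero-order-hold discretisation of dx/dt = A x + B u over a sample
    period h, computed from a single matrix exponential of a block matrix."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * h)
    Ad = E[:n, :n]          # state transition matrix exp(A h)
    Bd = E[:n, n:]          # integral of exp(A s) B ds over [0, h]
    return Ad, Bd

# Example: double integrator sampled at h = 0.1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = discretize(A, B, 0.1)
print(Ad)   # [[1, 0.1], [0, 1]]
print(Bd)   # [[0.005], [0.1]]
```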