961 results for Problem formulation


Relevance: 20.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
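
As a rough illustration of the access model, the success probability of a symmetric allocation can be computed from a hypergeometric tail. The sketch below assumes a file of unit size, a budget spread evenly over m of n nodes, and a collector reading a uniformly random r-subset of nodes; the parameter values are illustrative, not taken from the thesis.

    from math import comb, ceil

    def recovery_prob(n, r, budget, m):
        """P[success] when a budget (file size normalized to 1) is spread
        evenly over m of the n nodes and a collector reads a uniformly
        random r-subset: recovery succeeds iff at least ceil(m / budget)
        of the m nonempty nodes are accessed (a hypergeometric tail)."""
        need = ceil(m / budget)           # nonempty nodes required for recovery
        hits = sum(comb(m, k) * comb(n - m, r - k)
                   for k in range(need, min(m, r) + 1))
        return hits / comb(n, r)

    # Search over symmetric allocations for the best number of nonempty nodes:
    n, r, budget = 20, 10, 4.0
    best_m = max(range(1, n + 1), key=lambda m: recovery_prob(n, r, budget, m))
    print(best_m, recovery_prob(n, r, budget, best_m))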

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
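
A minimal Monte Carlo sketch of the deadline problem, assuming exponentially distributed times until the collector first meets each node (a common inter-meeting-time model; the rate, budget, and deadline below are illustrative, not from the thesis):

    import random
    from math import ceil

    def prob_recovery_by_deadline(m, budget, rate, deadline, trials=100_000):
        """Estimate P[recovery by deadline] when m mobile nodes each store
        budget/m, the collector first meets each node after an independent
        Exp(rate) time, and recovery needs ceil(m / budget) of the m nodes."""
        need = ceil(m / budget)
        hits = 0
        for _ in range(trials):
            met = sum(random.expovariate(rate) <= deadline for _ in range(m))
            hits += met >= need
        return hits / trials

    print(prob_recovery_by_deadline(m=10, budget=4.0, rate=0.5, deadline=1.0))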

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models with a limited number of erasures per coding window or per sliding window, and for models with erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; here the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability of any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
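
For intuition about the i.i.d. model, the decoding probability of an idealized benchmark code that recovers a message from any k of its n packets is a simple binomial tail; the thesis's time-invariant intrasession codes are a different and more structured construction, so this serves only as a point of reference:

    from math import comb

    def decode_prob(n, k, p):
        """P[decode] when each of n packets is erased i.i.d. with probability
        p and the message is recoverable from any k received packets."""
        return sum(comb(n, i) * (1 - p)**i * p**(n - i) for i in range(k, n + 1))

    # A larger message (larger k for fixed n) lowers the decoding probability:
    for k in (4, 6, 8):
        print(k, round(decode_prob(n=10, k=k, p=0.1), 4))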

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
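
A schematic of the backpressure decision rule (the names and data layout here are illustrative, not the paper's API): each node forwards the data object with the largest positive virtual-interest-packet backlog differential toward the corresponding neighbor.

    def backpressure_choice(vip_counts, neighbors, node):
        """Pick the (next_hop, object) pair with the largest positive VIP
        backlog differential at `node`; vip_counts[n][k] is the virtual
        interest packet queue for object k at node n."""
        best, best_diff = None, 0
        for nbr in neighbors[node]:
            for obj, backlog in vip_counts[node].items():
                diff = backlog - vip_counts[nbr].get(obj, 0)
                if diff > best_diff:
                    best, best_diff = (nbr, obj), diff
        return best  # None means no positive differential: hold the packets

    vip = {"A": {"video1": 7, "doc2": 3}, "B": {"video1": 2}, "C": {"doc2": 5}}
    print(backpressure_choice(vip, {"A": ["B", "C"]}, "A"))  # ('C', 'video1')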

Relevance: 20.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
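
As a toy instance of the convex-reformulation idea (far simpler than the Hoeffding setting treated in the thesis), the sharpest bound on P[X >= t] over all distributions on [0, 1] with a known mean becomes a linear program once the support is discretized; the grid size and moment values below are arbitrary:

    import numpy as np
    from scipy.optimize import linprog

    mu, t = 0.3, 0.8
    x = np.linspace(0.0, 1.0, 501)            # discretized support
    c = -(x >= t).astype(float)               # maximize P[X >= t]
    A_eq = np.vstack([np.ones_like(x), x])    # total mass = 1, mean = mu
    res = linprog(c, A_eq=A_eq, b_eq=[1.0, mu], bounds=(0, None))
    print("sharp bound:", -res.fun, " Markov bound:", mu / t)

Here the LP recovers the known extremal distribution (mass mu/t at t, the rest at 0), so the sharp bound coincides with Markov's inequality; richer information constraints are what make the general problem hard.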

Relevance: 20.00%

Abstract:

Blue-green algal blooms and surface scum formation are certainly not new phenomena. Many British and European authors have been faithfully describing the unmistakable symptoms of blue-green algal scums for over 800 years. There is no disputing that blue-green algal toxins are extremely harmful. Three quite separate categories of compound have been identified: neurotoxins, hepatotoxins and lipopolysaccharides. There is a popular association between blue-green algae and eutrophication. Certainly the main nuisance species of Microcystis, Anabaena and Aphanizomenon are rare in oligotrophic lakes and reservoirs. Several approaches have been proposed for the control of blue-green algae. A distinction is made between methods for removing algae already present (e.g. algicides, straw bales, viruses, parasitic fungi and herbivorous ciliates) and methods for averting an anticipated abundance in the future (phosphorus control, artificial circulation, etc.).

Relevance: 20.00%

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously; (iii) finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
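
For concreteness, a minimal lasso solver via iterative soft thresholding (a standard proximal-gradient method, not an algorithm specific to this dissertation; the problem sizes are illustrative):

    import numpy as np

    def ista_lasso(A, y, lam, iters=500):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft
        thresholding, with step size 1/L for L the Lipschitz constant
        of the smooth part's gradient."""
        L = np.linalg.norm(A, 2) ** 2         # squared spectral norm of A
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - y) / L     # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))        # 50 measurements, 200 unknowns
    x_true = np.zeros(200); x_true[:5] = 1.0  # 5-sparse signal
    x_hat = ista_lasso(A, A @ x_true, lam=0.1)
    print(np.argsort(-np.abs(x_hat))[:5])     # indices of the largest entries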

Relevance: 20.00%

Abstract:

Observations of individual weight, duration of development and production of different stages of Tropodiaptomus incognitus are presented. The study is based on data gathered from Lake Chad in 1968.

Relevance: 20.00%

Abstract:

In freshwater environments of modest size and without notable ecological structure, usually only one diaptomid species is present. When two or more diaptomid species occur in the same habitat, their body dimensions are generally distinctly different. There are only four known examples of the co-existence of Arctodiaptomus bacillifer (Koelb.) and Acanthodiaptomus denticornis (Wierz.), all in alpine lakes situated at higher altitudes. The article discusses the results of sampling in the summer of 1953 and the problem of the co-existence of Arctodiaptomus bacillifer, Acanthodiaptomus denticornis and Heterocope saliens.

Relevance: 20.00%

Abstract:

An attempt is made to provide a theoretical explanation of the effect of the positive column on the voltage-current characteristic of a glow or an arc discharge. Such theories have been developed before, and all are based on balancing the production and loss of charged particles and accounting for the energy supplied to the plasma by the applied electric field. Differences among the theories arise from the approximations and omissions made in selecting processes that affect the particle and energy balances. This work is primarily concerned with the deviation from the ambipolar description of the positive column caused by space charge, electron-ion volume recombination, and temperature inhomogeneities.

The presentation is divided into three parts, the first of which involves the derivation of the final macroscopic equations from kinetic theory. The final equations are obtained by taking the first three moments of the Boltzmann equation for each of the three species in the plasma. Although the method used and the equations obtained are not novel, the derivation is carried out in detail in order to appraise the validity of numerous approximations and to justify the use of data from other sources. The equations are applied to a molecular hydrogen discharge contained between parallel walls. The applied electric field is parallel to the walls, and the dependent variables (electron and ion flux to the walls, electron and ion densities, transverse electric field, and gas temperature) vary only in the direction perpendicular to the walls. The mathematical description is given by a sixth-order nonlinear two-point boundary value problem which contains the applied field as a parameter. The amount of neutral gas and its temperature at the walls are held fixed, and the relation between the applied field and the electron density at the center of the discharge is obtained in the process of solving the problem. This relation corresponds to that between current and voltage and is used to interpret the effect of space charge, recombination, and temperature inhomogeneities on the voltage-current characteristic of the discharge.

The complete solution of the equations is impractical both numerically and analytically, and in Part II the gas temperature is assumed uniform so as to focus on the combined effects of space charge and recombination. The terms representing these effects are treated as perturbations to equations that would otherwise describe the ambipolar situation. However, the term representing space charge is not negligible in a thin boundary layer or sheath near the walls, and consequently the perturbation problem is singular. Separate solutions must be obtained in the sheath and in the main region of the discharge, and the relation between the electron density and the applied field is not determined until these solutions are matched.

In Part III the electron and ion densities are assumed equal, and the complicated space-charge calculation is thereby replaced by the ambipolar description. Recombination and temperature inhomogeneities are both important at high values of the electron density. However, the formulation of the problem permits a comparison of the relative effects, and temperature inhomogeneities are shown to be important at lower values of the electron density than recombination. The equations are solved by a direct numerical integration and by treating the term representing temperature inhomogeneities as a perturbation.
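
To make the structure of such a calculation concrete, the sketch below solves a drastically simplified, purely illustrative stand-in for the Part III problem: an ambipolar column n'' + k^2 n - beta n^2 = 0 on half the gap, with the ionization eigenvalue k (a stand-in for the applied field) as the unknown parameter and the center density n0 swept to trace an analogue of the characteristic. None of the coefficients correspond to the thesis's hydrogen model.

    import numpy as np
    from scipy.integrate import solve_bvp

    beta = 0.5   # illustrative recombination strength

    def solve_column(n0):
        def rhs(x, y, p):
            k = p[0]
            return np.vstack([y[1], -(k**2) * y[0] + beta * y[0]**2])
        def bc(ya, yb, p):
            # symmetry at the center, zero density at the wall, fixed n(0)
            return np.array([ya[1], yb[0], ya[0] - n0])
        x = np.linspace(0, 1, 50)
        y0 = np.vstack([n0 * np.cos(np.pi * x / 2), -n0 * np.sin(np.pi * x / 2)])
        return solve_bvp(rhs, bc, x, y0, p=[np.pi / 2]).p[0]

    for n0 in (0.1, 1.0, 5.0):
        print(f"n0 = {n0}:  k = {solve_column(n0):.4f}")

As n0 grows, the quadratic loss term forces k upward, the qualitative signature of recombination losses in this toy model.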

The conclusions reached in the study are primarily concerned with the association of the relation between electron density and axial field with the voltage-current characteristic. It is known that the effect of space charge can account for the subnormal glow discharge and that the normal glow corresponds to a close approach to an ambipolar situation. The effect of temperature inhomogeneities helps explain the decreasing characteristic of the arc, and the effect of recombination is not expected to appear except at very high electron densities.

Relevance: 20.00%

Abstract:

An equation for the reflection which results when an expanding dielectric slab scatters normally incident plane electromagnetic waves is derived using the invariant imbedding concept. The equation is solved approximately and the character of the solution is investigated. Also, an equation for the radiation transmitted through such a slab is similarly obtained. An alternative formulation of the slab problem is presented which is applicable to the analogous problem in spherical geometry. The form of an equation for the modal reflections from a nonrelativistically expanding sphere is obtained and some salient features of the solution are described. In all cases the material is assumed to be a nondispersive, nonmagnetic dielectric whose rest frame properties are slowly varying.
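
The time-independent core of the invariant imbedding idea can be sketched for a static slab (the thesis's equation additionally carries the boundary motion). For u'' + k^2 n(x)^2 u = 0, one standard form of the imbedding equation for the reflection coefficient R of a slab of thickness L is dR/dL = 2ikR + (ik/2)(n(L)^2 - 1)(1 + R)^2 with R(0) = 0; sign and phase conventions vary across references, but |R| is convention-free and can be checked against the Airy formula for a uniform slab:

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 2.0
    n = lambda x: 1.5                          # uniform slab, for checking

    def rhs(L, R):
        return 2j * k * R + 0.5j * k * (n(L)**2 - 1) * (1 + R)**2

    sol = solve_ivp(rhs, [0.0, 1.0], [0j], rtol=1e-10, atol=1e-12)
    print("|R| imbedding:", abs(sol.y[0, -1]))

    # Airy formula for a uniform slab of index 1.5 and thickness 1:
    r, phase = (1 - 1.5) / (1 + 1.5), np.exp(2j * k * 1.5 * 1.0)
    print("|R| exact:    ", abs(r * (1 - phase) / (1 - r**2 * phase)))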

Relevance: 20.00%

Abstract:

The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
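
A toy one-dimensional illustration of the summation-rule idea (a schematic, not the paper's scheme): the total energy of an N-atom chain is approximated by a weighted sum over a few sampling atoms, and under an affine deformation every interior site energy is identical, so weights summing to the number of represented atoms reproduce the total almost exactly.

    import numpy as np

    N, strain = 1000, 0.02
    x = (1 + strain) * np.arange(N, dtype=float)     # affinely deformed chain

    def site_energy(i):
        """Half of each nearest-neighbour Lennard-Jones bond energy,
        assigned to each of the bond's two atoms."""
        e = 0.0
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                r = abs(x[j] - x[i])
                e += 0.5 * 4.0 * (r**-12 - r**-6)
        return e

    E_exact = sum(site_energy(i) for i in range(N))
    samples = np.arange(50, N - 50, 100)             # interior sampling atoms
    weight = N / len(samples)                        # atoms per sampling atom
    E_sampled = weight * sum(site_energy(s) for s in samples)
    print(f"exact {E_exact:.3f}  sampled {E_sampled:.3f}")

The small residual error comes entirely from the two free ends, the kind of surface effect the new summation rules are designed to handle.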

Relevance: 20.00%

Abstract:

No abstract.