199 results for Lipschitzian bounds


Abstract:

In a typical sensor network scenario, the goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity with which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant and common decoder. In particular, we derive inner and outer bounds on the rate region for the random field to be estimated with a given mean distortion.
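As a point of reference for the kind of rate-region bound involved, the classical Berger-Tung-type inner bound for two encoders observing correlated sources Y_1, Y_2 is sketched below; this is standard textbook material and not necessarily the exact region derived in the work above.

    % Berger--Tung-type inner bound for two encoders (illustrative background,
    % not the exact region of the paper): rates (R_1, R_2) are achievable with
    % distortions (D_1, D_2) if there exist auxiliary variables satisfying the
    % Markov chain U_1 -- Y_1 -- Y_2 -- U_2 and reconstruction functions of
    % (U_1, U_2) meeting the distortion constraints, such that
    \begin{align}
      R_1       &\ge I(Y_1; U_1 \mid U_2), \\
      R_2       &\ge I(Y_2; U_2 \mid U_1), \\
      R_1 + R_2 &\ge I(Y_1, Y_2; U_1, U_2).
    \end{align}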

Abstract:

We consider the problem of providing mean delay and average throughput guarantees in random access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical local area wireless network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention control algorithm that adapts the attempt probability to the network load, and we obtain bounds on the packet transmission delay. The only information used is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
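A minimal sketch of load-based attempt-probability adaptation of the general kind described above; the adaptation rule, the constants, and the slotted success criterion below are hypothetical illustrations, not the paper's algorithm.

    # Sketch of load-adaptive contention control for a slotted CSMA-like model.
    # Hypothetical illustration: the rule and constants are not from the paper.
    import random

    def attempt_probability(num_devices, target_attempts_per_slot=1.0):
        """Scale the per-device attempt probability so the expected number of
        attempts per slot stays near the target, regardless of load."""
        return min(1.0, target_attempts_per_slot / max(1, num_devices))

    def simulate(num_devices, num_slots=100_000, seed=0):
        """Count successful slots (exactly one device attempts) under the rule."""
        rng = random.Random(seed)
        p = attempt_probability(num_devices)
        successes = 0
        for _ in range(num_slots):
            attempts = sum(rng.random() < p for _ in range(num_devices))
            successes += (attempts == 1)
        return successes / num_slots  # empirical per-slot success probability

    if __name__ == "__main__":
        for m in (5, 10, 20, 50):
            print(m, round(simulate(m), 3))  # stays near 1/e ~ 0.37 as m grows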

Abstract:

In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium, in which binary data is written sequentially and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium and derive lower and upper estimates of its capacity. The lower bound is obtained by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under a uniform input distribution. The upper bound is obtained by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
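A small simulation of the 1-D error mechanism described above, in which a written bit may erroneously take the value of the immediately preceding (written) bit; the error probability p is a hypothetical parameter used only for illustration.

    # Simulate the 1-D medium in which a stored bit can erroneously change to
    # the value of the previously written bit. Purely illustrative; p is a
    # hypothetical parameter, not one used in the paper.
    import random

    def write_to_medium(data_bits, p, seed=0):
        """Return the stored sequence: bit i is replaced by written bit i-1 w.p. p."""
        rng = random.Random(seed)
        stored = list(data_bits)
        for i in range(1, len(stored)):
            if rng.random() < p:
                stored[i] = data_bits[i - 1]
        return stored

    if __name__ == "__main__":
        x = [random.Random(1).randint(0, 1) for _ in range(20)]
        y = write_to_medium(x, p=0.2)
        print("written:", x)
        print("stored: ", y)  # errors are visible only where x[i] != x[i-1]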

Abstract:

Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as by the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper, we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis differs from previous work in the literature in the following aspects: (i) it does not assume the co-channel interferers to be identically distributed, which is consistent with realistic cellular layouts; (ii) it avoids the loose spectral efficiency bounds used in the literature, which only considered the worst-case and best-case locations of identical co-channel interferers; (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference; and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
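A compact Monte Carlo sketch contrasting the three schedulers on i.i.d. Rayleigh-faded SINRs. This simplified setup with identically distributed users, the SNR level, and the proportional fair averaging constant are assumptions made only for illustration; the paper's analysis explicitly avoids such simplifications.

    # Monte Carlo comparison of round-robin, greedy Max-SINR and proportional
    # fair (PF) scheduling on a toy downlink with K users and i.i.d. Rayleigh
    # fading (exponential power gains). Illustrative only.
    import math, random

    def run(K=8, mean_snr=5.0, slots=50_000, pf_window=100.0, seed=0):
        rng = random.Random(seed)
        avg_rate = [1e-6] * K          # PF throughput averages
        se = {"round_robin": 0.0, "max_sinr": 0.0, "prop_fair": 0.0}
        for t in range(slots):
            snr = [rng.expovariate(1.0 / mean_snr) for _ in range(K)]  # exponential power gain
            rate = [math.log2(1.0 + s) for s in snr]
            se["round_robin"] += rate[t % K]
            se["max_sinr"] += max(rate)
            k = max(range(K), key=lambda i: rate[i] / avg_rate[i])      # PF metric
            se["prop_fair"] += rate[k]
            for i in range(K):                                          # update PF averages
                served = rate[i] if i == k else 0.0
                avg_rate[i] = (1 - 1 / pf_window) * avg_rate[i] + served / pf_window
        return {name: total / slots for name, total in se.items()}

    if __name__ == "__main__":
        print(run())  # expect: max_sinr > prop_fair > round_robin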

Abstract:

Homogenization of partial differential equations is a relatively new area and has tremendous applications in various branches of the engineering sciences: materials science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea about the homogenization process, it lacked a good mathematical theory until the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the subject has flourished in various ways, both in applications and mathematically. This is not a full survey article, nor does it concentrate on a specialized problem. Indeed, we do indicate certain specialized problems of our interest, without much detail, but they are not the main theme of the article. I plan to give an introductory presentation with the aim of catering to a wider audience. We go through a few examples to understand the homogenization procedure from a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details about some of them. A possible definition of homogenization is that it is a process of understanding a heterogeneous (inhomogeneous) medium, where the heterogeneities are at the microscopic level, as in composite materials, by a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating inhomogeneous medium. We also present generalizations to nonlinear problems, porous media, and so on. Finally, we would like to look at the closely related issue of optimal bounds, which is itself an independent area of research.
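For readers new to the subject, the classical periodic model problem below illustrates what "obtaining a homogeneous description" means; this is standard textbook material, not a result specific to the article.

    % Classical periodic homogenization (illustrative textbook example).
    % Let A(y) be a Y-periodic, bounded, uniformly elliptic matrix and consider
    \begin{equation}
      -\,\mathrm{div}\!\left( A\!\left(\tfrac{x}{\varepsilon}\right) \nabla u_\varepsilon \right) = f
      \quad \text{in } \Omega, \qquad u_\varepsilon = 0 \ \text{on } \partial\Omega .
    \end{equation}
    % As \varepsilon \to 0, u_\varepsilon converges weakly in H^1_0(\Omega) to the
    % solution u of the homogenized problem
    \begin{equation}
      -\,\mathrm{div}\left( A^{*} \nabla u \right) = f \quad \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
    \end{equation}
    % with a constant effective matrix A^{*}. In one dimension, A^{*} is the
    % harmonic mean of the periodic coefficient, not its arithmetic mean:
    \begin{equation}
      A^{*} = \left( \frac{1}{|Y|} \int_Y \frac{\mathrm{d}y}{A(y)} \right)^{-1}.
    \end{equation}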

Abstract:

A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of the failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of the failure probability of a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of the failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
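A minimal sketch of the hybrid vertex-method/Monte Carlo idea described above, assuming a generic limit-state function g (negative means failure). The fuzzy variable, its alpha-cut interval, and the distribution below are hypothetical placeholders rather than the chloride-ingress corrosion model used in the paper.

    # Hybrid vertex-method / Monte Carlo sketch: fuzzy variables are reduced to
    # interval endpoints at a chosen alpha-cut, and for every combination of
    # endpoints (a "vertex") a Monte Carlo estimate of the failure probability
    # is computed; the min/max over vertices bound the failure probability at
    # that alpha-cut. Limit state and distributions are hypothetical.
    import itertools, random

    def failure_probability_bounds(g, fuzzy_intervals, random_sampler, n_mc=20_000, seed=0):
        rng = random.Random(seed)
        estimates = []
        for vertex in itertools.product(*fuzzy_intervals):   # all endpoint combinations
            failures = 0
            for _ in range(n_mc):
                x = random_sampler(rng)                       # sample the random variables
                failures += (g(vertex, x) < 0.0)              # failure event
            estimates.append(failures / n_mc)
        return min(estimates), max(estimates)                 # bounds at this alpha-cut

    if __name__ == "__main__":
        g = lambda v, x: v[0] - x                             # hypothetical capacity - demand
        intervals = [(4.0, 5.0)]                              # alpha-cut of the fuzzy capacity
        sampler = lambda rng: rng.gauss(3.0, 1.0)             # random demand
        print(failure_probability_bounds(g, intervals, sampler))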

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including the input real quantities, are exact, the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances or in any context, implies that the term error really means error bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems arising from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
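As a small concrete illustration of the emphasis on relative error bounds above, the snippet below propagates the quoted 0.005 per cent instrument bound through a product of two measured quantities using first-order propagation; the specific measured quantities are made up.

    # Relative error bounds: for a product of measured quantities, the relative
    # error bounds add to first order. The 0.005% figure is the instrument
    # bound quoted above; the example quantities are made up.
    def relative_error_bound_product(rel_bounds):
        """First-order relative error bound of a product of quantities."""
        return sum(rel_bounds)

    if __name__ == "__main__":
        instrument_bound = 0.005 / 100     # 0.005 per cent, as a fraction
        # e.g. power = voltage * current, each measured with the same instrument
        combined = relative_error_bound_product([instrument_bound, instrument_bound])
        print(f"combined relative error bound: {combined:.6%}")  # 0.010000%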

Abstract:

In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined: the maximum achievable diversity gain is shown to equal the min-cut, and the maximum multiplexing gain is shown to equal the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.
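For context on what a DMT curve and its end-points look like, the classical point-to-point result of Zheng and Tse is recalled below; it is background material, not the network DMT derived in the paper.

    % Classical point-to-point MIMO DMT (Zheng--Tse), shown only as background.
    % For an m x n MIMO channel with i.i.d. Rayleigh fading and sufficiently
    % long block length, the optimal tradeoff d^*(r) is the piecewise-linear
    % curve joining the points
    \begin{equation}
      d^{*}(r) = (m - r)(n - r), \qquad r = 0, 1, \dots, \min(m, n),
    \end{equation}
    % so its two end-points are the maximum diversity gain d^*(0) = mn and the
    % maximum multiplexing gain r_max = min(m, n), which play the same role
    % that the min-cut and the min-cut rank play for the networks above.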

Abstract:

The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter, for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and ensuring adherence to the power constraint. We derive novel insightful bounds and approximations that characterize the power control parameter and the throughput in closed-form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, uncoded and coded modulation, and single and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
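A sketch of how such a power-control parameter is typically pinned down numerically when no closed form is available, which is the computation that closed-form bounds of the kind derived above replace. The discrete rate set, the per-rate power law, and the constants below are illustrative assumptions, not the paper's exact policy; only the Gamma model of the Nakagami-m channel power gain is standard.

    # Numerically pinning down a power-control parameter s so that a discrete-
    # rate policy meets an average power constraint, via Monte Carlo plus
    # bisection. The policy (use the largest rate whose required power
    # (2^r - 1)/(c*gamma) stays under s) and all constants are hypothetical.
    import random

    RATES = [1, 2, 3, 4, 6]       # bits/symbol, hypothetical discrete rate set
    C_BER = 1.5                   # hypothetical BER-dependent constant

    def policy_power(gamma, s):
        """Power spent at channel power gain gamma; 0.0 means stay silent."""
        best = 0.0
        for r in RATES:
            p = (2 ** r - 1) / (C_BER * gamma)
            if p <= s:
                best = p          # required power of the largest feasible rate
        return best

    def average_power(s, m=2, mean_gain=1.0, n_mc=50_000, seed=0):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_mc):
            # Nakagami-m fading: channel power gain is Gamma(m, mean_gain/m)
            gamma = rng.gammavariate(m, mean_gain / m)
            total += policy_power(gamma, s)
        return total / n_mc

    def find_parameter(p_avg, lo=1e-3, hi=1e3, tol=1e-4):
        """Bisection on s: average_power(s) is nondecreasing in s."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if average_power(mid) > p_avg:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    if __name__ == "__main__":
        s_star = find_parameter(p_avg=2.0)
        print(s_star, average_power(s_star))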

Abstract:

Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a_1, b_1] x [a_2, b_2] x ... x [a_k, b_k]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes; i.e., each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, antisymmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P; i.e., S is the vertex set and two vertices are adjacent if and only if they are comparable in P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) != 1. It immediately follows that if P is a height-2 poset, then box(G(P)) <= dim(P) <= 2 box(G(P)), since the underlying comparability graph of a height-2 poset is a bipartite graph. The second result of the paper relates the boxicity of a graph G to a natural partial order associated with the extended double cover of G, denoted G_c: G_c is a bipartite graph with partite sets A and B, which are copies of V(G), such that corresponding to every u in V(G) there are two vertices u_A in A and u_B in B, and {u_A, v_B} is an edge in G_c if and only if either u = v or u is adjacent to v in G. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P_c) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds on poset dimension, such as dim(P) <= 2 treewidth(G(P)) + 4, since the boxicity of any graph is known to be at most its treewidth plus 2. In the other direction, using already known bounds on partial order dimension, we get the following: (1) the boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best-known upper bound of Delta^2 + 2; (2) there exist graphs with boxicity Omega(Delta log Delta), which disproves the conjecture that the boxicity of a graph is O(Delta); (3) there is no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP.
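A direct transcription of the boxicity definition above into a short Python check: two vertices are adjacent exactly when their boxes overlap in every coordinate. The example boxes are arbitrary.

    # Intersection graph of k-dimensional boxes, exactly as in the definition
    # of boxicity above: a box is a list of closed intervals, and two vertices
    # are adjacent iff their boxes overlap in every coordinate.
    def boxes_intersect(box_u, box_v):
        return all(a1 <= b2 and a2 <= b1 for (a1, b1), (a2, b2) in zip(box_u, box_v))

    def intersection_graph(boxes):
        n = len(boxes)
        return {(u, v) for u in range(n) for v in range(u + 1, n)
                if boxes_intersect(boxes[u], boxes[v])}

    if __name__ == "__main__":
        boxes = [[(0, 2), (0, 2)],    # vertex 0
                 [(1, 3), (1, 3)],    # vertex 1: intersects vertex 0
                 [(4, 5), (0, 5)]]    # vertex 2: disjoint from 0 and 1 in dimension 1
        print(intersection_graph(boxes))   # {(0, 1)}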

Abstract:

This paper reports the results of employing an artificial bee colony search algorithm for synthesizing a mutually coupled lumped-parameter ladder-network representation of a transformer winding, starting from its measured magnitude frequency response. The existing bee colony algorithm is suitably adapted by appropriately defining constraints, inequalities, and bounds to restrict the search space and thereby ensure synthesis of a nearly unique ladder network corresponding to each frequency response. Ensuring near-uniqueness while constructing the reference circuit (i.e., the representation of the healthy winding) is the objective. Furthermore, the synthesized circuits must exhibit physical realizability. The proposed method is easy to implement and time efficient, and the problems associated with supplying an initial guess in existing methods are circumvented. Experimental results are reported on two types of actual, single, isolated transformer windings (continuous disc and interleaved disc).
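A generic sketch of an artificial bee colony search with box bounds on the decision variables, to show the kind of bounded search being adapted above; the objective function, bounds, and all control parameters here are hypothetical, and none of the transformer-specific constraints are modelled.

    # Generic artificial bee colony (ABC) minimization with box bounds.
    # Objective, bounds and control parameters are hypothetical placeholders.
    import random

    def abc_minimize(f, bounds, n_sources=20, limit=30, iters=200, seed=0):
        rng = random.Random(seed)
        dim = len(bounds)
        clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
        rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]

        sources = [rand_point() for _ in range(n_sources)]
        costs = [f(x) for x in sources]
        trials = [0] * n_sources

        def neighbor(i):
            """Perturb one coordinate of source i relative to another source."""
            j = rng.randrange(dim)
            k = rng.choice([s for s in range(n_sources) if s != i])
            cand = list(sources[i])
            cand[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
            return clip(cand)

        def try_improve(i):
            cand = neighbor(i)
            c = f(cand)
            if c < costs[i]:
                sources[i], costs[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_sources):              # employed bee phase
                try_improve(i)
            fit = [1.0 / (1.0 + c) for c in costs]  # onlooker selection weights
            for _ in range(n_sources):              # onlooker bee phase
                i = rng.choices(range(n_sources), weights=fit)[0]
                try_improve(i)
            for i in range(n_sources):              # scout bee phase
                if trials[i] > limit:
                    sources[i] = rand_point()
                    costs[i], trials[i] = f(sources[i]), 0

        best = min(range(n_sources), key=lambda i: costs[i])
        return sources[best], costs[best]

    if __name__ == "__main__":
        sphere = lambda x: sum(v * v for v in x)            # toy objective
        print(abc_minimize(sphere, bounds=[(-5.0, 5.0)] * 4))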

Abstract:

This paper proposes a new approach for solving the state estimation problem. The approach is aimed at producing a robust estimator that rejects bad data, even when they are associated with leverage-point measurements. This is achieved by solving a sequence of Linear Programming (LP) problems. Optimization is carried out via a new algorithm that combines an "upper bound optimization technique" with "an improved algorithm for discrete linear approximation". In this formulation of the LP problem, in addition to the constraints corresponding to the measurement set, constraints corresponding to bounds on the state variables are also included, which makes the LP problem more effective in rejecting bad data, even when they are associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and on a 24-bus EHV equivalent system of the southern Indian grid are presented for illustrative purposes.
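A minimal sketch of one LP-based robust estimation step of the general kind described above: least-absolute-value (LAV) estimation of a linear measurement model, posed as a linear program with bounds on the state variables. It uses scipy.optimize.linprog and a made-up five-measurement example; it is not the authors' specific two-stage algorithm.

    # Least-absolute-value (LAV) state estimation posed as a linear program:
    #   minimize sum(u + v)  subject to  H x + u - v = z,  u, v >= 0,
    # with box bounds on the state variables x. Example data are made up.
    import numpy as np
    from scipy.optimize import linprog

    def lav_estimate(H, z, state_bounds):
        m, n = H.shape
        # decision vector: [x (n), u (m), v (m)]
        c = np.concatenate([np.zeros(n), np.ones(m), np.ones(m)])
        A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
        bounds = list(state_bounds) + [(0, None)] * (2 * m)
        res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds)
        return res.x[:n], res.x[n:n + m] - res.x[n + m:]   # estimate, residuals

    if __name__ == "__main__":
        H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        z = np.array([1.0, 2.0, 1.05, 1.95, 30.0])   # last measurement is gross bad data
        x_hat, r = lav_estimate(H, z, state_bounds=[(-10.0, 10.0)] * 2)
        print(x_hat)                                  # close to [1, 2]; the bad datum is rejected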

Abstract:

We associate a sheaf model to a class of Hilbert modules satisfying a natural finiteness condition. It is obtained as the dual to a linear system of Hermitian vector spaces (in the sense of Grothendieck). A refined notion of curvature is derived from this construction leading to a new unitary invariant for the Hilbert module. A division problem with bounds, originating in Douady's privilege, is related to this framework. A series of concrete computations illustrate the abstract concepts of the paper.


Abstract:

Given two independent Poisson point processes Phi^(1), Phi^(2) in R^d, the AB Poisson Boolean model is the graph with the points of Phi^(1) as vertices and with an edge between any pair of points for which the intersection of the balls of radius 2r centered at these points contains at least one point of Phi^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d >= 2 and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and tau*n in the unit cube. The AB random geometric graph is defined as above but with balls of radius r. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
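A small simulation of the AB random geometric graph defined above, on the unit square: points of the first Poisson process are the vertices, and two of them are joined when some point of the second process lies within distance r of both. The intensities and radius used are arbitrary choices for illustration.

    # AB random geometric graph on the unit square: vertices come from a
    # Poisson process of intensity n; two vertices are joined when at least one
    # point of an independent Poisson process of intensity tau*n lies within
    # distance r of both. Intensities and radius below are arbitrary.
    import math, random

    def rng_poisson(lam, rng):
        """Sample a Poisson(lam) count by inversion (adequate for moderate lam)."""
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= l:
                return k
            k += 1

    def poisson_points(intensity, rng):
        return [(rng.random(), rng.random()) for _ in range(rng_poisson(intensity, rng))]

    def ab_edges(vertices, witnesses, r):
        def close(p, q):
            return math.dist(p, q) <= r
        return {(i, j)
                for i in range(len(vertices)) for j in range(i + 1, len(vertices))
                if any(close(w, vertices[i]) and close(w, vertices[j]) for w in witnesses)}

    if __name__ == "__main__":
        rng = random.Random(0)
        n, tau, r = 100, 2.0, 0.12
        phi1 = poisson_points(n, rng)          # vertex process
        phi2 = poisson_points(tau * n, rng)    # witness process
        edges = ab_edges(phi1, phi2, r)
        print(len(phi1), "vertices,", len(edges), "AB edges")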