904 results for Lipschitzian bounds
Abstract:
A single source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, in order to correct network errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval which depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network error correcting codes (CNECCs) designed for unit-delay networks through simulations on an example network under a probabilistic error model.
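As a rough illustration of the source-side encoding step described above, the sketch below implements a generic rate-1/2 binary convolutional encoder (generators 7, 5 in octal). It is only a toy: the actual CNECC construction is matched to the network code, the error patterns and the required free distance, none of which are reproduced here.

```python
# Minimal sketch: a rate-1/2 binary convolutional encoder (generators 7, 5 in octal).
# This only illustrates the source-side convolutional encoding step; the CNECC of the
# paper is additionally matched to the network code and the target error patterns.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Encode a bit sequence with a feedforward rate-1/2 convolutional code over GF(2)."""
    state = [0] * (len(g1) - 1)           # shift-register contents (past inputs)
    out = []
    for b in bits:
        window = [b] + state              # current input followed by past inputs
        out.append(sum(x & t for x, t in zip(window, g1)) % 2)
        out.append(sum(x & t for x, t in zip(window, g2)) % 2)
        state = window[:-1]               # shift the register
    return out

if __name__ == "__main__":
    print(conv_encode([1, 0, 1, 1]))      # two coded bits per information bit
```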
Abstract:
In this paper, we consider the robust design of a MIMO-relay precoder and receive filters for the destination nodes in a non-regenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a single MIMO-relay node. The source and destination nodes are single-antenna nodes, whereas the MIMO-relay node has multiple transmit and multiple receive antennas. The channel state information (CSI) available at the MIMO-relay node for precoding purposes is assumed to be imperfect. We assume that the norms of the errors in the CSI are upper-bounded and that the MIMO-relay node knows these bounds. We consider the robust design of the MIMO-relay precoder and receive filter based on the minimization of the total MIMO-relay transmit power with constraints on the mean square error (MSE) at the destination nodes. We show that this design problem can be solved through an alternating sequence of minimization and worst-case analysis problems. The minimization problem is formulated as a convex optimization problem that can be solved efficiently using interior-point methods. The worst-case analysis problem can be solved analytically using an approximation for the MSEs at the destination nodes. We demonstrate the robust performance of the proposed design through simulations.
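The alternating structure of the design can be illustrated on a toy surrogate: a power-like objective is minimized subject to a distortion constraint that must hold for the worst error in a norm ball. The data A, b and the thresholds delta, eps below are invented for illustration and do not correspond to the paper's MIMO-relay MSE expressions; cvxpy merely plays the role of the interior-point solver.

```python
# Sketch of the alternating "minimize, then worst-case" pattern on a toy surrogate
# (invented data; not the paper's MIMO-relay model).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                       # toy "channel" data (invented)
b = A @ rng.standard_normal(3) + 0.1 * rng.standard_normal(4)
delta, eps = 0.1, 1.0                                 # error-ball radius and MSE-like threshold

x_val = np.zeros(3)
for _ in range(10):
    # Worst-case step: the error in the ball ||e|| <= delta that maximizes ||A x - b - e||.
    resid = A @ x_val - b
    e_worst = -delta * resid / max(np.linalg.norm(resid), 1e-12)

    # Minimization step: convex (second-order cone) problem for the current worst case.
    x = cp.Variable(3)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x)),
                      [cp.norm(A @ x - b - e_worst) <= np.sqrt(eps)])
    prob.solve()
    x_val = x.value

print("transmit-power-like objective:", float(np.sum(x_val ** 2)))
```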
Abstract:
In this paper, we address the fundamental question concerning the limits on the network lifetime in sensor networks when multiple base stations (BSs) are deployed as data sinks. Specifically, we derive upper bounds on the network lifetime when multiple BSs are employed, and obtain optimum locations of the base stations that maximise these lifetime bounds. For the case of two BSs, we jointly optimise the BS locations by maximising the lifetime bound using a genetic algorithm. Joint optimisation for a larger number of BSs becomes prohibitively complex. We therefore propose a suboptimal approach for a larger number of BSs, the Individually Optimum method, in which the location of each additional BS is optimised given the optimum locations of the previous BSs. This method remains attractive as the number of BSs grows, at the cost of a small loss in accuracy. We show that the accuracy degradation is quite small for the case of three BSs.
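A minimal sketch of the genetic-algorithm step for two base stations is given below. The fitness function is a crude stand-in (it penalizes the worst node-to-nearest-BS squared distance); the paper's actual lifetime bound, energy model and GA parameters are not reproduced.

```python
# Toy genetic algorithm for placing two base stations: each chromosome holds the
# (x, y) coordinates of both BSs, and the fitness penalizes the worst per-node
# transmission energy (distance^2 to the nearest BS). Energy model and GA settings
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(200, 2))        # sensor locations in a 100x100 field

def fitness(chrom):
    bs = chrom.reshape(2, 2)                       # two candidate BS locations
    d2 = ((nodes[:, None, :] - bs[None, :, :]) ** 2).sum(-1).min(axis=1)
    return -d2.max()                               # minimize the worst node's energy

pop = rng.uniform(0, 100, size=(60, 4))            # population of candidate placements
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-30:]]        # keep the fitter half
    cut = rng.integers(1, 4, size=30)              # one-point crossover
    kids = np.array([np.r_[parents[rng.integers(30)][:c],
                           parents[rng.integers(30)][c:]] for c in cut])
    kids += rng.normal(0, 2.0, kids.shape)         # mutation
    pop = np.vstack([parents, np.clip(kids, 0, 100)])

best = pop[np.argmax([fitness(c) for c in pop])]
print("BS locations:", best.reshape(2, 2))
```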
Abstract:
Consider a sequence of closed, orientable surfaces of fixed genus g in a Riemannian manifold M with uniform upper bounds on the norm of mean curvature and area. We show that on passing to a subsequence, we can choose parametrisations of the surfaces by inclusion maps from a fixed surface of the same genus so that the distance functions corresponding to the pullback metrics converge to a pseudo-metric and the inclusion maps converge to a Lipschitz map. We show further that the limiting pseudo-metric has fractal dimension two. As a corollary, we obtain a purely geometric result. Namely, we show that bounds on the mean curvature, area and genus of a surface F ⊂ M, together with bounds on the geometry of M, give an upper bound on the diameter of F. Our proof is modelled on Gromov's compactness theorem for J-holomorphic curves.
Abstract:
The purpose of this paper is to present exergy charts for carbon dioxide (CO2) based on the new fundamental equation of state and the results of a thermodynamic analysis of conventional and trans-critical vapour compression refrigeration cycles using the data thereof. The calculation scheme is anchored on the Mathematica platform. There exist upper and lower bounds for the high cycle pressure for a given set of evaporating and pre-throttling temperatures. The maximum possible exergetic efficiency for each case was determined. Empirical correlations for exergetic efficiency and COP, valid in the range of temperatures studied here, are obtained. The exergy losses have been quantified.
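For context, one common definition of the second-law (exergetic) efficiency of a refrigeration cycle is recalled below; the exact working definitions used in the paper may differ.

```latex
% One common second-law (exergetic) efficiency for a refrigeration cycle; the
% paper's exact working definitions may differ.
\[
  \eta_{\mathrm{ex}} = \frac{\mathrm{COP}}{\mathrm{COP}_{\mathrm{rev}}},
  \qquad
  \mathrm{COP}_{\mathrm{rev}} = \frac{T_L}{T_0 - T_L}
\]
```

Here T_L denotes the refrigerated-space (evaporating) temperature and T_0 the ambient (dead-state) temperature.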
Abstract:
The boxicity of a graph H, denoted by box(H), is the minimum integer k such that H is an intersection graph of axis-parallel k-dimensional boxes in R^k. In this paper we show that for a line graph G of a multigraph, box(G) = O(Δ(G) log₂ log₂ Δ(G)), where Δ(G) denotes the maximum degree of G. Since G is a line graph, Δ(G) ≤ 2(χ(G) − 1), where χ(G) denotes the chromatic number of G, and therefore box(G) = O(χ(G) log₂ log₂ χ(G)). For the d-dimensional hypercube Q_d, we prove that box(Q_d) = Ω(log₂ log₂ d). The question of finding a nontrivial lower bound for box(Q_d) was left open by Chandran and Sivadasan in [L. Sunil Chandran, Naveen Sivadasan, The cubicity of hypercube graphs, Discrete Mathematics 308 (23) (2008) 5795–5800]. The above results are consequences of bounds that we obtain for the boxicity of a fully subdivided graph (a graph that can be obtained by subdividing every edge of a graph exactly once).
Abstract:
We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group having center Z(G). We show here that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a non-Abelian group with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for compression of any non-Abelian group, using a non-homomorphic encoder.
Abstract:
The efficient operation of a single-source, single-sink wireless network is considered, with the diversity-multiplexing gain tradeoff (DMT) as the measure of performance. Whereas in the case of a point-to-point MIMO channel the DMT is determined by the fading statistics, in the case of a network the DMT is additionally a function of the time schedule according to which the network is operated, as well as of the protocol that dictates the mode of operation of the intermediate relays. In general, it is at present only possible to provide upper bounds on the DMT of the network in terms of the DMT of the MIMO channel appearing across cuts in the network. This paper presents a tutorial overview of the DMT of half-duplex multi-hop wireless networks that also attempts to identify, where possible, codes that achieve the DMT. For example, it is shown how one can construct codes that achieve the DMT of a network under a given schedule and either an amplify-and-forward or decode-and-forward protocol. Also contained in the paper are discussions on the DMT of the multiple-access channel as well as the impact of feedback on the DMT of a MIMO channel.
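Since the network bounds above are stated in terms of the DMT of MIMO channels across cuts, the sketch below recalls the classical point-to-point curve: the piecewise-linear function through the corner points (k, (m − k)(n − k)). It shows only this baseline curve, not the network DMT itself.

```python
# The point-to-point MIMO DMT used in the cut-set bounds: the piecewise-linear curve
# through the points (k, (m-k)(n-k)), k = 0, ..., min(m, n) (Zheng-Tse).
import numpy as np

def mimo_dmt(m, n, r):
    """Diversity gain d(r) of an m x n Rayleigh MIMO channel at multiplexing gain r."""
    k = np.arange(min(m, n) + 1)
    corners = (m - k) * (n - k)
    return np.interp(r, k, corners)      # linear interpolation between corner points

if __name__ == "__main__":
    print(mimo_dmt(2, 2, 0.0), mimo_dmt(2, 2, 1.0), mimo_dmt(2, 2, 2.0))  # 4.0 1.0 0.0
```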
Abstract:
Given two independent Poisson point processes Φ^(1), Φ^(2) in R^d, the AB Poisson Boolean model is the graph with the points of Φ^(1) as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centred at these points contains at least one point of Φ^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d ≥ 2 and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and cn in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak law result for the largest nearest-neighbour distance and almost sure asymptotic bounds for the connectivity threshold.
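The AB random geometric graph defined above is easy to sample; the sketch below builds one realization in the unit square by Monte Carlo. The intensities and radius are arbitrary choices for illustration and are unrelated to the thresholds analysed in the paper.

```python
# Monte Carlo sketch of the AB random geometric graph in the unit square: vertices are
# points of the first Poisson process, and two of them are joined when some point of
# the second process lies within distance r of both (as in the definition above).
import numpy as np

rng = np.random.default_rng(3)
n, c, r = 500, 2.0, 0.06                                # intensities and radius: arbitrary

pts1 = rng.uniform(size=(rng.poisson(n), 2))            # points of the first process (vertices)
pts2 = rng.uniform(size=(rng.poisson(c * n), 2))        # points of the second process (witnesses)

# dist[i, k] = distance from the i-th vertex to the k-th witness
dist = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=-1)
near = (dist <= r).astype(int)                          # witnesses within r of each vertex

# i ~ j iff some witness lies within r of both i and j
adj = (near @ near.T) > 0
np.fill_diagonal(adj, False)
print("vertices:", len(pts1), "edges:", int(adj.sum()) // 2)
```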
Abstract:
Two families of low-correlation QAM sequences are presented here. In a CDMA setting, these sequences have the ability to transport a large amount of data as well as to enable variable-rate signaling on the reverse link. The first family is constructed by interleaving two selected QAM sequences. This family is defined over M²-QAM, where M = 2^m, m ≥ 2. Over 16-QAM, the normalized maximum correlation θ̄_max is bounded above by approximately 1.17√N, where N is the period of the sequences in the family. This upper bound on θ̄_max is the lowest among all known sequence families over 16-QAM. The second family is constructed by interleaving four selected QAM sequences. This family is defined over M²-QAM, where M = 2^m, m ≥ 3, i.e., 64-QAM and beyond. The θ̄_max for sequences in this family over 64-QAM is upper bounded by approximately 1.60√N. For large M, θ̄_max is bounded above by approximately 1.64√N. These upper bounds on θ̄_max are the lowest among all known sequence families over M²-QAM, M = 2^m, m ≥ 3.
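The sketch below only illustrates the mechanics of the construction: interleaving two sequences and measuring periodic correlation magnitudes. The random 16-QAM placeholders stand in for the actual (algebraically constructed) family members, so the numbers it prints are not the bounds quoted above.

```python
# Interleaving two period-N sequences into a period-2N sequence, and measuring its
# off-peak periodic autocorrelation. The random 16-QAM symbols are placeholders, NOT
# the family's sequences; the sqrt(period) normalization mirrors the sqrt(N) scaling.
import numpy as np

rng = np.random.default_rng(5)
levels = np.array([-3, -1, 1, 3])
N = 64
a = rng.choice(levels, N) + 1j * rng.choice(levels, N)   # placeholder 16-QAM sequence
b = rng.choice(levels, N) + 1j * rng.choice(levels, N)   # placeholder 16-QAM sequence

s = np.empty(2 * N, dtype=complex)                       # interleave: s(2t)=a(t), s(2t+1)=b(t)
s[0::2], s[1::2] = a, b

def periodic_corr(u, v):
    """Magnitudes of the periodic crosscorrelation of u and v at every shift."""
    return np.abs(np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(v))))

theta_max = periodic_corr(s, s)[1:].max()                # maximum off-peak autocorrelation
print("off-peak autocorrelation / sqrt(period):", theta_max / np.sqrt(len(s)))
```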
Abstract:
We consider single-source single-sink (ss-ss) multi-hop relay networks, with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks are networks comprising layers of relays with edges existing only between adjacent layers, with more than one relay in each layer. We prove that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to be equal to the min-cut between the corresponding source and sink, irrespective of whether the network has half-duplex or full-duplex relays. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computations for various protocols. We also give alternative and often simpler proofs of several existing results, and show that codes achieving full diversity on a MIMO Rayleigh fading channel achieve full diversity on arbitrary fading channels. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block-lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks, and that simple AF protocols are often sufficient to attain the optimal DMT.
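The statement that the maximum achievable diversity equals the source-sink min-cut can be illustrated concretely; the sketch below builds a small fully-connected layered relay network (topology invented for illustration) and computes that min-cut with networkx.

```python
# Maximum achievable diversity as the source-sink min-cut (per the abstract above),
# illustrated on a small fully-connected layered relay network of our own choosing.
import networkx as nx

layers = [["s"], ["r1", "r2"], ["r3", "r4"], ["d"]]      # source, two relay layers, sink
G = nx.DiGraph()
for cur, nxt in zip(layers, layers[1:]):
    G.add_edges_from((u, v) for u in cur for v in nxt)   # fully connected adjacent layers

min_cut = nx.edge_connectivity(G, "s", "d")              # number of edge-disjoint s-d paths
print("min-cut (max diversity):", min_cut)               # 2 for this topology
```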
Abstract:
Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S in Õ(m + nc²) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc² + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, |S|, it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n²) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences, by Gabow [7], takes Õ(n²c²) time. Our algorithm takes Õ(nc³ + m) time for the undirected case and Õ(nc⁴ + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs, which matches the best known bound of Õ(nc² + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof of existential edge splitting theorems due to Lovász [9] and Mader [11].
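The elementary operation underlying the algorithm is splitting off a pair of edges at a vertex. The sketch below performs one such split on a small invented graph and verifies by brute-force max-flow (not by the paper's fast algorithm) that pairwise edge connectivity among the other vertices is preserved.

```python
# The elementary splitting-off operation: replace the edge pair (v, a), (v, c) at a
# vertex v by a direct edge (a, c), then check by brute force that edge connectivity
# among the remaining vertices does not drop. A simple graph suffices for this small
# example; the general setting allows multigraphs.
import itertools
import networkx as nx

G = nx.Graph([("v", "a"), ("v", "b"), ("v", "c"), ("v", "d"),
              ("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])

H = G.copy()
H.remove_edges_from([("v", "a"), ("v", "c")])    # split off the pair at v ...
H.add_edge("a", "c")                             # ... into a single new edge

others = [u for u in G if u != "v"]
ok = all(nx.edge_connectivity(H, s, t) >= nx.edge_connectivity(G, s, t)
         for s, t in itertools.combinations(others, 2))
print("connectivity outside v preserved:", ok)   # True for this choice of pair
```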
Abstract:
We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^{1/2}) amortized time per arc and our second algorithm takes O(n^{2.5}/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^{1/2}) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^{2.5}/m) bound improves the best previously published bound by a factor of n^{1/4} and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
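The problem setting can be illustrated with a deliberately naive online maintainer: when a newly inserted arc violates the current order, it searches forward inside the affected window to detect a cycle and otherwise reshuffles that window. This is in the spirit of earlier local-repair algorithms and does not implement the O(m^{1/2}) or O(n^{2.5}/m) algorithms of the paper.

```python
# Naive online maintenance of a topological order: on inserting u -> v with
# pos[u] > pos[v], search forward from v inside the window [pos[v], pos[u]] to detect
# a cycle; otherwise push the reachable vertices to the back of the window, keeping
# relative orders. This is NOT the paper's algorithm; it only illustrates the setting.

class OnlineTopo:
    def __init__(self, n):
        self.adj = [[] for _ in range(n)]
        self.pos = list(range(n))                 # pos[x] = current position of vertex x

    def add_arc(self, u, v):
        if self.pos[u] > self.pos[v]:             # the new arc violates the order
            lo, hi = self.pos[v], self.pos[u]
            fwd = self._reach(v, lo, hi)          # reachable from v inside the window
            if u in fwd:
                raise ValueError(f"arc {u}->{v} creates a cycle")
            rest = sorted((x for x in range(len(self.adj))
                           if lo <= self.pos[x] <= hi and x not in fwd),
                          key=lambda x: self.pos[x])
            moved = sorted(fwd, key=lambda x: self.pos[x])
            slots = sorted(self.pos[x] for x in rest + moved)
            for x, p in zip(rest + moved, slots): # non-reachable vertices first, then fwd
                self.pos[x] = p
        self.adj[u].append(v)

    def _reach(self, start, lo, hi):
        seen, stack = set(), [start]
        while stack:
            x = stack.pop()
            if x in seen or not (lo <= self.pos[x] <= hi):
                continue
            seen.add(x)
            stack.extend(self.adj[x])
        return seen

if __name__ == "__main__":
    t = OnlineTopo(4)
    for arc in [(0, 1), (1, 2), (3, 1), (2, 3)]:  # the last arc closes the cycle 1->2->3->1
        try:
            t.add_arc(*arc)
            print("added", arc, "order:", sorted(range(4), key=lambda x: t.pos[x]))
        except ValueError as err:
            print(err)
```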
Abstract:
Accurate system planning and performance evaluation require knowledge of the joint impact of scheduling, interference, and fading. However, current analyses either require costly numerical simulations or make simplifying assumptions that limit the applicability of the results. In this paper, we derive analytical expressions for the spectral efficiency of cellular systems that use either the channel-unaware but fair round-robin scheduler or the greedy, channel-aware but unfair maximum signal-to-interference-ratio scheduler. As is the case in real deployments, non-identical co-channel interference at each user, both Rayleigh fading and lognormal shadowing, and limited modulation constellation sizes are accounted for in the analysis. We show that using a simple moment generating function-based lognormal approximation technique and an accurate Gaussian Q-function approximation leads to results that match simulations well. These results are more accurate than erstwhile results that instead used the moment-matching Fenton-Wilkinson approximation method and bounds on the Q-function. The spectral efficiency of cellular systems is strongly influenced by the channel scheduler and the small constellation size that is typically used in third-generation cellular systems.
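The sketch below compares the exact Gaussian Q-function with one widely used two-term exponential approximation (due to Chiani et al.); whether this is the particular approximation adopted in the paper is not claimed here.

```python
# Exact Gaussian Q-function (via erfc) versus a well-known two-term exponential
# approximation; the paper's own choice of approximation may differ.
import numpy as np
from scipy.special import erfc

def q_exact(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_approx(x):
    """Two-term exponential approximation of Q(x) for x >= 0 (Chiani et al.)."""
    return np.exp(-x ** 2 / 2.0) / 12.0 + np.exp(-2.0 * x ** 2 / 3.0) / 4.0

x = np.linspace(0.5, 6.0, 12)
for xi, qe, qa in zip(x, q_exact(x), q_approx(x)):
    print(f"x = {xi:4.2f}   Q = {qe:.3e}   approx = {qa:.3e}")
```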
Abstract:
An elementary combinatorial Tanner graph construction for a family of near-regular low-density parity-check (LDPC) codes achieving high girth is presented. These codes are near-regular in the sense that the degree of a left/right vertex is allowed to differ by at most one from the average. The construction yields, in quadratic time complexity, an asymptotic code family with provable lower bounds on the rate and the girth for a given choice of block length and average degree. The construction gives flexibility in the choice of design parameters of the code such as rate, girth and average degree. Performance simulations of the iterative decoding algorithm on the AWGN channel, for codes designed using this method, demonstrate that these codes perform better than regular PEG codes and MacKay codes of similar length for all values of signal-to-noise ratio.
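Girth is the figure of merit in the construction above. The sketch below builds the Tanner graph of a small, arbitrary parity-check matrix (not one of the paper's codes) and computes its girth by breadth-first search from every vertex.

```python
# Girth of a Tanner graph built from a small parity-check matrix H (an arbitrary
# example, not one of the paper's codes), computed by BFS rooted at every vertex.
import numpy as np
from collections import deque

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])
m, n = H.shape

# Tanner graph: check nodes ("c", i) on one side, variable nodes ("v", j) on the other.
adj = {("c", i): [("v", j) for j in range(n) if H[i, j]] for i in range(m)}
adj.update({("v", j): [("c", i) for i in range(m) if H[i, j]] for j in range(n)})

def girth(adj):
    """Length of the shortest cycle, found by a BFS rooted at every vertex."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y], parent[y] = dist[x] + 1, x
                    q.append(y)
                elif parent[x] != y:              # non-tree edge: closes a cycle
                    best = min(best, dist[x] + dist[y] + 1)
    return best

print("girth of the Tanner graph:", girth(adj))   # 6 for this parity-check matrix
```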