199 results for DISTRIBUTED-FEEDBACK


Relevance: 20.00%

Abstract:

Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case d = n-1. This code has a particularly simple graphical description and, most interestingly, has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability to perform repair through mere transfer of data "repair by transfer". The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling", and show that it is the necessity to satisfy such scenarios that overconstrains the system.
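
As a point of reference for the tradeoff mentioned above, the sketch below evaluates the functional-repair cut-set bound B <= sum_{i=0}^{k-1} min(alpha, (d-i)*beta) and the minimum-bandwidth (MBR) operating point alpha = d*beta. The parameter values are made up and the snippet is illustrative only, not the paper's construction.

```python
# Illustrative sketch of the functional-repair cut-set bound and the
# minimum-bandwidth-regenerating (MBR) point; not code from the paper.

def cutset_file_size(k, d, alpha, beta):
    """Largest file size B supported with per-node storage alpha and
    per-helper download beta (functional-repair cut-set bound)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

def mbr_point(k, d, B):
    """MBR point: alpha = d*beta, so B = (k*d - k*(k-1)/2) * beta."""
    beta = B / (k * d - k * (k - 1) / 2)
    return d * beta, beta              # (alpha, beta)

# Example in the paper's setting d = n - 1: say n = 5, k = 3, d = 4.
k, d, B = 3, 4, 9.0
alpha, beta = mbr_point(k, d, B)
print(f"alpha = {alpha}, repair bandwidth d*beta = {d * beta}")
print("cut-set bound gives back B:", cutset_file_size(k, d, alpha, beta))
```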

Relevance: 20.00%

Abstract:

There is considerable pressure on developed and second-world countries alike to produce low-emission power, and distributed generation (DG) is one of the most viable ways to achieve this. DG generally makes use of renewable energy sources like wind, microturbines, photovoltaics, etc., which produce power with minimum greenhouse gas emissions. When installing a DG it is important to define its size and optimal location so as to minimize network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps identify new DG location(s) without the need to conduct repeated power flows. To validate this methodology, case studies are carried out on a 20-node, 66 kV system that is part of Karnataka Transco, and the results are presented.
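
The sketch below shows one common way a relative-electrical-distance style metric can be computed from the partitioned bus admittance matrix (F_LG = -inv(Y_LL) Y_LG, RED = |1 - |F_LG||) and used to rank candidate DG buses. The 4-bus Ybus values are made up, and the exact RED definition and siting rule used in the paper may differ.

```python
# Hedged sketch: one common way to compute a "relative electrical distance"
# style metric from the partitioned bus admittance matrix.  The exact
# definition and siting rule used in the paper may differ.
import numpy as np

def relative_electrical_distance(Ybus, gen_idx, load_idx):
    """RED between load buses and the candidate DG bus, assuming the
    partition [[Y_GG, Y_GL], [Y_LG, Y_LL]] and F_LG = -inv(Y_LL) @ Y_LG."""
    Y_LL = Ybus[np.ix_(load_idx, load_idx)]
    Y_LG = Ybus[np.ix_(load_idx, gen_idx)]
    F_LG = -np.linalg.solve(Y_LL, Y_LG)          # load-to-generator coupling
    return np.abs(1.0 - np.abs(F_LG))            # smaller = electrically closer

# Toy 4-bus admittance matrix (made-up line data): bus 1 is tied more
# strongly to buses 2 and 3 than bus 0 is.
Ybus = np.array([[ 3 - 9j,  0 + 0j, -2 + 6j, -1 + 3j],
                 [ 0 + 0j,  8 - 24j, -4 + 12j, -4 + 12j],
                 [-2 + 6j, -4 + 12j,  6 - 18j,  0 + 0j],
                 [-1 + 3j, -4 + 12j,  0 + 0j,   5 - 15j]])

for cand in (0, 1):                               # candidate DG locations
    loads = [b for b in range(4) if b != cand]    # treat other buses as loads
    red = relative_electrical_distance(Ybus, [cand], loads)
    print(f"DG at bus {cand}: total RED to load buses = {red.sum():.3f}")
```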

Relevance: 20.00%

Abstract:

Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 >= 2k - 1, termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) a proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k+1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA, and IA is also a crucial component of the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.
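
For orientation, the sketch below computes the minimum-storage (MSR) operating point alpha = B/k, beta = B/(k(d-k+1)) and the resulting repair bandwidth d*beta in the regime d = n - 1 >= 2k - 1. The numbers are illustrative and the snippet is not a construction from the paper.

```python
# Illustrative sketch of the minimum-storage-regenerating (MSR) operating
# point on the cut-set bound; not code from the paper.

def msr_point(k, d, B):
    """MSR point: each node stores alpha = B/k (MDS property) and a failed
    node downloads beta = B/(k*(d-k+1)) from each of d helpers."""
    alpha = B / k
    beta = B / (k * (d - k + 1))
    return alpha, d * beta  # storage per node, total repair bandwidth

# Example in the MISER-code regime d = n - 1 >= 2k - 1: n = 6, k = 3, d = 5.
k, d, B = 3, 5, 12.0
alpha, repair_bw = msr_point(k, d, B)
print(f"alpha = {alpha:.2f} (each node stores 1/k of the file)")
print(f"repair bandwidth d*beta = {repair_bw:.2f} "
      f"(vs. downloading the whole file, B = {B}, for naive MDS repair)")
```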

Relevance: 20.00%

Abstract:

We present global multidimensional numerical simulations of the plasma that pervades the dark matter haloes of clusters, groups and massive galaxies (the intracluster medium; ICM). Observations of clusters and groups imply that such haloes are roughly in global thermal equilibrium, with heating balancing cooling when averaged over sufficiently long time- and length-scales; the ICM is, however, very likely to be locally thermally unstable. Using simple observationally motivated heating prescriptions, we show that local thermal instability (TI) can produce a multiphase medium with ~10^4 K cold filaments condensing out of the hot ICM only when the ratio of the TI time-scale in the hot plasma (t_TI) to the free-fall time-scale (t_ff) satisfies t_TI/t_ff ≲ 10. This criterion quantitatively explains why cold gas and star formation are preferentially observed in low-entropy clusters and groups. In addition, the interplay among heating, cooling and TI reduces the net cooling rate and the mass accretion rate at small radii by factors of ~100 relative to cooling-flow models. This dramatic reduction is in line with observations. The feedback efficiency required to prevent a cooling flow is ~10^-3 for clusters and decreases for lower mass haloes; supernova heating may be energetically sufficient to balance cooling in galactic haloes. We further argue that the ICM self-adjusts so that t_TI/t_ff ≳ 10 at all radii. When this criterion is not satisfied, cold filaments condense out of the hot phase and reduce the density of the ICM. These cold filaments can power the black hole and/or stellar feedback required for global thermal balance, which drives t_TI/t_ff ≳ 10. In comparison to clusters, groups have central cores with lower densities and larger radii. This can account for the deviations from self-similarity in the X-ray luminosity-temperature (L_X-T_X) relation. The high-velocity clouds observed in the Galactic halo can be due to local TI producing multiphase gas close to the virial radius if the density of the hot plasma in the Galactic halo is ≳ 10^-5 cm^-3 at large radii.
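
A rough order-of-magnitude sketch of the t_TI/t_ff criterion is given below. The cooling-function value, the enclosed-mass model and the approximation t_TI ~ t_cool are illustrative assumptions, not the paper's exact heating and cooling prescription.

```python
# Rough order-of-magnitude sketch of the t_TI/t_ff criterion at a single
# radius in a cluster.  The cooling-function value, the enclosed mass and
# the approximation t_TI ~ t_cool are illustrative assumptions.
import numpy as np

kB = 1.38e-16                      # erg/K
G  = 6.67e-8                       # cm^3 g^-1 s^-2
mu, mu_e, mu_i = 0.62, 1.17, 1.32  # mean molecular weights

def t_cool(n_e, T, Lambda_cool=2e-23):
    """Isochoric cooling time ~ 1.5 n kB T / (n_e n_i Lambda), in seconds."""
    n_i = n_e * mu_e / mu_i
    n   = n_e * mu_e / mu
    return 1.5 * n * kB * T / (n_e * n_i * Lambda_cool)

def t_ff(r_kpc, M_enclosed_Msun):
    """Free-fall time sqrt(2 r / g) with g = G M(<r) / r^2, in seconds."""
    r = r_kpc * 3.086e21
    g = G * M_enclosed_Msun * 1.99e33 / r**2
    return np.sqrt(2.0 * r / g)

# Example numbers: r = 30 kpc, M(<r) ~ 3e12 Msun, T ~ 5e7 K, n_e ~ 0.03 cm^-3.
ratio = t_cool(0.03, 5e7) / t_ff(30.0, 3e12)
print(f"t_TI/t_ff ~ {ratio:.0f}  (multiphase gas expected when this is <~ 10)")
```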

Relevance: 20.00%

Abstract:

Multiwavelength data indicate that the X-ray-emitting plasma in the cores of galaxy clusters is not cooling catastrophically. To a large extent, cooling is offset by heating due to active galactic nuclei (AGNs) via jets. The cool-core clusters, with cooler/denser plasmas, show multiphase gas and signs of some cooling in their cores. These observations suggest that the cool core is locally thermally unstable while maintaining global thermal equilibrium. Using high-resolution, three-dimensional simulations we study the formation of multiphase gas in cluster cores heated by collimated bipolar AGN jets. Our key conclusion is that spatially extended multiphase filaments form only when the instantaneous ratio of the thermal instability and free-fall timescales (t_TI/t_ff) falls below a critical threshold of ≈10. When this happens, dense cold gas decouples from the hot intracluster medium (ICM) phase and generates inhomogeneous and spatially extended Hα filaments. These cold gas clumps and filaments "rain" down onto the central regions of the core, forming a cold rotating torus and in part feeding the supermassive black hole. Consequently, the self-regulated feedback enhances AGN heating and the core returns to a higher entropy level with t_TI/t_ff > 10. Eventually, the core reaches quasi-stable global thermal equilibrium, and cold filaments condense out of the hot ICM whenever t_TI/t_ff ≲ 10. This occurs despite the fact that the energy from AGN jets is supplied to the core in a highly anisotropic fashion. The effective spatial redistribution of heat is enabled in part by the turbulent motions in the wake of freely falling cold filaments. Increased AGN activity can locally reverse the cold gas flow, launching cold filamentary gas away from the cluster center. Our criterion for the condensation of spatially extended cold gas is in agreement with observations and previous idealized simulations.

Relevance: 20.00%

Abstract:

Clock synchronisation is an important requirement for various applications in wireless sensor networks (WSNs). Most of the existing clock synchronisation protocols for WSNs use some hierarchical structure that introduces extra overhead due to the dynamic nature of WSNs. Besides, it is difficult to integrate these clock synchronisation protocols with sleep scheduling, which is a major technique for conserving energy. In this paper, we propose a fully distributed, peer-to-peer clock synchronisation protocol, named Distributed Clock Synchronisation Protocol (DCSP), that uses a novel pullback technique for complete sensor networks. The pullback technique ensures that the synchronisation phases of any pair of clocks always overlap. We derive an exact expression for a bound on the maximum synchronisation error in the DCSP protocol, and a simulation study verifies that it is indeed less than the computed upper bound. An experimental study using a few TelosB motes also verifies that the pullback occurs as predicted.
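
For background only, the sketch below shows the standard two-way timestamp exchange used to estimate the offset between a pair of neighbouring node clocks. It is not the pullback mechanism of DCSP, merely the generic pairwise building block on which peer-to-peer synchronisation protocols typically rest.

```python
# Background sketch: a generic two-way timestamp exchange for estimating the
# clock offset between two neighbouring sensor nodes.  This is NOT the
# pullback mechanism of DCSP, only the standard pairwise building block used
# by peer-to-peer synchronisation protocols.

def estimate_offset(t1, t2, t3, t4):
    """t1: request sent (node A clock), t2: request received (node B clock),
    t3: reply sent (node B clock),    t4: reply received (node A clock).
    Returns B's offset relative to A, assuming symmetric link delays."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Example: B's clock runs 5 time units ahead of A's, one-way delay = 2.
t1 = 100
t2 = t1 + 2 + 5        # arrival, read on B's clock
t3 = t2 + 1            # B replies after 1 unit of processing
t4 = t3 - 5 + 2        # arrival back, read on A's clock
print("estimated offset of B w.r.t. A:", estimate_offset(t1, t2, t3, t4))
```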

Relevance: 20.00%

Abstract:

We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space-Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit, low-decoding-complexity DSTC designs that satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high signal-to-noise ratio, the DSTC scheme provides large gains when compared to the conventional exclusive-OR network code and performs better than the adaptive network coding scheme.
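
To make the notion of singular fade states concrete, the sketch below enumerates the channel-gain ratios h_A/h_B at which two superimposed constellation pairs requiring different network-coded symbols coincide at the relay. QPSK with a bitwise-XOR map is used purely for illustration; the paper's DSTC construction and signal sets are more general.

```python
# Hedged illustration of "singular fade states" at the relay of a two-way
# relay channel: ratios z = h_A / h_B at which two superimposed constellation
# pairs that must map to DIFFERENT network-coded symbols coincide, so the
# relay cannot distinguish them.  QPSK with a bitwise-XOR map is used purely
# for illustration; the paper's DSTC scheme is more general.
from itertools import product

qpsk = {0: 1 + 1j, 1: -1 + 1j, 2: -1 - 1j, 3: 1 - 1j}   # 2-bit labels

singular = set()
for (a, b), (a2, b2) in product(product(qpsk, repeat=2), repeat=2):
    if (a ^ b) == (a2 ^ b2):          # same XOR label: coincidence is harmless
        continue
    xa, xb, xa2, xb2 = qpsk[a], qpsk[b], qpsk[a2], qpsk[b2]
    if xa != xa2:                     # xa == xa2 would correspond to h_B = 0
        z = -(xb - xb2) / (xa - xa2)  # h_A*xa + h_B*xb = h_A*xa2 + h_B*xb2
        singular.add(complex(round(z.real, 6), round(z.imag, 6)))

print(f"{len(singular)} finite singular fade ratios for QPSK + XOR:")
print(sorted(singular, key=lambda c: (abs(c), c.real, c.imag)))
```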

Relevance: 20.00%

Abstract:

Regenerating codes are a class of codes for distributed storage networks that provide reliability and availability of data, and also perform efficient node repair. Another important aspect of a distributed storage network is its security. In this paper, we consider a threat model where an eavesdropper may gain access to the data stored in a subset of the storage nodes, and possibly also to the data downloaded during repair of some nodes. We provide explicit constructions of regenerating codes that achieve the information-theoretic secrecy capacity in this setting.
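
The toy below is not the paper's construction; it only illustrates the generic ingredient behind information-theoretically secure storage codes, namely mixing the message with uniformly random key symbols so that an eavesdropper who observes too few nodes sees only uniformly distributed data. A Shamir-style (3,2) threshold sharing over a prime field is used as the simplest example.

```python
# Hedged toy (NOT the paper's construction): the generic idea behind
# information-theoretically secure storage codes is to mix the message with
# uniformly random "key" symbols before encoding, so that an eavesdropper
# observing too few nodes learns nothing.  Shamir-style (3,2) threshold
# sharing over a prime field illustrates this.
import random

P = 257                                   # prime field size

def share(secret, n=3):
    r = random.randrange(P)               # uniformly random key symbol
    return [(i, (secret + r * i) % P) for i in range(1, n + 1)]

def reconstruct(two_shares):
    (i, yi), (j, yj) = two_shares
    r = ((yi - yj) * pow(i - j, -1, P)) % P
    return (yi - r * i) % P

secret = 42
shares = share(secret)
print("a single share is uniformly distributed over the field:", shares[0])
print("any two shares recover the secret:", reconstruct(shares[:2]))
```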

Relevance: 20.00%

Abstract:

This paper considers sequential hypothesis testing in a decentralized framework. We start with two simple decentralized sequential hypothesis testing algorithms, one of which is later proved to be asymptotically Bayes optimal. We also consider composite versions of decentralized sequential hypothesis testing. A novel nonparametric version of decentralized sequential hypothesis testing using universal source coding theory is developed. Finally, we design a simple decentralized multihypothesis sequential detection algorithm.
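
The building block underlying such sequential tests is Wald's SPRT. The sketch below runs a textbook SPRT on Gaussian samples; the means, variance and error targets are illustrative choices, not values from the paper.

```python
# Minimal sketch of Wald's sequential probability ratio test (SPRT), the
# basic building block behind the decentralized sequential tests discussed
# above; the Gaussian model, means and error targets are illustrative.
import math, random

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return (decision, number of samples used). H0: mean mu0, H1: mean mu1."""
    A = math.log((1 - beta) / alpha)   # accept H1 when the LLR exceeds this
    B = math.log(beta / (1 - alpha))   # accept H0 when the LLR drops below this
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", n

random.seed(0)
stream = (random.gauss(1.0, 1.0) for _ in range(10_000))  # data generated under H1
print(sprt(stream))
```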

Relevance: 20.00%

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X_1, ..., X_m and a receiver interested in computing an s-dimensional subspace generated by [X_1, ..., X_m]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Körner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_i}_{i=1}^m of m random variables whose joint probability distribution function is given.
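
The Körner-Marton idea referenced above can be made concrete with a small toy: when the receiver only wants Z = X ⊕ Y and Z is known to be sparse, both encoders can use the same linear code and transmit only syndromes. The sketch below uses the (7,4) Hamming code and assumes Z has weight at most one; the paper's nested-linear-code scheme is far more general.

```python
# Hedged toy of the Körner-Marton idea: to compute the modulo-2 sum
# Z = X xor Y, both encoders use the SAME linear code and send only their
# 3-bit syndromes instead of 7 bits each.  Z is assumed to have Hamming
# weight <= 1 so the (7,4) Hamming code's syndrome decoder recovers it.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (columns = binary 1..7).
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def syndrome(v):                      # 3-bit syndrome of a length-7 word
    return H.dot(v) % 2

def decode_weight1(s):                # syndrome -> word of weight <= 1
    z = np.zeros(7, dtype=int)
    pos = s[0] + 2 * s[1] + 4 * s[2]  # column index encoded in the syndrome
    if pos:
        z[pos - 1] = 1
    return z

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)
z = np.zeros(7, dtype=int); z[4] = 1  # X and Y differ in exactly one position
y = (x + z) % 2

s = (syndrome(x) + syndrome(y)) % 2   # receiver adds the two 3-bit syndromes
print("recovered X xor Y:", decode_weight1(s), " true:", z)
```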

Relevance: 20.00%

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
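
The sketch below shows one way a two-type construction of this flavour can reduce node repair to erasure decoding: each helper sends a single symbol and the replacement node solves a small linear system. The dimensions, the Vandermonde generator and the real-valued arithmetic are illustrative assumptions and need not match the paper's exact framework.

```python
# Hedged sketch of a "two types of nodes" construction in the spirit of the
# framework described above (the details here are illustrative and need not
# match the paper's exact scheme).  A k x k message matrix M is encoded so
# that Type-0 node i stores M @ g_i and Type-1 node j stores M.T @ g_j,
# where g_i are columns of an MDS (Vandermonde) generator matrix.  Repairing
# a Type-1 node then reduces to erasure decoding: any k Type-0 nodes each
# send a single symbol.
import numpy as np

n, k = 4, 2
pts = np.array([1.0, 2.0, 3.0, 4.0])
G = np.vstack([pts**0, pts**1])            # any k columns invertible (MDS)

rng = np.random.default_rng(0)
M = rng.integers(0, 10, (k, k)).astype(float)   # message symbols

type0 = [M @ G[:, i] for i in range(n)]    # contents of Type-0 node i
type1 = [M.T @ G[:, j] for j in range(n)]  # contents of Type-1 node j

# Repair failed Type-1 node j = 3 using any k Type-0 helpers, here {0, 2}:
j, helpers = 3, [0, 2]
symbols = [type0[i] @ G[:, j] for i in helpers]   # one scalar per helper
# These are code symbols of G.T @ (M.T @ g_j); erasure-decode that vector:
recovered = np.linalg.solve(G[:, helpers].T, np.array(symbols))
print("repaired node content:", np.round(recovered, 6))
print("original node content:", type1[j])
```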

Relevance: 20.00%

Abstract:

Recently, Guo and Xia introduced low complexity decoders called Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC), which include the Zero Forcing (ZF) and ZF-SIC receivers as special cases, for point-to-point MIMO channels. In this paper, we show that PIC and PIC-SIC decoders are capable of achieving the full cooperative diversity available in wireless relay networks. We give sufficient conditions for a Distributed Space-Time Block Code (DSTBC) to achieve full diversity with PIC and PIC-SIC decoders and construct a new class of DSTBCs with low complexity full-diversity PIC-SIC decoding using complex orthogonal designs. The new class of codes includes a number of known full-diversity PIC/PIC-SIC decodable Space-Time Block Codes (STBCs) constructed for point-to-point channels as special cases. The proposed DSTBCs achieve higher rates (in complex symbols per channel use) than the multigroup ML decodable DSTBCs available in the literature. Simulation results show that the proposed codes have better bit error rate performance than the best known low complexity, full-diversity DSTBCs.
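
The sketch below illustrates PIC group decoding for a generic linear model y = Hx + n: each symbol group is detected after projecting the received vector onto the orthogonal complement of the other groups' columns, followed by a small ML search within the group. The dimensions, BPSK alphabet and random channel are illustrative and do not correspond to a DSTBC from the paper.

```python
# Hedged sketch of partial interference cancellation (PIC) group decoding
# for a linear model y = H x + n: project y onto the orthogonal complement
# of the other groups' columns, then do a small ML search within the group.
# The dimensions, BPSK alphabet and channel are illustrative only.
import itertools
import numpy as np

rng = np.random.default_rng(3)
H = (rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))) / np.sqrt(2)
alphabet = np.array([1.0, -1.0])                     # BPSK
x = rng.choice(alphabet, 4)
y = H @ x + 0.05 * (rng.normal(size=6) + 1j * rng.normal(size=6))

groups = [[0, 1], [2, 3]]                            # symbol index groups
x_hat = np.zeros(4, dtype=complex)
for g in groups:
    others = [i for i in range(4) if i not in g]
    B = H[:, others]                                 # interference columns
    P = np.eye(6) - B @ np.linalg.pinv(B)            # projector onto their orthogonal complement
    Hg, yg = P @ H[:, g], P @ y
    best = min((np.linalg.norm(yg - Hg @ np.array(c)), c)
               for c in itertools.product(alphabet, repeat=len(g)))
    x_hat[g] = best[1]                               # ML choice within the group

print("transmitted:", x, "\ndetected:   ", x_hat.real)
```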


Relevance: 20.00%

Abstract:

Channel-aware assignment of sub-channels to users in the downlink of an OFDMA system demands extensive feedback of channel state information (CSI) to the base station. Since the feedback bandwidth is often very scarce, schemes that limit feedback are necessary. We develop a novel, low-feedback, splitting-based algorithm for assigning each sub-channel to its best user, i.e., the user with the highest gain for that sub-channel among all users. The key idea behind the algorithm is that, at any time, each user contends for the sub-channel on which it has the largest channel gain among the unallocated sub-channels. Unlike other existing schemes, the algorithm explicitly handles the multiple access control aspects associated with the feedback of CSI. A tractable asymptotic analysis of a system with a large number of users helps design the algorithm. The algorithm yields 50% to 65% throughput gains compared to an asymptotically optimal one-bit feedback scheme, whether the number of users is as small as 10 or as large as 1000. The algorithm is fast and distributed, and scales with the number of users.
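
The sketch below shows a generic threshold-splitting contention scheme of the kind described above, restricted to a single sub-channel: the base station observes only idle/success/collision feedback in each mini-slot, yet the user with the largest (CDF-normalised) channel gain is isolated within a few slots. The interval-update rule here is a simplified stand-in, not the paper's multi-sub-channel algorithm.

```python
# Hedged sketch of a generic threshold-splitting contention scheme for
# finding the best user with only idle/success/collision feedback; a
# simplified single-sub-channel stand-in, not the paper's algorithm.
import numpy as np

def find_best_user(u, max_slots=50):
    """u[i] = CDF-normalised channel gain of user i, i.e. Uniform(0,1)."""
    n = len(u)
    lo, hi = 1.0 - 1.0 / n, 1.0          # invariant: the best user lies below hi
    for slot in range(1, max_slots + 1):
        contenders = np.where((u >= lo) & (u < hi))[0]
        if len(contenders) == 1:          # "success" feedback
            return contenders[0], slot
        if len(contenders) == 0:          # "idle": nobody in [lo, hi)
            lo, hi = max(0.0, lo - 1.0 / n), lo
        else:                             # "collision": split the interval
            lo = (lo + hi) / 2.0
    return None, max_slots

rng = np.random.default_rng(7)
u = rng.random(1000)                      # 1000 users
best, slots = find_best_user(u)
print("true best:", int(np.argmax(u)), " found:", best, " in", slots, "slots")
```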

Relevance: 20.00%

Abstract:

We consider cooperative spectrum sensing for cognitive radios. We develop an energy-efficient detector with low detection delay using sequential hypothesis testing. The Sequential Probability Ratio Test (SPRT) is used at both the local nodes and the fusion center. We also analyse the performance of this algorithm and compare it with simulations. Modelling uncertainties in the distribution parameters are considered. Slow fading, with and without perfect channel state information at the cognitive radios, is taken into account.
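
A simplified two-stage sketch in the spirit of the scheme above is given below: each cognitive radio runs a local SPRT and, once it crosses its threshold, repeatedly reports a ±1 decision that the fusion center accumulates until its own threshold is crossed. The Gaussian sensing model, thresholds and fusion rule are illustrative assumptions, not the paper's algorithm or analysis.

```python
# Simplified two-stage sketch in the spirit of the scheme described above
# (local SPRTs at the cognitive radios plus a second accumulation at the
# fusion center); the Gaussian model, thresholds and fusion rule are
# illustrative assumptions, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(5)
H1_true, num_nodes = True, 5
mu0, mu1, sigma = 0.0, 0.5, 1.0            # received-statistic model under H0/H1
a_local, a_fc = 4.0, 3.0                   # local SPRT and fusion thresholds

llr = np.zeros(num_nodes)                  # local log-likelihood ratios
sent = np.zeros(num_nodes)                 # +1 / -1 once a node decides locally
fc_sum, t = 0.0, 0
while abs(fc_sum) < a_fc and t < 10_000:
    t += 1
    x = rng.normal(mu1 if H1_true else mu0, sigma, num_nodes)
    llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    newly = (np.abs(llr) >= a_local) & (sent == 0)
    sent[newly] = np.sign(llr[newly])      # node reports its local decision
    fc_sum += sent.sum()                   # fusion center accumulates reports

print("decision:", "H1" if fc_sum > 0 else "H0", " after", t, "samples per node")
```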