925 results for predictive coding
Abstract:
Distributed space-time coding for wireless relay networks in which the source, the destination, and the relays have multiple antennas has been studied by Jing and Hassibi. In this setup, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are co-located. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. In the first phase of the two-phase transmission model, a T-length complex vector is transmitted from the source to all the relays. At each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved before processing. After processing, in the second phase, a T × 2R matrix codeword is transmitted to the destination. The collection of all such codewords is called a Co-ordinate Interleaved Distributed Space-Time Code (CIDSTC). Compared to the scheme proposed by Jing-Hassibi, for T ≥ 4R, it is shown that while both schemes give the same asymptotic diversity gain, the CIDSTC scheme also gives an additional asymptotic coding gain, at the cost of only a negligible increase in processing complexity at the relays.
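To make the relay-side operation concrete, here is a minimal sketch of coordinate interleaving of the in-phase and quadrature components received at a relay's two antennas. The specific swap pattern (exchanging quadrature components across the two antennas) is an illustrative assumption, not necessarily the paper's exact mapping.

```python
import numpy as np

def coordinate_interleave(r1, r2):
    """Hypothetical coordinate-interleaving pattern: exchange the
    quadrature (imaginary) components of the vectors received at the
    two co-located antennas before further relay processing."""
    y1 = r1.real + 1j * r2.imag
    y2 = r2.real + 1j * r1.imag
    return y1, y2

rng = np.random.default_rng(0)
T = 4                                                # source vector length
r1 = rng.normal(size=T) + 1j * rng.normal(size=T)    # antenna 1 receive
r2 = rng.normal(size=T) + 1j * rng.normal(size=T)    # antenna 2 receive
y1, y2 = coordinate_interleave(r1, r2)
```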
Abstract:
We consider the problem of distributed joint source-channel coding of correlated Gaussian sources over a Gaussian Multiple Access Channel (MAC). There may be side information at the encoders and/or at the decoder. First, we specialize a general result in [16] to obtain sufficient conditions for reliable transmission over a Gaussian MAC. Source-channel separation does not hold for this system. We therefore study and compare three joint source-channel coding schemes available in the literature.
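As a point of reference for such schemes, the sketch below simulates the simplest joint source-channel strategy for this setting: uncoded (amplify-and-forward) transmission of two correlated Gaussian sources over a Gaussian MAC, with linear MMSE estimation at the decoder. The power, noise, and correlation values are arbitrary, and the three schemes compared in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.8                                   # source correlation
cov = [[1.0, rho], [rho, 1.0]]
s1, s2 = rng.multivariate_normal([0, 0], cov, size=n).T

P, sigma2 = 1.0, 0.1                        # per-encoder power, noise variance
y = np.sqrt(P) * s1 + np.sqrt(P) * s2 + rng.normal(0, np.sqrt(sigma2), n)

# Linear MMSE estimate of s1 from the channel output y
c = np.cov(s1, y)[0, 1] / np.var(y)
mse = np.mean((s1 - c * y) ** 2)
print(f"per-source MSE of the uncoded scheme: {mse:.3f}")
```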
Abstract:
Designing and optimizing high-performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation, and the several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model-building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
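A minimal sketch of the four-step procedure, using SciPy's Latin hypercube sampler and RBF interpolator. The synthetic `simulate_cpi` function is a stand-in for the detailed cycle-accurate simulation, and the parameter ranges are normalized to [0, 1] for illustration.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# (i) Latin hypercube sample of a 9-parameter design space
sampler = qmc.LatinHypercube(d=9, seed=0)
train_pts = sampler.random(n=120)

# (ii) Stand-in for detailed simulation: a synthetic CPI surface.
def simulate_cpi(x):
    return 1.0 + 0.5 * np.sin(3 * x[:, 0]) + 0.3 * x[:, 1] * x[:, 2]

train_cpi = simulate_cpi(train_pts)

# (iii) Fit a radial basis function model to the simulated design points.
model = RBFInterpolator(train_pts, train_cpi)

# (iv) Validate on an independent, randomly generated set of points.
test_pts = rng.random((50, 9))
rel_err = np.abs(model(test_pts) - simulate_cpi(test_pts)) / simulate_cpi(test_pts)
print(f"mean relative CPI error: {100 * rel_err.mean():.2f}%")
```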
Abstract:
In a typical sensor network scenario, the goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant, common decoder. In particular, we derive inner and outer bounds on the rate region for the random field to be estimated with a given mean distortion.
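For calibration, the classical single-source quadratic-Gaussian rate-distortion function gives the baseline trade-off between rate and mean-squared distortion; the multi-encoder rate region bounded in the paper generalizes this single-terminal case.

```latex
% Rate needed to achieve mean-squared distortion D for a single
% Gaussian source of variance \sigma^2 (single-terminal baseline):
R(D) = \frac{1}{2}\log_2\!\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 .
```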
Abstract:
Predictive distribution modelling of Berberis aristata DC., a rare and threatened plant of high medicinal value, has been carried out with the aim of understanding its potential distribution zones in the Indian Himalayan region. Bioclimatic and topographic variables were used to develop the distribution model with three different algorithms, viz. the Genetic Algorithm for Rule-set Production (GARP), Bioclim, and Maximum Entropy (MaxEnt). MaxEnt predicted a wider potential distribution (10.36%) than GARP (4.63%) and Bioclim (2.44%). Validation confirms that these outputs are comparable to the present distribution pattern of B. aristata. This exercise highlights that the species favours the Western Himalaya. However, the Eastern Himalayan states (i.e., Arunachal Pradesh, Nagaland, and Manipur), which GARP and MaxEnt also identify as potential occurrence areas, require further exploration.
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as Peak Signal-to-Noise Ratio (PSNR) or Weighted Signal-to-Noise Ratio (WSNR) is chosen out of convenience. However, such a metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been shown to correlate strongly with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder yields perceptible quality improvements over a baseline TMN8/MSE-based implementation.
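A minimal sketch of the idea of metric-driven rate control follows, assuming a hypothetical encoder interface (`encode`, `quality`): for each frame, choose the quantization parameter that maximizes the quality score subject to the frame's bit budget. Neither TMN8 nor the NTIA-VQM metric is reproduced here.

```python
def choose_qp(frame, bit_budget, encode, quality):
    """encode(frame, qp) -> (bits, decoded); quality(decoded) -> score.
    Both callbacks are hypothetical stand-ins for a real encoder and a
    perceptual quality model."""
    best_qp, best_score = None, float("-inf")
    for qp in range(1, 32):          # H.263-style QP range
        bits, decoded = encode(frame, qp)
        if bits > bit_budget:
            continue                 # violates the channel constraint
        score = quality(decoded)     # perceptual metric, e.g. VQM-like
        if score > best_score:
            best_qp, best_score = qp, score
    return best_qp
```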
Abstract:
In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data is written sequentially on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
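A minimal simulation of the probabilistic version of this medium model, under the assumption that each bit (after the first) is replaced by the immediately preceding *written* bit with probability p; whether "preceding value" refers to the written or the stored bit is an assumption of this sketch.

```python
import numpy as np

def write_medium(bits, p, rng):
    """Write bits sequentially; each bit after the first flips to the
    previously written value with probability p."""
    out = bits.copy()
    for i in range(1, len(bits)):
        if rng.random() < p:
            out[i] = bits[i - 1]
    return out

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=20)
stored = write_medium(data, p=0.1, rng=rng)
print(data, stored, sep="\n")
```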
Abstract:
Problems related to network coding for acyclic, instantaneous networks (where the edges of the acyclic graph representing the network are assumed to have zero delay) have been extensively dealt with in the recent past. The most prominent of these problems include (a) the existence of network codes that achieve the maximum rate of transmission, (b) efficient network code constructions, and (c) field size issues. In practice, however, networks have transmission delays. In network coding theory, such networks are generally abstracted by assuming that their edges have integer delays. Using enough memory at the nodes of an acyclic network with integer delays can effectively simulate instantaneous behavior, which is probably why work thus far has focused primarily on acyclic instantaneous networks. However, nulling the effect of the network delays is not always advantageous, as we show in this work. Essentially, we elaborate on issues (a), (b), and (c) above for acyclic networks with integer delays, and show that using the delay network as is (without adding memory) turns out to be advantageous, disadvantageous, or immaterial, depending on the topology of the network and the problem considered, i.e., (a), (b), or (c).
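As background for problem (a), the classic butterfly network (a standard textbook example, not taken from this paper) shows how coding at an instantaneous bottleneck achieves a multicast rate that pure routing cannot:

```python
# Butterfly network over GF(2): two source bits a, b must reach two
# sinks; the single bottleneck edge carries their XOR, letting each
# sink recover both bits from what it already receives directly.
a, b = 1, 0
m = a ^ b                # coded bit sent over the bottleneck edge
sink1 = (a, m ^ a)       # sink 1 sees a directly, recovers b = m ^ a
sink2 = (m ^ b, b)       # sink 2 sees b directly, recovers a = m ^ b
assert sink1 == (a, b) and sink2 == (a, b)
```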
Abstract:
A nonlinear suboptimal guidance law is presented in this paper for successful interception of ground targets by air-launched missiles and guided munitions. The main feature of this guidance law is that it accurately satisfies terminal impact angle constraints in both azimuth and elevation simultaneously. In addition, it is capable of hitting the target with high accuracy while minimizing the lateral acceleration demand. The guidance law is synthesized using the recently developed model predictive static programming (MPSP). Performance of the proposed MPSP guidance is demonstrated using three-dimensional (3-D) nonlinear engagement dynamics by considering stationary, moving, and maneuvering targets. Effectiveness of the proposed guidance has also been verified by considering first-order autopilot lag as well as inaccurate information about target maneuvers. Multiple-munition engagement results are presented as well. Moreover, comparison studies with respect to an augmented proportional navigation guidance (which does not impose impact angle constraints) as well as an explicit linear optimal guidance (which imposes the same impact angle constraints in 3-D) lead to the conclusion that the proposed MPSP guidance is superior to both. A large number of randomized simulation studies show that it also has a larger capture region.
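For reference, one of the baselines the paper compares against is proportional navigation. Below is a minimal 2-D sketch of the pure-PN acceleration command a = N·Vc·λ̇ (the MPSP law itself is not reproduced, and the augmentation term for target maneuvers is omitted).

```python
import numpy as np

def pn_step(mx, my, mvx, mvy, tx, ty, tvx, tvy, N=3.0):
    """One step of 2-D pure proportional navigation.
    Returns the commanded lateral acceleration a = N * Vc * lambda_dot."""
    rx, ry = tx - mx, ty - my                 # relative position
    vx, vy = tvx - mvx, tvy - mvy             # relative velocity
    r2 = rx * rx + ry * ry
    lam_dot = (rx * vy - ry * vx) / r2        # line-of-sight rate
    vc = -(rx * vx + ry * vy) / np.sqrt(r2)   # closing velocity
    return N * vc * lam_dot
```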
Abstract:
The introduction of processor-based instruments in power systems is resulting in rapid growth of the volume of measured data. The present practice in most utilities is to store only some of the important data in a retrievable fashion for a limited period; subsequently, even this data is either deleted or moved to backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all the operational data, so that it can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed. The performance of the proposed methods is evaluated using actual data pertaining to the Southern Regional Grid of India.
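To illustrate the core mechanism, here is a minimal floating-point arithmetic coder for short symbol sequences. The integer renormalization needed for long records, and the paper's four power-system-specific modifications, are omitted; the symbol alphabet and probabilities below are illustrative.

```python
def encode(symbols, probs):
    """Narrow [0, 1) by each symbol's probability interval; any number
    in the final interval identifies the whole sequence."""
    lo, hi = 0.0, 1.0
    for s in symbols:
        span = hi - lo
        c = 0.0
        for sym, p in probs.items():
            if sym == s:
                lo, hi = lo + span * c, lo + span * (c + p)
                break
            c += p
    return (lo + hi) / 2

def decode(code, probs, n):
    """Invert the interval narrowing, one symbol at a time."""
    out = []
    for _ in range(n):
        c = 0.0
        for sym, p in probs.items():
            if c <= code < c + p:
                out.append(sym)
                code = (code - c) / p
                break
            c += p
    return out

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
msg = list("abacab")
assert decode(encode(msg, probs), probs, len(msg)) == msg
```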
Abstract:
We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, Koike-Akino et al. showed that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple access interference. The harmful effect of deep channel fade conditions can be effectively mitigated by a proper choice of these network coding maps. Alternatively, in this paper we propose a Distributed Space-Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces of the space of channel fade coefficients, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit low-decoding-complexity DSTC designs which satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high signal-to-noise ratio, the DSTC scheme provides large gains when compared to the conventional exclusive-OR network code and performs better than the adaptive network coding scheme.
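To make the notion of singular fade subspaces concrete, the sketch below enumerates, for a 4-QAM signal set, the fade-coefficient ratios h_A/h_B at which two distinct transmit pairs (x_A, x_B) become indistinguishable at the relay, i.e., h_A·Δx_A + h_B·Δx_B = 0 for some nonzero difference pair; each such ratio defines one singular fade subspace. The signal set and normalization are illustrative assumptions.

```python
import numpy as np

# 4-QAM (QPSK) constellation, used here for illustration.
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

# All nonzero symbol differences Δx = x - x'.
diffs = [d for d in (qam[:, None] - qam[None, :]).ravel() if d != 0]

# A deep fade occurs when h_A/h_B = -Δx_B/Δx_A for some difference pair;
# collect the distinct ratios (rounded to merge float duplicates).
ratios = {complex(np.round(-dB / dA, 6)) for dA in diffs for dB in diffs}
print(f"{len(ratios)} nonzero singular fade ratios for 4-QAM")
```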