922 results for Best-known bounds
Abstract:
A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include the decoding technique of Sirianunpiboon et al. as a special case. We show that both the ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all five of these codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and the best error performance among all known codes for 2, 3 and 4 transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
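As a minimal sketch of the conditional zero-forcing idea underlying the ACZF decoder (assuming a generic real-valued linear model y = Hx + n, a BPSK alphabet and the hypothetical helper conditional_zf_decode; the paper's decoder additionally adapts the symbol partition to the channel realization), one can enumerate a subset of symbols, zero-force the rest conditioned on each hypothesis, and keep the joint candidate with the smallest ML metric:

```python
import numpy as np
from itertools import product

def conditional_zf_decode(y, H, k1, alphabet):
    """Sketch of conditional zero-forcing for y = H x + n.

    The first k1 entries of x are enumerated exhaustively; for each
    hypothesis the remaining entries are zero-forced and quantized,
    and the joint candidate with the smallest residual wins."""
    H1, H2 = H[:, :k1], H[:, k1:]
    H2_pinv = np.linalg.pinv(H2)          # fixed per channel realization
    best, best_metric = None, np.inf
    for x1 in product(alphabet, repeat=k1):
        x1 = np.array(x1)
        r = y - H1 @ x1                   # cancel the conditioned symbols
        x2 = H2_pinv @ r                  # zero-force the rest
        # quantize each coordinate to the nearest alphabet point
        x2 = np.array([min(alphabet, key=lambda s: abs(s - v)) for v in x2])
        x = np.concatenate([x1, x2])
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# toy example: 4 BPSK symbols, condition on the first 2
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.05 * rng.standard_normal(6)
print(conditional_zf_decode(y, H, k1=2, alphabet=[-1.0, 1.0]), x)
```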
Abstract:
This work derives inner and outer bounds on the generalized degrees of freedom (GDOF) of the K-user symmetric MIMO Gaussian interference channel. For the inner bound, an achievable GDOF is derived by employing a combination of treating interference as noise, zero-forcing at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, depending on the number of antennas and the INR/SNR level. An outer bound on the GDOF is derived using a combination of the notion of cooperation and providing side information to the receivers. Several interesting conclusions are drawn from the bounds. For example, in terms of the achievable GDOF in the weak interference regime, when the number of transmit antennas (M) is equal to the number of receive antennas (N), treating interference as noise performs the same as the HK scheme and is GDOF optimal. For K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. However, for N/M < K ≤ N/M + 1, the HK scheme is found to be GDOF optimal.
Abstract:
Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of SD, known as fast sphere decoding (FSD), has recently been studied by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the field of real numbers, we define a quadratic form (QF) on this space, called the Hurwitz-Radon QF (HRQF), and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity depends only on the weight matrices defining the code and their ordering, and not on the channel realization (even though the equivalent channel used in SD depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured by a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain an ordering that leads to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multi-group decodable codes, fast-decodable codes and fast group decodable codes) are presented in the framework of the HRQF.
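The bilinear form at the heart of the HRQF can be tabulated directly. The sketch below (the helper hr_matrix and the choice of the Alamouti code as the test case are illustrative assumptions) computes the pairwise Frobenius norms ||A_i A_j^H + A_j A_i^H||_F; a zero off-diagonal entry means the two real symbols never interact in the ML metric, which is what multi-group and fast decoding exploit:

```python
import numpy as np

def hr_matrix(weights):
    """Tabulate the Hurwitz-Radon products ||A_i A_j^H + A_j A_i^H||_F.

    A zero off-diagonal entry means the two real symbols never interact
    in the ML decoding metric."""
    n = len(weights)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            P = (weights[i] @ weights[j].conj().T
                 + weights[j] @ weights[i].conj().T)
            M[i, j] = np.linalg.norm(P)
    return M

# Weight matrices of the Alamouti code, X = x1*A1 + x2*A2 + x3*A3 + x4*A4
A = [np.eye(2, dtype=complex),
     np.diag([1j, -1j]),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[0, 1j], [1j, 0]])]
print(hr_matrix(A))   # all off-diagonal zeros: single-symbol decodable
```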
Abstract:
In the document classification community, support vector machines (SVMs) and the naive Bayes classifier are known for their simplicity yet excellent performance. The feature subsets used by these two approaches typically complement each other; however, little has been done to combine them. The essence of this paper is a linear classifier, very similar to these two. We propose a novel way of combining the two approaches, synthesizing the best of them into a hybrid model. We evaluate the proposed approach on the 20 Newsgroups (20NG) dataset and compare it with its counterparts. Our results strongly corroborate the effectiveness of the approach.
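The abstract does not spell out the combination rule, but one well-known hybrid in this spirit is the NBSVM of Wang and Manning, in which naive Bayes log-count ratios rescale the features of a linear SVM. The following sketch applies that (swapped-in, not necessarily the paper's) technique to a two-class slice of 20NG with scikit-learn:

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Two-class slice of 20NG with binarized counts, as in NBSVM.
cats = ["rec.autos", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = CountVectorizer(binary=True)
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)
ytr, yte = train.target, test.target

# Naive Bayes log-count ratios with add-one smoothing.
p = np.asarray(Xtr[ytr == 1].sum(axis=0)).ravel() + 1.0
q = np.asarray(Xtr[ytr == 0].sum(axis=0)).ravel() + 1.0
r = np.log((p / p.sum()) / (q / q.sum()))

# The hybrid: NB ratios rescale the features of a linear SVM.
clf = LinearSVC(C=1.0).fit(Xtr.multiply(r), ytr)
print("test accuracy:", clf.score(Xte.multiply(r), yte))
```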
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ seconds of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that the optimal mapping has a special discrete structure, and we present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
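A quick Monte Carlo sketch conveys the failure mode described above. Assuming uniform metrics and the simple linear mapping T = t_max(1 - metric), which is not the optimal discrete mapping derived in the paper, selection succeeds only when the runner-up timer expires at least Δ later:

```python
import numpy as np

def success_prob(n_nodes, delta, t_max=1.0, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability of selecting the best node.

    Metrics are uniform on (0, 1); the mapping T = t_max * (1 - metric) is
    strictly decreasing, so the best node's timer always expires first.
    Selection fails when the runner-up fires within delta of the winner."""
    rng = np.random.default_rng(seed)
    timers = t_max * (1.0 - rng.random((trials, n_nodes)))
    timers.sort(axis=1)
    return float(np.mean(timers[:, 1] - timers[:, 0] >= delta))

for n in (2, 5, 10):
    print(n, success_prob(n, delta=0.05))
```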
Abstract:
We consider bounds for the capacity region of the Gaussian X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. We first classify the XC into two classes: the strong XC and the mixed XC. In the strong XC, either both direct channels are stronger than the cross channels or vice versa, whereas in the mixed XC, one of the direct channels is stronger than the corresponding cross channel and the other is weaker. After this classification, we give outer bounds on the capacity region for each of the two classes. These are based on the idea that when one of the messages is eliminated from the XC, the rate region of the remaining three messages is enlarged. We make use of the Z channel, a system obtained by eliminating one message and its corresponding channel from the X channel, to bound the rate region of the remaining messages. The outer bound on the rate region of the remaining messages defines a subspace of R_+^4 and forms an outer bound on the capacity region of the XC. Thus, the outer bound on the capacity region of the XC is obtained as the intersection of the outer bounds for the four combinations of rate triplets of the XC. Using these outer bounds on the capacity region of the XC, we derive new sum-rate outer bounds for both strong and mixed Gaussian XCs and compare them with those existing in the literature. We show that the sum-rate outer bound for the strong XC gives the sum-rate capacity in three out of the four sub-regions of the strong Gaussian XC capacity region. For the mixed Gaussian XC, we recover the recent results in [11], which showed that the sum-rate capacity is achieved in two out of the three sub-regions of the mixed XC capacity region, and we give a simple alternate proof of the same.
Abstract:
We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst-case noise covariance matrix. It is shown that the worst-case noise covariance matrix is a saddle point of a zero-sum, two-player convex-concave game, which we solve using a primal-dual interior point method that handles the maximization and minimization parts of the problem simultaneously. Next, we propose an achievable scheme that employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
Abstract:
Estimating a program's worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested using the phases of such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which applies to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in the estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further by {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy improves by 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
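Concretely, Chebyshev's inequality P(|X - μ| ≥ kσ) ≤ 1/k² is distribution-free, so choosing k = 1/√(1 - p) yields a CPI bound that holds with probability at least p. The sketch below (the synthetic gamma-distributed CPI samples and the plug-in sample moments are illustrative assumptions) computes such a bound for the three probabilities used in the evaluation:

```python
import numpy as np

def chebyshev_cpi_bound(cpi_samples, p):
    """Distribution-free upper bound on CPI holding with probability >= p.

    Chebyshev: P(|X - mu| >= k * sigma) <= 1 / k**2 for any distribution.
    Sample mean and std are used as plug-in estimates (an approximation)."""
    mu, sigma = np.mean(cpi_samples), np.std(cpi_samples)
    k = 1.0 / np.sqrt(1.0 - p)
    return mu + k * sigma

rng = np.random.default_rng(0)
cpi = rng.gamma(shape=9.0, scale=0.25, size=1000)   # synthetic phase CPI
for p in (0.9, 0.95, 0.99):
    print(p, round(chebyshev_cpi_bound(cpi, p), 3))
# The phase's WCET contribution is then bound * (instructions in the phase).
```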
Abstract:
This paper addresses the problem of finding optimal power control policies for wireless energy harvesting sensor (EHS) nodes with automatic repeat request (ARQ)-based packet transmissions. The EHS harvests energy from the environment according to a Bernoulli process, and it is required to operate within the constraint of energy neutrality. The EHS obtains partial channel state information (CSI) at the transmitter through the link-layer ARQ protocol, via the ACK/NACK feedback messages, and uses it to adapt the transmission power for the packet (re)transmission attempts. The underlying wireless fading channel is modeled as a finite-state Markov chain with known transition probabilities. The goal of the power management policy is thus to determine the best power setting for the current packet transmission attempt, so as to optimize a long-run expected reward such as the expected outage probability. The problem is addressed in a decision-theoretic framework by casting it as a partially observable Markov decision process (POMDP). Due to the large size of the state space, the exact solution to the POMDP is computationally expensive. Hence, two popular approximate solutions are considered, which yield good power management policies for the transmission attempts. Monte Carlo simulation results illustrate the efficacy of the approach and show that the approximate solutions significantly outperform conventional approaches.
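The core POMDP machinery here is the belief update from ACK/NACK observations. A toy two-state sketch follows; the transition matrix, per-state ACK probabilities and the helper belief_update are illustrative assumptions, whereas the paper uses a larger finite-state channel and power-dependent success probabilities:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])               # Markov channel: bad <-> good
p_ack = np.array([0.1, 0.9])             # P(ACK | state) at a fixed power

def belief_update(b, ack):
    """One POMDP belief update from an ACK/NACK observation."""
    b_pred = P.T @ b                      # predict through the Markov chain
    like = p_ack if ack else 1.0 - p_ack  # ACK/NACK likelihood per state
    b_post = like * b_pred
    return b_post / b_post.sum()

b = np.array([0.5, 0.5])                  # initial belief over channel states
for ack in (False, False, True, True):
    b = belief_update(b, ack)
    print("ACK" if ack else "NACK", b.round(3))
```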
Abstract:
A Finite Feedback Scheme (FFS) for a quasi-static MIMO block fading channel with finite N-ary delay-free, noise-free feedback consists of N Space-Time Block Codes (STBCs) at the transmitter, one corresponding to each possible value of the feedback, and a function at the receiver that generates the N-ary feedback. A number of FFSs available in the literature provably attain full diversity. However, there is no known full-diversity criterion that applies universally to all FFSs. In this paper, a universal necessary condition for any FFS to achieve full diversity is given, and based on this criterion the notion of Feedback-Transmission duration optimal (FT-optimal) FFSs is introduced: schemes that use the minimum amount of feedback N for a given transmission duration T, and the minimum T for a given N, to achieve full diversity. When there is no feedback (N = 1), an FT-optimal scheme consists of a single STBC, and the proposed condition reduces to the well-known necessary and sufficient condition for an STBC to achieve full diversity. Also, a sufficient criterion for full diversity is given for FFSs in which the component STBC yielding the largest minimum Euclidean distance is chosen; using this criterion, full-rate (N_t complex symbols per channel use, where N_t is the number of transmit antennas) full-diversity FT-optimal schemes are constructed for all N_t > 1. These are the first full-rate full-diversity FFSs reported in the literature for T < N_t. Simulation results show that the new schemes have the best error performance among all known FFSs.
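To illustrate the selection rule, the sketch below picks, for a given channel realization, the component codebook whose received constellation has the largest minimum Euclidean distance. The toy 1x2 channel, BPSK codewords and single-antenna phase rotation are illustrative assumptions, not the constructions in the paper:

```python
import numpy as np
from itertools import product

def best_codebook_index(H, codebooks):
    """Return the index of the codebook with the largest minimum
    pairwise Euclidean distance at the receiver for channel H."""
    def min_dist(cb):
        pts = [H @ X for X in cb]
        return min(np.linalg.norm(a - b)
                   for i, a in enumerate(pts) for b in pts[i + 1:])
    return max(range(len(codebooks)), key=lambda k: min_dist(codebooks[k]))

rng = np.random.default_rng(0)
H = rng.standard_normal((1, 2)) + 1j * rng.standard_normal((1, 2))
sym = [1.0, -1.0]
cb0 = [np.array([[a], [b]], dtype=complex) for a, b in product(sym, sym)]
rot = np.diag([1.0, np.exp(1j * np.pi / 4)])      # rotate antenna 2 only
cb1 = [rot @ X for X in cb0]
print("feedback index:", best_codebook_index(H, [cb0, cb1]))
```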
Abstract:
We study the diversity order versus rate of an additive white Gaussian noise (AWGN) channel over the whole capacity region. We show that for discrete input as well as for continuous input, Gallager's upper bounds on error probability exhibit exponential diversity in the low- and high-rate regions but only subexponential diversity in the mid-rate region. For the best available lower bounds, and for practical codes, one observes exponential diversity throughout the capacity region. However, we also show that the performance of practical codes is close to Gallager's upper bounds, and the mid-rate subexponential diversity has a bearing on the performance of practical codes. Finally, we show that the upper bounds with Gaussian input provide a good approximation throughout the capacity region even for finite constellations.
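For reference, Gallager's random coding bound for the AWGN channel with Gaussian input gives P_err ≤ exp(-n E_r(R)), where E_0(ρ) = (ρ/2) ln(1 + SNR/(1+ρ)) and E_r(R) = max over 0 ≤ ρ ≤ 1 of [E_0(ρ) - ρR]. A short sketch evaluating this classical exponent across rates (the grid search over ρ is an implementation convenience):

```python
import numpy as np

def random_coding_exponent(rate, snr, rhos=np.linspace(0.0, 1.0, 1001)):
    """Gallager exponent E_r(R) for the AWGN channel with Gaussian input.

    E_0(rho) = (rho / 2) * ln(1 + SNR / (1 + rho))  (rates in nats), and
    E_r(R) = max over 0 <= rho <= 1 of [E_0(rho) - rho * R]."""
    e0 = 0.5 * rhos * np.log(1.0 + snr / (1.0 + rhos))
    return float(np.max(e0 - rhos * rate))

snr = 10.0
capacity = 0.5 * np.log(1.0 + snr)        # nats per channel use
for frac in (0.2, 0.5, 0.8, 0.95):
    er = random_coding_exponent(frac * capacity, snr)
    print(f"R = {frac:.2f}C  ->  E_r = {er:.4f}")
# Gallager's bound: P_err <= exp(-n * E_r(R)), exponential in n for R < C.
```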
Abstract:
In wireless sensor networks (WSNs) the communication traffic is often time- and space-correlated, with multiple nodes in close proximity starting to transmit at the same time. Such a situation is known as spatially correlated contention. Random access methods to resolve such contention suffer from a high collision rate, whereas traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, spatially correlated contention persists only for a short duration, so generating an optimal or sub-optimal schedule is not very useful. On the other hand, if the algorithm takes a very long time to produce a schedule, it will not only introduce additional delay in the data transfer but also consume more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. The DTSS algorithm is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to obtain a feasible schedule. The DTSS algorithm supports unicast, multicast and broadcast scheduling simultaneously, without any modification to the protocol. The protocol has been simulated using the Castalia simulator to evaluate its run-time performance. Simulation results show that our protocol considerably reduces the time required to compute a schedule.
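The schedule-length bound can be illustrated with a greedy coloring of the interference graph, since greedy assignment never needs more than (maximum degree + 1) slots. The sketch below is centralized and deterministic, unlike the distributed, randomized DTSS algorithm, and the adjacency map is an invented toy instance:

```python
def greedy_tdma_schedule(adjacency):
    """Greedy slot assignment on an interference graph.

    Visiting nodes in order of decreasing degree, each node takes the
    smallest slot unused by its already-scheduled neighbors; at most
    (max degree + 1) slots are ever needed."""
    slots = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        taken = {slots[nbr] for nbr in adjacency[node] if nbr in slots}
        slots[node] = next(s for s in range(len(adjacency)) if s not in taken)
    return slots

# 5-node interference graph (edges = node pairs that must not share a slot)
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
schedule = greedy_tdma_schedule(adj)
print(schedule, "slots used:", max(schedule.values()) + 1)
```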
Abstract:
Pore-forming toxins are known for their ability to efficiently form transmembrane pores, which eventually leads to cell lysis. The dynamics of lysis and the underlying self-assembly or oligomerization pathways leading to pore formation are incompletely understood. In this manuscript the pore-forming kinetics and lysis dynamics of Cytolysin A (ClyA) toxins on red blood cells (RBCs) are quantified and compared with experimental lysis data. Lysis experiments are carried out on a fixed mass of RBCs, under isotonic conditions in phosphate-buffered saline, for initial toxin concentrations ranging from 2.94 to 14.7 nM. Kinetic models which account for monomer binding, conformation and oligomerization to form the dodecameric ClyA pore complex are developed, and lysis is assumed to occur when the number of pores per RBC (n_p) exceeds a critical number, n_pc. By analysing the model in the sublytic regime (n_p < n_pc), the number of pores per RBC needed to initiate lysis is found to lie between 392 and 768 for the sequential oligomerization mechanism and between 5300 and 6300 for the non-sequential mechanism. Rupture rates that are first order in the number of RBCs are seen to provide the best agreement with the lysis experiments. The time constants for pore formation are estimated to lie between 1 and 20 s, and monomer conformation time scales are found to be 2-4 times greater than the oligomerization times. Cell rupture takes place over hundreds of seconds, and occurs predominantly with a steady number of pores, ranging from 515 to 11 000 on the RBC surface for the sequential mechanism. Both the sequential irreversible and the non-sequential kinetics provide similar predictions of the hemoglobin release dynamics; however, the hemoglobin released as a function of toxin concentration is accurately captured only by the sequential model. Each mechanism develops a distinct distribution of mers on the surface, providing a unique experimentally observable fingerprint to identify the underlying oligomerization pathway. Our study offers a method to quantify the extent and dynamics of lysis, which is an important aspect of developing novel drug and gene delivery strategies based on pore-forming toxins.
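A minimal sketch of the sequential pathway, an irreversible cascade M_i + M_1 → M_{i+1} up to the dodecamer with a single illustrative rate constant (the paper's models additionally include membrane binding, conformation steps and the critical pore number for lysis), can be integrated directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sequential_oligomerization(t, c, k):
    """Rates for the irreversible cascade M_i + M_1 -> M_{i+1}.

    c[0] is the membrane-bound monomer; c[11] the complete 12-mer pore.
    A single illustrative rate constant k is used for every step."""
    growth = k * c[:-1] * c[0]       # step i consumes species i and a monomer
    dc = np.zeros_like(c)
    dc[1:] += growth                 # species i+1 is produced
    dc[:-1] -= growth                # species i is consumed
    dc[0] -= growth.sum()            # one extra monomer consumed per step
    return dc

c0 = np.zeros(12)
c0[0] = 10.0                         # nM of bound monomer, no oligomers yet
sol = solve_ivp(sequential_oligomerization, (0.0, 50.0), c0,
                args=(0.05,), dense_output=True)
t = np.linspace(0.0, 50.0, 6)
print("12-mer pore concentration:", sol.sol(t)[-1].round(4))
```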
Abstract:
The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and to the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
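The boundary leakage problem mentioned above is easy to reproduce with a conventional fixed-bandwidth Gaussian kernel: for strictly positive data such as peak flows, the estimated density assigns nonzero probability mass to negative values, which the D-kernel is designed to avoid. A short sketch (the lognormal synthetic flows are an illustrative assumption):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
flows = rng.lognormal(mean=5.0, sigma=0.6, size=200)   # positive "peak flows"

kde = gaussian_kde(flows)               # conventional fixed-bandwidth KDE
x = np.linspace(-100.0, flows.max() * 1.5, 4000)
density = kde(x)

# Boundary leakage: a bona fide peak-flow density has no mass below zero,
# yet the Gaussian kernel smears probability across the x = 0 boundary.
leaked = float(density[x < 0.0].sum() * (x[1] - x[0]))
print(f"probability mass leaked below zero: {leaked:.4f}")
```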
Abstract:
A Monte Carlo filter, based on the idea of averaging over characteristics and fashioned after a particle-based, time-discretized approximation to the Kushner-Stratonovich (KS) nonlinear filtering equation, is proposed. A key aspect of the new filter is the gain-like additive update, designed to approximate the innovation integral in the KS equation and implemented through an annealing-type iterative procedure aimed at rendering the innovation (the observation-prediction mismatch) for a given time step into a zero-mean Brownian increment corresponding to the measurement noise. This may be contrasted with the weight-based multiplicative updates in most particle filters, which are known to precipitate the numerical problem of weight collapse in a finite-ensemble setting. A study to estimate the a priori error bounds of the proposed scheme is undertaken. The numerical evidence, gathered from the assessed performance of the proposed filter and a few competing filters on a class of nonlinear dynamic system identification and target tracking problems, is suggestive of the remarkably improved convergence and accuracy of the new filter.
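The contrast between additive and multiplicative updates can be seen in a toy ensemble filter. The sketch below uses an ensemble-Kalman-style gain update, a simpler relative of the proposed filter's annealed innovation update (the scalar dynamics and noise levels are illustrative assumptions); no importance weights appear, so weight collapse cannot occur:

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, n_steps = 500, 50
q, r = 0.1, 0.5                                   # process / measurement noise

x_true = 1.0
particles = rng.standard_normal(n_part)           # initial ensemble
for _ in range(n_steps):
    # toy scalar nonlinear dynamics and a noisy scalar observation
    x_true = 0.9 * x_true + 0.1 * np.sin(x_true) + q * rng.standard_normal()
    y = x_true + r * rng.standard_normal()

    # propagate the ensemble through the same dynamics
    particles = (0.9 * particles + 0.1 * np.sin(particles)
                 + q * rng.standard_normal(n_part))

    # additive gain-type update: no importance weights, hence no collapse
    y_pred = particles + r * rng.standard_normal(n_part)
    C = np.cov(particles, y_pred)
    particles = particles + (C[0, 1] / C[1, 1]) * (y - y_pred)

print("true state:", round(x_true, 3), "ensemble mean:", round(particles.mean(), 3))
```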