997 results for Bayesian reasoning


Relevance: 20.00%

Abstract:

The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second-order statistics, and a new Bayesian Cramer-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, the unknown data symbols and the second-order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of EM-SBL, a threshold-based pruning of the estimated second-order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance is illustrated through Monte-Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
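
As a concrete illustration of the SBL machinery referred to above, the following is a minimal NumPy sketch of the standard SBL-EM iteration for estimating a sparse channel h from pilot observations y = Phi h + noise. The variable names (Phi, y, noise_var), the real-valued signal model and the fixed iteration count are illustrative assumptions; the paper's EM-SBL additionally estimates the unknown data symbols and prunes the second-order statistics, which is not shown here.

```python
import numpy as np

def sbl_em(Phi, y, noise_var, n_iter=50):
    """Standard SBL-EM iteration: learn per-tap prior variances (gamma) and
    return the posterior mean of the channel under the learned prior."""
    n = Phi.shape[1]
    gamma = np.ones(n)                                   # hyperparameters (tap variances)
    mu = np.zeros(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / noise_var               # posterior mean of the channel
        gamma = mu**2 + np.diag(Sigma)                   # EM update of the variances
    return mu, gamma

# Toy usage: a length-64 channel with 6 active taps observed through 32 pilots.
rng = np.random.default_rng(0)
h = np.zeros(64); h[rng.choice(64, 6, replace=False)] = rng.standard_normal(6)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
y = Phi @ h + 0.01 * rng.standard_normal(32)
h_hat, _ = sbl_em(Phi, y, noise_var=1e-4)
```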

Relevance: 20.00%

Abstract:

In this paper, we give a method for probabilistic assignment in the Realistic Abductive Reasoning Model. The knowledge is assumed to be represented in the form of causal chaining, namely a hyper-bipartite network. The hyper-bipartite network is the most generalized form of knowledge representation for which, so far, there has been no way of assigning probability to the explanations. First, the inference mechanism of the realistic abductive reasoning model is briefly described, and then a probability is assigned to each explanation so that the explanations can be picked in decreasing order of plausibility.
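
A toy sketch of the final ranking step described above, assuming a flat table of candidate explanations with made-up priors and likelihoods; the paper's probability assignment operates on a hyper-bipartite causal network rather than this flat table.

```python
# Rank candidate explanations by posterior probability (Bayes' rule).
prior = {"E1": 0.5, "E2": 0.3, "E3": 0.2}                 # P(explanation) -- invented values
likelihood = {"E1": 0.10, "E2": 0.60, "E3": 0.40}         # P(observations | explanation)

joint = {e: prior[e] * likelihood[e] for e in prior}
evidence = sum(joint.values())
posterior = {e: joint[e] / evidence for e in joint}

# Explanations in decreasing order of plausibility
for e, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{e}: {p:.3f}")
```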

Relevance: 20.00%

Abstract:

An intelligent computer aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify the design, process or material parameters that could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which is an input to the ICADA system, has been analysed. It was observed that a large proportion, i.e. 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion of about 10%. Also, a large number of defect types are either not found at all or found in a very small proportion, below a threshold of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, each with a large number of causes (between 50 and 70), as identified in the AFS analysis of casting defects (the standard reference source for the casting process), constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of a defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters which contribute to the metacauses were systematically examined, and 112 were identified as root causes. The interconnections between defects, metacauses and root causes were represented as a three-tier structured graph, and the uncertainty in the occurrence of events such as defects, metacauses and root causes was handled by Bayesian analysis. A hill-climbing search technique, combined with forward reasoning, was employed to recognize one or several root causes.
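
A toy sketch of the three-tier defect, metacause and root-cause structure described above, with a naive Bayesian-style scoring of root causes given observed defect proportions. The edge probabilities and names are invented for illustration; the actual system encodes 30 defects, 38 metacauses and 112 root causes, with uncertainty handled by Bayesian analysis and the search performed by hill climbing with forward reasoning.

```python
# Three-tier graph: defect -> metacause -> root cause, with illustrative probabilities.
defect_to_meta = {"blowhole": {"gas_entrapment": 0.7, "low_permeability": 0.3}}
meta_to_root = {
    "gas_entrapment":   {"high_moisture_sand": 0.6, "poor_venting": 0.4},
    "low_permeability": {"fine_sand_grains": 0.8, "over_ramming": 0.2},
}

observed = {"blowhole": 0.15}            # defect proportion seen in the campaign

root_score = {}
for defect, prop in observed.items():
    for meta, p_md in defect_to_meta[defect].items():
        for root, p_rm in meta_to_root[meta].items():
            # chain the conditional probabilities along defect -> metacause -> root cause
            root_score[root] = root_score.get(root, 0.0) + prop * p_md * p_rm

# Root causes ranked by score, as a stand-in for the hill-climbing search.
for root, score in sorted(root_score.items(), key=lambda kv: -kv[1]):
    print(root, round(score, 4))
```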

Relevance: 20.00%

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy resource the fault rate at the cluster head is invariant to network size when only the network layer is used and there are no MAC losses. Even after increasing the battery capacities of the nodes, the total lifetime does not increase beyond a limit of about eight times. Since this is a serious limitation, much research has been done at the MAC layer, which can adapt to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the radios available to sensor nodes for communication; this further reduces the communication overhead through idling and sleep scheduling, thereby extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate, based on joint coding, for a correlated data source over a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are equal and are stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate several MAC-based sensor network protocols and study their effect on sensor network lifetime, and we design a renewable-energy MAC routing protocol for the case in which the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities omega_1 and omega_2 and expected (Bayes) error P*, the error rate over a single hop is bounded above by P = 2P*. We study the effect of energy losses using a cross-layer simulation of a large sensor network MAC setup, including the error rate that affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability of error is close to or higher than the bound, i.e. P >= 2P*.
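
As a small numerical illustration of the single-hop bound quoted above: for two known, equiprobable class densities omega_1 and omega_2, the Bayes error P* can be computed by integrating the minimum of the weighted densities, and 2P* gives the quoted upper bound. The Gaussian densities below are illustrative choices, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 20001)
p1 = norm.pdf(x, loc=-1.0, scale=1.0)          # class density omega_1
p2 = norm.pdf(x, loc=+1.0, scale=1.0)          # class density omega_2

# Bayes error for equal priors: integrate the pointwise minimum of the weighted densities.
bayes_error = np.trapz(np.minimum(0.5 * p1, 0.5 * p2), x)
print(f"P*  = {bayes_error:.4f}")
print(f"2P* = {2 * bayes_error:.4f}")          # the upper bound referred to in the abstract
```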

Relevance: 20.00%

Abstract:

We consider the incentive compatible broadcast (ICB) problem in ad hoc wireless networks with selfish nodes, and design a Bayesian incentive compatible broadcast (BIC-B) protocol to address it. VCG mechanism based schemes have been widely used in the literature to design dominant strategy incentive compatible (DSIC) protocols for ad hoc wireless networks, but they have two critical limitations: (i) the network is required to be bi-connected, and (ii) the resulting protocol is not budget balanced. Our proposed BIC-B protocol overcomes these difficulties. We also prove the optimality of the proposed scheme.

Relevance: 20.00%

Abstract:

In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovering strictly sparse signals from noisy measurements can be viewed as one of recovering approximately sparse signals from noiseless measurements, which makes the approach applicable to strictly sparse signal recovery from noisy measurements as well. The support recovery embedded in the approach makes it suitable for recovering signals with the same sparsity profile, as in the multiple measurement vector (MMV) problem. Simulation results show that the proposed algorithm, termed the JSSR-MP (joint support and signal recovery via message passing) algorithm, achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity than M-SBL.
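
For orientation, here is a small sketch of the problem setup only (not the JSSR-MP message passing itself): multiple measurement vectors sharing one sparsity profile, observed through a common measurement matrix. The dimensions, noise level and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, L, k = 100, 40, 5, 10                      # signal dim, measurements, MMV count, support size

support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, L))
X[support, :] = rng.standard_normal((k, L))      # the same support across all L vectors
X += 1e-3 * rng.standard_normal((n, L))          # tiny off-support entries: "approximately sparse"

A = rng.standard_normal((m, n)) / np.sqrt(m)     # common measurement matrix
Y = A @ X + 0.01 * rng.standard_normal((m, L))   # noisy measurements fed to a recovery algorithm
```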

Relevance: 20.00%

Abstract:

The widely used Bayesian classifier is based on the assumption of equal prior probabilities for all the classes. However, assuming equal prior probabilities may not guarantee high classification accuracy for the individual classes. Here, we propose a novel technique, the Hybrid Bayesian Classifier (HBC), in which the class prior probabilities used in Bayesian classification of high spatial-low spectral resolution multispectral (MS) data are determined, pixel by pixel, by unmixing supplemental low spatial-high spectral resolution MS data. This is demonstrated with two separate experiments. In the first, class abundances are estimated per pixel by unmixing Moderate Resolution Imaging Spectroradiometer data and used as prior probabilities, while posterior probabilities are determined from ground training data; these are used to classify Indian Remote Sensing Satellite LISS-III MS data with a Bayesian classifier. In the second experiment, abundances obtained by unmixing Landsat Enhanced Thematic Mapper Plus data are used as priors, and posterior probabilities are determined from ground data to classify IKONOS MS images with a Bayesian classifier. The results indicate that HBC systematically exploits the information from the two image sources, improving the overall accuracy of LISS-III MS classification by 6% and of IKONOS MS classification by 9%. The inclusion of prior probabilities increased the average producer's and user's accuracies by 5.5% and 6.5% for LISS-III MS with six classes, and by 12.5% and 5.4% for IKONOS MS with the five classes considered.
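
A minimal sketch of the hybrid idea for a single pixel: the per-pixel class prior is taken from unmixed abundances of the coarser, spectrally rich image and combined with class likelihoods learned from ground training data, followed by a maximum-posterior decision. The Gaussian likelihood model and all numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import multivariate_normal

n_classes = 3
class_means = [np.array([0.2, 0.4]), np.array([0.5, 0.5]), np.array([0.8, 0.3])]
class_cov = 0.01 * np.eye(2)                 # class likelihood model learned from training data

pixel = np.array([0.45, 0.48])               # spectral vector of one high-resolution pixel
abundance_prior = np.array([0.1, 0.7, 0.2])  # per-pixel priors from spectral unmixing

# Posterior proportional to (unmixed prior) x (likelihood), normalized over classes.
posterior = np.array([
    abundance_prior[c] * multivariate_normal.pdf(pixel, class_means[c], class_cov)
    for c in range(n_classes)
])
posterior /= posterior.sum()
label = int(np.argmax(posterior))            # MAP class for this pixel
```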

Relevance: 20.00%

Abstract:

In this paper, we derive Hybrid, Bayesian and Marginalized Cramer-Rao lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement vector Sparse Bayesian Learning (SBL) problem of estimating compressible vectors and their prior distribution parameters. We assume that the unknown vector is drawn from a compressible Student's t prior distribution. We derive CRBs that account for the deterministic or random nature of the unknown parameters of the prior distribution and of the regression noise variance. We extend the MCRB to the case where the compressible vector is distributed according to a general compressible prior distribution, of which the generalized Pareto distribution is a special case. We use the derived bounds to uncover the relationship between the compressibility and the Mean Square Error (MSE) of the estimates. Further, we illustrate the tightness and utility of the bounds through simulations, by comparing them with the MSE performance of two popular SBL-based estimators. We find that the MCRB is generally the tightest among the bounds derived, and that the MSE performance of the Expectation-Maximization (EM) algorithm coincides with the MCRB for the compressible vector. We also illustrate the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector for several values of the number of observations and at different signal powers.
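
For reference, the two-stage hierarchical prior commonly used in the SBL literature, whose marginal over each coefficient is a Student's t density, can be written as follows; the paper's exact parameterization of the compressible prior may differ.

```latex
% Hierarchical Gaussian / inverse-Gamma prior whose marginal is Student's t.
\begin{align}
  y &= \Phi x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I), \\
  x_i \mid \gamma_i &\sim \mathcal{N}(0, \gamma_i), \qquad
  \gamma_i \sim \text{Inv-Gamma}(a, b), \\
  p(x_i) &= \int p(x_i \mid \gamma_i)\, p(\gamma_i)\, d\gamma_i
  \;\propto\; \left(b + \tfrac{x_i^2}{2}\right)^{-(a + 1/2)} .
\end{align}
```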

Relevance: 20.00%

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors and the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived under both Rayleigh fading and log-normal shadowing. Numerical results illustrate and provide a visual feel for the theoretical expressions obtained.
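
A minimal sketch of the decision chain described above: each sensor makes a local energy-detection decision using a CLT (Gaussian) approximation of the test statistic, and the fusion center applies a K-out-of-N rule. The SNR, sample count and threshold below are arbitrary illustrative values, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_sensors, K, M, snr = 10, 4, 200, 0.5          # sensors, fusion threshold, samples per sensor, linear SNR

def local_decision(signal_present: bool) -> int:
    # Received samples: noise, plus a primary-user signal if present.
    x = rng.standard_normal(M) + (np.sqrt(snr) * rng.standard_normal(M) if signal_present else 0.0)
    energy = np.sum(x**2)
    # CLT-based threshold: noise-only energy has mean M and variance 2M (approx. Gaussian).
    threshold = M + 3.0 * np.sqrt(2.0 * M)
    return int(energy > threshold)

votes = sum(local_decision(signal_present=True) for _ in range(N_sensors))
global_decision = int(votes >= K)               # K-out-of-N fusion rule at the fusion center
```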

Relevance: 20.00%

Abstract:

Consider a J-component series system which is put on an Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function of the location parameter of the component log-lifetime distribution is proposed, which can accommodate standard forms such as the Arrhenius, power-rule and log-linear models as special cases. The component lives are then assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other; this assumption facilitates Gibbs sampling from the joint posterior. The samples generated from the joint posterior are then used to obtain Bayesian point and interval estimates of the system reliability at the usage condition.
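
As a sketch of the final quantity of interest, the reliability of a J-component series system with Weibull component lifetimes (common shape parameter, component-specific scales) is the product of the component reliabilities; in the Bayesian workflow above this would be evaluated at each posterior draw of the parameters at the usage stress. The numbers below are placeholders.

```python
import numpy as np

def series_system_reliability(t, shape, scales):
    # R_j(t) = exp(-(t / scale_j)^shape); a series system survives only if every component does.
    return float(np.prod(np.exp(-(t / np.asarray(scales)) ** shape)))

# e.g. one posterior draw: common shape 1.8 and three component scale parameters
print(series_system_reliability(t=1000.0, shape=1.8, scales=[5000.0, 7500.0, 6200.0]))
```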

Relevance: 20.00%

Abstract:

It is well known that the impulse response of a wide-band wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this paper, we consider the estimation of the unknown channel coefficients and their support in OFDM systems using a sparse Bayesian learning (SBL) framework for exact inference. In a quasi-static, block-fading scenario, we employ the SBL algorithm for channel estimation and propose a joint SBL (J-SBL) and a low-complexity recursive J-SBL algorithm for joint channel estimation and data detection. In a time-varying scenario, we use a first-order autoregressive model for the wireless channel and propose a novel, recursive, low-complexity Kalman filtering-based SBL (KSBL) algorithm for channel estimation. We generalize the KSBL algorithm to obtain the recursive joint KSBL algorithm, which performs joint channel estimation and data detection. Our algorithms can efficiently recover a group of approximately sparse vectors even when the measurement matrix is partially unknown due to the presence of unknown data symbols. Moreover, the algorithms can fully exploit the correlation structure in the multiple measurements. Monte Carlo simulations illustrate the efficacy of the proposed techniques in terms of mean-square error and bit error rate performance.
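
A minimal sketch of the first-order autoregressive channel model that the KSBL algorithm tracks with a Kalman filter; the correlation coefficient rho, dimensions and sparsity pattern are illustrative, and the SBL hyperparameter learning itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
L, T, rho = 64, 20, 0.98                       # channel taps, OFDM blocks, AR(1) coefficient

gamma = np.zeros(L)
gamma[rng.choice(L, 6, replace=False)] = 1.0   # sparse per-tap variances (the SBL hyperparameters)
h = np.sqrt(gamma) * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

channel = [h]
for _ in range(T - 1):
    w = np.sqrt(gamma) * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    h = rho * h + np.sqrt(1 - rho**2) * w      # AR(1) evolution preserves the per-tap variance
    channel.append(h)
```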

Relevance: 20.00%

Abstract:

In this work, we address the recovery of block-sparse vectors with intra-block correlation, i.e., the recovery of vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, cluster Sparse Bayesian Learning (cluster-SBL) is an efficient tool for recovering block-sparse vectors with intra-block correlation; however, it uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, which we derive using a novel Bayesian formulation that facilitates the use of the monotonically convergent nested Expectation Maximization (EM) framework and a Kalman filtering based learning framework. Unlike the cluster-SBL algorithm, this formulation leads to closed-form EM updates for estimating the correlation coefficient. We demonstrate the efficacy of the proposed NSBL algorithm using Monte Carlo simulations.
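
A small sketch of the signal model addressed above: a block-sparse vector whose nonzero entries fall in a few clusters and are correlated within each block via an AR(1) model, observed through noisy underdetermined linear measurements. The dimensions, block size and correlation value are arbitrary, and the NSBL algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n, block_len, n_active, r = 120, 10, 3, 0.9    # length, block size, active blocks, intra-block correlation

x = np.zeros(n)
for b in rng.choice(n // block_len, size=n_active, replace=False):
    seg = np.empty(block_len)
    seg[0] = rng.standard_normal()
    for i in range(1, block_len):              # AR(1) correlation within the block
        seg[i] = r * seg[i - 1] + np.sqrt(1 - r**2) * rng.standard_normal()
    x[b * block_len:(b + 1) * block_len] = seg

A = rng.standard_normal((40, n)) / np.sqrt(40)
y = A @ x + 0.01 * rng.standard_normal(40)     # noisy underdetermined measurements
```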