868 results for Probabilistic Error Correction


Relevance:

20.00%

Publisher:

Abstract:

There are many methods for decomposing signals into a sum of amplitude- and frequency-modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
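The Kalman-smoothing step can be illustrated with a deliberately simplified sketch: a scalar random-walk state observed in Gaussian noise, filtered forward and then smoothed with a backward Rauch-Tung-Striebel pass. This is not the paper's coupled amplitude/phase model; the function name, the noise parameters q and r, and the synthetic signal are all illustrative assumptions.

```python
# Scalar random-walk Kalman filter + RTS smoother (simplified sketch;
# the paper's model has many coupled amplitude and phase variables).
# State model: x_t = x_{t-1} + process noise; observation: y_t = x_t + noise.
import random

def kalman_smooth(ys, q=0.01, r=0.25):
    """Forward Kalman filter followed by a backward RTS smoothing pass."""
    n = len(ys)
    m_f, p_f = [0.0] * n, [0.0] * n           # filtered means / variances
    m, p = ys[0], r                           # initialise from first sample
    m_f[0], p_f[0] = m, p
    for t in range(1, n):
        p_pred = p + q                        # predict (random-walk dynamics)
        k = p_pred / (p_pred + r)             # Kalman gain
        m = m + k * (ys[t] - m)               # update mean
        p = (1 - k) * p_pred                  # update variance
        m_f[t], p_f[t] = m, p
    m_s = m_f[:]                              # backward (smoothing) pass
    for t in range(n - 2, -1, -1):
        g = p_f[t] / (p_f[t] + q)             # smoother gain
        m_s[t] = m_f[t] + g * (m_s[t + 1] - m_f[t])
    return m_s

random.seed(0)
true_level = 2.0
ys = [true_level + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_smooth(ys)
print(round(est[100], 2))   # close to the true level of 2.0
```

The soft constraints described in the abstract play the role of the process-noise variance q here: a small q penalises fast variation in the estimated quantity.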


A wide-area, error-free ultra-high-frequency (UHF) radio frequency identification (RFID) interrogation system is presented, based on multiple antennas operating in cooperation to provide high-quality ubiquitous coverage. The system uses an intelligent distributed antenna system (DAS) in which two or more spatially separated transmit and receive antenna pairs allow greatly improved multiple-tag identification performance over wide areas. The system is shown to increase the read accuracy of 115 passive UHF RFID tags from below 60% to 100% over a 10 m × 8 m open-plan office area. The returned signal strength of the tag backscatter signals is also increased by an average of 10 dB and 17 dB over areas of 10 m × 8 m and 10 m × 4 m respectively. Furthermore, the DAS RFID system is shown to have improved immunity to tag orientation. Finally, the new system is also shown to increase the read rate of a population of tags compared with a conventional RFID system.
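A crude intuition for why cooperating antennas help: if each antenna pair independently read a tag with probability p, the tag would be missed only if every pair missed it. This toy independence model (function name and numbers are illustrative; it understates the paper's result, where spatial diversity against multipath fading takes the read rate all the way to 100%) still shows the qualitative effect:

```python
# Toy independence model of cooperative antenna coverage (illustrative only:
# the paper's gains come from spatial diversity against fading nulls,
# not from independent retries).
def read_probability(p_single, n_antennas):
    """Probability that at least one of n antenna pairs reads a tag,
    assuming independent attempts with per-pair probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_antennas

for n in (1, 2, 4):
    print(n, round(read_probability(0.6, n), 3))   # 0.6, 0.84, 0.974
```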


Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman's coalescent, Dirichlet diffusion trees and Wishart processes.
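Of the nonparametric tools surveyed, the Dirichlet process is perhaps the easiest to simulate: its clustering behaviour can be sketched with the Chinese restaurant process, in which customer i joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. The sketch below is a minimal sampler; the value of alpha and the seed are arbitrary choices.

```python
# Chinese restaurant process: a minimal sampler exhibiting the
# Dirichlet process's "rich get richer" clustering behaviour.
import random

def crp(n_customers, alpha, rng):
    """Sample a partition: customer i sits at table k with probability
    n_k / (i + alpha), or at a new table with probability alpha / (i + alpha)."""
    tables = []                                # occupancy counts per table
    assignments = []
    for i in range(n_customers):
        weights = tables + [alpha]             # last slot = open a new table
        r = rng.uniform(0, i + alpha)          # total weight is i + alpha
        acc, k = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(tables):
            tables.append(1)                   # new table
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = random.Random(1)
assignments, tables = crp(100, alpha=2.0, rng=rng)
print(len(tables), "clusters for 100 customers")
```

The number of occupied tables grows only logarithmically with the number of customers, which is how the Dirichlet process lets the model complexity adapt to the data.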


A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision depends not only on an experimentally controlled reliability parameter (shape), but also exhibits additional variability. Despite this variability in precision, human subjects seem to take precision into account near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain's remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.
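In the simplest Gaussian case, taking precision into account item by item means weighting each noisy measurement by its inverse variance. This textbook combination rule (not the paper's change-detection model, and with made-up numbers) is a few lines:

```python
# Precision-weighted combination of noisy Gaussian measurements:
# the optimal estimate weights each cue by its precision (1 / variance).
def precision_weighted_mean(measurements, variances):
    """Return the optimal combined estimate and its variance."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(p * x for p, x in zip(precisions, measurements)) / total
    return mean, 1.0 / total

# A reliable cue (variance 1) dominates an unreliable one (variance 4).
est, var = precision_weighted_mean([10.0, 20.0], [1.0, 4.0])
print(est, var)   # 12.0 0.8
```

Note that the combined variance (0.8) is smaller than either input variance: integrating cues according to their precision always reduces uncertainty.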


Amplitude demodulation is an ill-posed problem and so it is natural to treat it from a Bayesian viewpoint, inferring the most likely carrier and envelope under probabilistic constraints. One such treatment is Probabilistic Amplitude Demodulation (PAD), which, whilst computationally more intensive than traditional approaches, offers several advantages. Here we provide methods for estimating the uncertainty in the PAD-derived envelopes and carriers, and for learning free parameters such as the time-scale of the envelope. We show how the probabilistic approach can naturally handle noisy and missing data. Finally, we indicate how to extend the model to signals which contain multiple modulators and carriers.
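A minimal classical baseline helps fix ideas about what PAD improves on: generate an amplitude-modulated signal, then recover the envelope by full-wave rectifying and smoothing. This heuristic is what PAD replaces with explicit inference; it yields no uncertainty estimates, its window length is hand-tuned rather than learned, and it cannot handle missing data. The signal, frequencies, and window size below are illustrative assumptions.

```python
# Classical envelope recovery: rectify and smooth. A baseline, not PAD.
import math

def naive_envelope(signal, window=25):
    """Full-wave rectification followed by a moving-average low-pass filter."""
    rect = [abs(s) for s in signal]
    half = window // 2
    out = []
    for i in range(len(rect)):
        lo, hi = max(0, i - half), min(len(rect), i + half + 1)
        out.append(sum(rect[lo:hi]) / (hi - lo))
    return out

n = 1000
envelope = [1.0 + 0.5 * math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
carrier = [math.sin(2 * math.pi * 50 * t / n) for t in range(n)]
signal = [e * c for e, c in zip(envelope, carrier)]
est = naive_envelope(signal)
# The mean of |sin| is 2/pi, so rescale the rectified estimate.
est = [x * math.pi / 2 for x in est]
```

Even on this clean synthetic signal the recovered envelope carries ripple at the carrier period; with noise or gaps, the heuristic degrades in ways the probabilistic treatment is designed to handle.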


Auditory scene analysis is extremely challenging. One approach, perhaps that adopted by the brain, is to base useful representations of sounds on prior knowledge about their statistical structure. For example, sounds with harmonic sections are common, and so time-frequency representations are efficient. Most current representations concentrate on the shorter time-scale components. Here, we propose representations for structures on longer time-scales, like the phonemes and sentences of speech. We decompose a sound into a product of processes, each with its own characteristic time-scale. This demodulation cascade relates to classical amplitude demodulation, but traditional algorithms fail to realise the representation fully. A new approach, probabilistic amplitude demodulation, is shown to outperform the established methods, and to extend easily to the representation of a full demodulation cascade.
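The generative view of the demodulation cascade can be sketched directly: a sound is a product of positive modulators at progressively slower time-scales times a fast carrier. The sketch below is a toy generator, not the inference algorithm; the AR(1) construction of the modulators, the time-scales, and the exp() positivity transform are all illustrative assumptions.

```python
# Generative sketch of a demodulation cascade:
# sound = slow modulator * fast modulator * carrier.
import math
import random

def modulator(n, timescale, rng):
    """A positive, slowly varying modulator: unit-variance AR(1) noise
    with the given correlation time-scale, made positive via exp()."""
    a = math.exp(-1.0 / timescale)            # AR(1) coefficient
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + math.sqrt(1 - a * a) * rng.gauss(0, 1)
        out.append(math.exp(0.5 * x))
    return out

rng = random.Random(0)
n = 2000
slow = modulator(n, 500, rng)                 # sentence/phrase-scale envelope
fast = modulator(n, 20, rng)                  # phoneme-scale envelope
carrier = [rng.gauss(0, 1) for _ in range(n)] # fast carrier process
sound = [s * f * c for s, f, c in zip(slow, fast, carrier)]
```

Inference runs this construction in reverse: given only `sound`, recover the modulators at each time-scale, which is exactly the ill-posed problem the probabilistic treatment addresses.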


A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Off-line evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model gives confusions that are closer to the type of confusions observed in live situations. The hope is that these new error models will be able to improve the resulting performance of trained dialogue systems. © 2012 IEEE.
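Dirichlet-distributed confidence scores for a simulated N-best list can be drawn with the standard construction of normalising independent Gamma draws. This is a generic sampler to make the idea concrete, not the paper's fitted model; the concentration parameters below are made up (skewed so that the top hypothesis usually receives most of the mass).

```python
# Sampling simulated N-best confidence scores from a Dirichlet distribution
# via the Gamma-normalisation construction.
import random

def dirichlet_sample(alphas, rng):
    """Draw from Dirichlet(alphas): normalise independent Gamma(a, 1) draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(42)
# Illustrative concentrations for a 4-best list plus a residual "rest" mass.
scores = dirichlet_sample([5.0, 2.0, 1.0, 0.5, 0.5], rng)
print([round(s, 3) for s in scores])
```

Because the draws are normalised, the scores always sum to one, matching the interpretation of an N-best list's confidence mass.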


This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms can be applied to the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show that Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
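The factorization underlying such models, stripped of the ordinal likelihood and the Bayesian inference, is low-rank reconstruction of the observed entries. The point-estimate baseline below (plain squared-error SGD on a made-up toy data set) is only meant to make that structure concrete; the paper's Gibbs and variational algorithms replace it with posterior inference under an ordinal observation model.

```python
# Plain squared-error matrix factorization by SGD: a point-estimate
# baseline, in contrast to the paper's Bayesian ordinal model.
import random

def factorize(ratings, n_users, n_items, k=2, steps=2000, lr=0.05, reg=0.02):
    """Fit user factors U and item factors V so that U[u] . V[i] ~ rating."""
    rng = random.Random(0)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        u, i, r = ratings[rng.randrange(len(ratings))]
        err = r - sum(U[u][f] * V[i][f] for f in range(k))
        for f in range(k):      # simultaneous regularised gradient step
            U[u][f], V[i][f] = (U[u][f] + lr * (err * V[i][f] - reg * U[u][f]),
                                V[i][f] + lr * (err * U[u][f] - reg * V[i][f]))
    return U, V

# Toy (user, item, rating) data with two taste groups -- purely illustrative.
ratings = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 1, 5),
           (2, 2, 5), (2, 3, 4), (3, 2, 4), (3, 3, 5)]
U, V = factorize(ratings, n_users=4, n_items=4)
sq = sum((r - sum(U[u][f] * V[i][f] for f in range(2))) ** 2
         for u, i, r in ratings)
rmse = (sq / len(ratings)) ** 0.5
```

RMSE on the training entries drops well below one star here; the interesting quantity in the paper is of course held-out RMSE, where the ordinal likelihood and priors earn their keep.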


In geotechnical engineering, soil classification is an essential component in the design process. Field methods such as the cone penetration test (CPT) can be used as less expensive and faster alternatives to sample retrieval and testing. Unfortunately, current soil classification charts based on CPT data and laboratory measurements are too generic, and may not provide an accurate prediction of the soil type. A probabilistic approach is proposed here to update and modify soil identification charts based on site-specific CPT data. The probability that a soil is correctly classified is also estimated. The updated identification chart can be used for a more accurate prediction of the classification of the soil, and can account for prior information available before conducting the tests, site-specific data, and measurement errors. As an illustration, the proposed approach is implemented using CPT data from the Treporti Test Site (TTS) near Venice (Italy) and the National Geotechnical Experimentation Sites (NGES) at Texas A&M University. The applicability of the site-specific chart for other sites in Venice Lagoon is assessed using data from the Malamocco test site, approximately 20 km from TTS.
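The updating step at the heart of this approach is Bayes' rule over discrete soil classes: combine a prior over classes with the likelihood of the observed CPT reading under each class to obtain site-specific posterior class probabilities. The sketch below uses Gaussian class-conditional models of a normalised cone resistance; the class names, prior, means, and standard deviations are all made up for illustration and are not the paper's calibrated charts.

```python
# Bayes' rule over discrete soil classes given a CPT reading.
import math

def posterior(prior, likelihoods):
    """P(class | data) proportional to P(data | class) * P(class)."""
    unnorm = {c: prior[c] * likelihoods[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative (made-up) prior and class-conditional models of a
# normalised cone resistance; a real chart would be fit to site data.
prior = {"clay": 0.5, "silt": 0.3, "sand": 0.2}
models = {"clay": (1.0, 0.5), "silt": (2.0, 0.8), "sand": (4.0, 1.5)}

q = 1.2   # observed normalised cone resistance
like = {c: gauss_pdf(q, *models[c]) for c in prior}
post = posterior(prior, like)
print(max(post, key=post.get))   # clay is most probable at this reading
```

The same machinery accommodates the three ingredients the abstract lists: the prior encodes information available before testing, the likelihoods are refit to site-specific data, and measurement error widens the class-conditional distributions.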