58 results for Bayes Theorem


Relevance: 10.00%

Publisher:

Abstract:

The effects of multiple scattering on acoustic manipulation of spherical particles using helicoidal Bessel beams are discussed. A closed-form analytical solution is developed to calculate the acoustic radiation force exerted by a Bessel beam on an acoustically reflective sphere, in the presence of an adjacent spherical particle, immersed in an unbounded fluid medium. The solution is based on the standard Fourier decomposition method, and the effect of multiple scattering is taken into account using the addition theorem for spherical coordinates. Of particular interest here is the investigation of the effects of multiple scattering on the emergence of negative axial forces. To investigate these effects, the radiation force applied to the target particle by a helicoidal Bessel beam of different azimuthal indices (m = 1 to 4), at different conical angles, is computed. Results are presented for soft and rigid spheres of various sizes, separated by a finite distance. Results show that the emergence of negative force regions is very sensitive to the level of cross-scattering between the particles. It is also shown that in multiple scattering media, the negative axial force may occur at much smaller conical angles than previously reported for single particles, and that acoustic manipulation of soft spheres in such media may also become possible.


Language models (LMs) are often constructed by combining multiple individual component models using context-independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context-dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word context, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower-order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion with respect to corpus size. A range of schemes for combining weight information obtained from training data and test-data hypotheses is also proposed to improve robustness during context-dependent LM adaptation. In addition, a minimum Bayes risk (MBR) based discriminative training scheme is proposed, together with an efficient weighted finite-state transducer (WFST) decoding algorithm for context-dependent interpolation. The proposed technique was evaluated on a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, along with consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
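A context-dependent mixture of this kind can be sketched in a few lines; the component probabilities and history-dependent weights below are invented for illustration and are not taken from the paper:

```python
# Sketch of context-dependent linear interpolation of two component LMs.
# All probabilities and weights here are illustrative values.

def interpolate(component_probs, weights):
    """Combine component LM probabilities P_i(w|h) with weights lambda_i(h)."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must form a distribution
    return sum(lam * p for lam, p in zip(weights, component_probs))

# A discrete history weighting function: a different mixture per
# previous-word context, with a back-off entry for unseen contexts.
context_weights = {
    "the": [0.7, 0.3],
    "<unk>": [0.5, 0.5],
}

probs = [0.02, 0.08]  # P_1(w|h) and P_2(w|h) for some word w
p = interpolate(probs, context_weights["the"])
```

With context-independent weighting there would be a single weight vector; making the weights a function of the history is what inflates the parameter count and motivates the robust estimation schemes described above.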


Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground: they retain distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, yet generally require less computational time than Markov chain Monte Carlo (MCMC) methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
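The compactness property mentioned above can be illustrated with a toy example that is not from the paper: for a correlated bivariate Gaussian target, the optimal mean-field Gaussian factor is known to have variance equal to the inverse of the precision-matrix diagonal, which is strictly smaller than the true marginal variance whenever the correlation is non-zero.

```python
# Toy illustration of variational compactness (invented numbers):
# the target p(x1, x2) is a zero-mean Gaussian with unit marginal
# variances and correlation rho. The optimal mean-field factor q(x_i)
# has variance 1 / Lambda_ii, where Lambda is the precision matrix.

rho = 0.9
true_marginal_var = 1.0                    # Sigma_ii of the target
precision_diag = 1.0 / (1.0 - rho ** 2)    # Lambda_ii for this Sigma
mean_field_var = 1.0 / precision_diag      # = 1 - rho^2

# The stronger the correlation, the more q underestimates uncertainty.
assert mean_field_var < true_marginal_var
```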


Leading-edge vortices are considered to be important in generating the high lift coefficients observed in insect flight and may therefore be relevant to micro-air vehicles. A potential flow model of an impulsively started flat plate, featuring a leading-edge vortex (LEV) and a trailing-edge vortex (TEV), is fitted to experimental data in order to provide insight into the mechanisms that influence the convection of the LEV and to study how the LEV contributes to lift. The potential flow model fits the experimental data best with no bound circulation, which is in accordance with Kelvin's circulation theorem. The lift-to-drag ratio is well approximated by cot α for α > 15°, which supports the tentative conclusion that shortly after an impulsive start, at post-stall angles of attack, lift is caused by non-circulatory forces and by the action of the LEV, as opposed to bound circulation. Copyright © 2012 by C. W. Pitt Ford.
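The cot α behaviour follows directly if the resultant force acts normal to the plate, as expected when there is no bound circulation and no leading-edge suction; a minimal numerical check (angles chosen arbitrarily):

```python
import math

# If the resultant force N is normal to the plate, then
#   lift = N cos(alpha), drag = N sin(alpha), so L/D = cot(alpha).

def lift_to_drag(alpha_deg):
    a = math.radians(alpha_deg)
    return math.cos(a) / math.sin(a)  # N cancels from the ratio

ratio_45 = lift_to_drag(45.0)  # cot 45 deg = 1
ratio_30 = lift_to_drag(30.0)  # cot 30 deg = sqrt(3)
```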


Quantile regression refers to the process of estimating the quantiles of a conditional distribution and has many important applications within econometrics and data mining, among other domains. In this paper, we show how to estimate these conditional quantile functions within a Bayes risk minimization framework using a Gaussian process prior. The resulting non-parametric probabilistic model is easy to implement and allows non-crossing quantile functions to be enforced. Moreover, it can be used directly in combination with tools and extensions of standard Gaussian processes, such as principled hyperparameter estimation, sparsification, and quantile regression with input-dependent noise rates. No existing approach enjoys all of these desirable properties. Experiments on benchmark datasets show that our method is competitive with state-of-the-art approaches. © 2009 IEEE.
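The Bayes risk connection can be made concrete with the tilted absolute ("pinball") loss, whose expected value is minimised by the conditional τ-quantile; the numbers below are invented for illustration:

```python
# Pinball loss for quantile level tau: under-prediction costs tau per
# unit of error, over-prediction costs (1 - tau) per unit.

def pinball_loss(y, f, tau):
    diff = y - f
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

# For a low quantile (tau = 0.1), over-predicting is penalised more:
over = pinball_loss(1.0, 2.0, 0.1)   # predicted above the target
under = pinball_loss(2.0, 1.0, 0.1)  # predicted below the target
```

Minimising the average of this loss over the data therefore pulls the estimate toward the 10th percentile rather than the median.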


An engineer assessing the load-carrying capacity of an existing reinforced concrete slab is likely to use elastic analysis to check the load at which the structure might be expected to fail in flexure or in shear. In practice, many reinforced concrete slabs are highly ductile in flexure, so an elastic analysis greatly underestimates the loads at which they fail in this mode. The use of conservative elastic analysis has led engineers to condemn many slabs incorrectly, and therefore to specify unnecessary and wasteful flexural strengthening or replacement. The lower bound theorem is based on the same principles as the upper bound theorem used in yield line analysis, but, unlike an upper bound solution, any solution that rigorously satisfies the lower bound theorem is guaranteed to be a safe underestimate of the collapse load. Jackson presented a rigorous lower bound method that obtains very accurate results for complex real slabs.


Flapping wings often feature a leading-edge vortex (LEV) that is thought to enhance the lift generated by the wing. Here the lift on a wing featuring a leading-edge vortex is considered by performing experiments on a translating flat-plate aerofoil that is accelerated from rest in a water towing tank at a fixed angle of attack of 15°. The unsteady flow is investigated with dye flow visualization, particle image velocimetry (PIV) and force measurements. Leading- and trailing-edge vortex circulation and position are calculated directly from the velocity vectors obtained using PIV. In order to determine the most appropriate value of bound circulation, a two-dimensional potential flow model is employed and flow fields are calculated for a range of values of bound circulation. In this way, the value of bound circulation is selected to give the best fit between the experimental velocity field and the potential flow field. Early in the trajectory, the value of bound circulation calculated using this potential flow method is in accordance with Kelvin's circulation theorem, but differs from the values predicted by Wagner's growth of bound circulation and the Kutta condition. Later the Kutta condition is established, but the bound circulation remains small; most of the circulation is contained instead in the LEVs. The growth of wake circulation can be approximated by Wagner's circulation curve. Superimposing the non-circulatory lift, approximated from the potential flow model, and Wagner's lift curve gives a first-order approximation of the measured lift. Lift is generated by inertial effects and the slow build-up of circulation, which is contained in shed vortices rather than bound circulation. © 2013 Cambridge University Press.
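Wagner's circulation growth referred to above is commonly evaluated with R. T. Jones's two-term exponential approximation, which gives the fraction of the steady-state circulatory lift reached after the plate has travelled s semichords; a minimal sketch of that standard approximation (not the paper's data):

```python
import math

# Jones's approximation to Wagner's function phi(s), s in semichords.

def wagner(s):
    return 1.0 - 0.165 * math.exp(-0.0455 * s) - 0.335 * math.exp(-0.3 * s)

phi_start = wagner(0.0)   # classical result: half the steady-state lift
phi_late = wagner(100.0)  # circulation has almost fully built up
```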


This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms can be applied to the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set, which consists of around 100 million ratings, is used for evaluation. Using root mean-squared error (RMSE) as the evaluation metric, results show that the proposed model outperforms alternative factorization techniques. Results also show that Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
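The evaluation metric can be stated precisely; a small self-contained sketch with invented ratings (the paper evaluates on the Netflix data, not these numbers):

```python
import math

# Root mean-squared error between predicted and observed ratings.

def rmse(predictions, targets):
    n = len(targets)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)

score = rmse([3.5, 4.0, 2.0], [4.0, 4.0, 3.0])
```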


The information provided by the in-cylinder pressure signal is of great importance for modern engine management systems. This information is used to improve the control and diagnostics of the combustion process, in order to meet stringent emission regulations and to improve vehicle reliability and drivability. The work presented in this paper covers an experimental study and proposes a comprehensive and practical solution for the estimation of in-cylinder pressure from crankshaft speed fluctuation. The paper also emphasizes the feasibility and practicality of the estimation techniques for real-time online application. An estimation method based on an engine dynamics model is proposed: a discrete-time transformed form of a rigid-body crankshaft dynamics model is constructed from the kinetic energy theorem, as the basic expression for total torque estimation. The major difficulties, including load torque estimation and separation of the pressure profiles of adjacent-firing cylinders, are addressed, and solutions to each problem are given. Experimental results obtained on a multi-cylinder diesel engine show that the proposed method estimates cylinder pressure more accurately over a wider range of crankshaft angles. Copyright © 2012 SAE International.
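The kinetic energy theorem underlying the total torque expression can be sketched in discrete time; the inertia, angular step and speed samples below are invented, and a real implementation would also include the position-dependent and load terms discussed in the paper:

```python
import math

# Kinetic-energy theorem over one sampling interval:
#   T * dtheta = 0.5 * J * (w2**2 - w1**2)
# giving the mean total torque from two crankshaft speed samples.

J = 0.25                     # rigid-body crankshaft inertia, kg m^2 (assumed)
dtheta = math.radians(6.0)   # crank-angle step between speed samples, rad
w1, w2 = 150.0, 151.0        # successive crankshaft speeds, rad/s

T_total = J * (w2 ** 2 - w1 ** 2) / (2.0 * dtheta)  # mean torque, N m
```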


We consider the problem of positive observer design for positive systems defined on solid cones in Banach spaces. The design is based on the Hilbert metric, and convergence properties are analyzed in the light of the Birkhoff theorem. Two main applications are discussed: positive observers for systems defined in the positive orthant, and positive observers on the cone of positive semi-definite matrices, with a view toward quantum systems. © 2011 IEEE.
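On the positive orthant the Hilbert metric has a simple closed form, which makes its projective nature easy to check; a minimal sketch with arbitrary vectors:

```python
import math

def hilbert_distance(x, y):
    """Hilbert projective metric between two vectors in the positive orthant."""
    ratios = [xi / yi for xi, yi in zip(x, y)]
    return math.log(max(ratios) / min(ratios))

d = hilbert_distance([1.0, 2.0], [2.0, 1.0])           # log 4
d_scaled = hilbert_distance([1.0, 2.0], [20.0, 10.0])  # same rays: same distance
```

Scaling either vector leaves the distance unchanged, which is why the metric is natural for observers defined on cones, where states are meaningful only up to positive scaling.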


Convergence analysis of consensus algorithms is revisited in the light of the Hilbert distance. The Lyapunov function used in the early analysis by Tsitsiklis is shown to be the Hilbert distance to consensus in log coordinates. Birkhoff's theorem, which proves contraction of the Hilbert metric for any positive homogeneous monotone map, provides an early yet general convergence result for consensus algorithms. Because Birkhoff's theorem holds in arbitrary cones, we extend consensus algorithms to the cone of positive definite matrices. The proposed generalization finds applications in the convergence analysis of quantum stochastic maps, which are a generalization of stochastic maps to non-commutative probability spaces. © 2010 IEEE.
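Birkhoff's contraction result can be checked numerically on a small example: a strictly positive row-stochastic consensus map strictly decreases the Hilbert distance between any two positive states. The matrix and vectors below are invented for illustration:

```python
import math

def hilbert_distance(x, y):
    ratios = [xi / yi for xi, yi in zip(x, y)]
    return math.log(max(ratios) / min(ratios))

def apply(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[0.6, 0.4],   # strictly positive row-stochastic consensus map
     [0.3, 0.7]]

x, y = [1.0, 4.0], [5.0, 2.0]
before = hilbert_distance(x, y)
after = hilbert_distance(apply(A, x), apply(A, y))
assert after < before  # Birkhoff: a positive map contracts the Hilbert metric
```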


Lyapunov's second theorem is an essential tool for the stability analysis of differential equations. This paper provides an analogous theorem for incremental stability analysis by lifting the Lyapunov function to the tangent bundle. The Lyapunov function endows the state space with a Finsler structure. Incremental stability is inferred from infinitesimal contraction of the Finsler metric through integration along solution curves. © 2013 IEEE.
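The conclusion of such a contraction analysis can be illustrated numerically; the toy system and the integrator here are not from the paper. For the contracting system ẋ = −x + sin t, any two solutions converge to each other even though neither converges to an equilibrium:

```python
import math

def simulate(x0, t_end=5.0, dt=1e-3):
    """Forward-Euler integration of the contracting system xdot = -x + sin(t)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-x + math.sin(t))
        t += dt
    return x

gap_initial = abs(2.0 - (-1.0))
gap_final = abs(simulate(2.0) - simulate(-1.0))  # shrinks roughly like e^(-t)
```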


Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials was required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data, far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
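The model-comparison quantity used above is the log Bayes factor, the difference in log marginal likelihood of the data under the two models; a toy sketch with invented per-trial values, assuming independent trials:

```python
# Log Bayes factor between model M1 and model M2 from per-trial
# log marginal likelihoods (independent trials, invented numbers).

def log_bayes_factor(loglik_m1, loglik_m2):
    return sum(loglik_m1) - sum(loglik_m2)

lbf = log_bayes_factor([-1.2, -0.8, -1.0], [-1.0, -0.7, -0.9])
# lbf < 0: the trials favour M2
```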