50 results for Elliptic Variational Inequalities
Abstract:
Model compensation methods for noise-robust speech recognition have shown good performance. Predictive linear transformations can approximate these methods to balance computational complexity and compensation accuracy. This paper examines both of these approaches from a variational perspective. Using a matched-pair approximation at the component level yields a number of standard forms of model compensation and predictive linear transformations. However, a tighter bound can be obtained by using variational approximations at the state level. Both model-based and predictive linear transform schemes can be implemented in this framework. Preliminary results show that the tighter bound obtained from the state-level variational approach can yield improved performance over standard schemes. © 2011 IEEE.
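As background for the bound-tightening argument above, the generic variational lower bound involved can be written as follows (a generic sketch in standard notation, not the paper's specific component-level or state-level forms):

\log p(\mathbf{o} \mid s) \;\ge\; \mathbb{E}_{q(\boldsymbol{\theta})}\big[\log p(\mathbf{o}, \boldsymbol{\theta} \mid s)\big] \;-\; \mathbb{E}_{q(\boldsymbol{\theta})}\big[\log q(\boldsymbol{\theta})\big]

where o is the noisy observation, s indexes the component or state, θ collects the latent clean-speech and noise variables, and q is the approximating distribution. Applying the bound at the state rather than the component level corresponds to a less restricted q and hence a tighter bound.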
Abstract:
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground: unlike maximum a posteriori (MAP) methods they retain distributional information about uncertainty in latent variables, and yet they generally require less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both of which involve variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First, we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
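For reference, the free-energy optimised by vEM takes the standard form (generic notation, not reproduced from the paper):

\mathcal{F}(q, \theta) \;=\; \mathbb{E}_{q(\mathbf{z})}\big[\log p(\mathbf{x}, \mathbf{z} \mid \theta)\big] + \mathcal{H}[q] \;=\; \log p(\mathbf{x} \mid \theta) \;-\; \mathrm{KL}\big[q(\mathbf{z}) \,\|\, p(\mathbf{z} \mid \mathbf{x}, \theta)\big]

The E-step maximises F over q within a restricted family and the M-step maximises it over θ. The compactness property mentioned above follows from the KL[q || p] direction of the divergence, which penalises q for placing mass where the true posterior has little, so the approximation tends to be narrower than the posterior it replaces.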
Abstract:
The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported, including Gaussian and heavy-tailed likelihoods for regression, as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, Laplace's method for dealing with non-Gaussian likelihoods, and FITC for dealing with large regression tasks.
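GPML itself is a MATLAB/Octave toolbox, so the sketch below is not its API; it is a minimal, library-free NumPy illustration of the "mean function plus covariance function" specification described above, using a zero mean, a squared-exponential covariance, and exact Gaussian inference. All function names and parameter values here are illustrative.

import numpy as np

def sq_exp_cov(X1, X2, lengthscale=1.0, signal_var=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d = X1[:, None] - X2[None, :]
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X, y, Xs, noise_var=0.1):
    # Exact GP regression with a zero mean function:
    # posterior mean and marginal variance at test inputs Xs.
    K = sq_exp_cov(X, X) + noise_var * np.eye(len(X))
    Ks = sq_exp_cov(X, Xs)
    Kss = sq_exp_cov(Xs, Xs)
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                            # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)    # posterior marginal variance
    return mean, var

# Toy usage on synthetic data.
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X) + 0.1 * np.random.randn(20)
Xs = np.linspace(0.0, 5.0, 100)
mu, s2 = gp_predict(X, y, Xs)

The Cholesky-based solves mirror the usual numerically stable recipe for exact GP inference; swapping in other mean or covariance functions only changes the two helper calls.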
Abstract:
Predictions for a 75 × 205 mm surface semi-elliptic defect in the NESC-1 spinning cylinder test have been made using BS PD 6493:1991, the R6 procedure, non-linear cracked-body finite element analysis techniques, and the local approach to fracture. All the techniques agree in predicting ductile tearing near the inner surface of the cylinder followed by cleavage initiation. However, they differ in the amount of ductile tearing and in the exact location and time of any cleavage event. The amount of ductile tearing decreases with increasing sophistication in the analysis, owing to the drop in peak crack driving force and more explicit consideration of constraint effects. The local approach predicts a high probability of cleavage in both the HAZ and the base material after 190 s, while the other predictions suggest that cleavage is unlikely in the HAZ due to constraint loss, but likely in the underlying base material. The timing of this event varies from ∼150 s for the R6 predictions to ∼250-300 s using non-linear cracked-body analysis.
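The "local approach" referred to above is commonly a Weibull-stress (Beremin-type) model for cleavage; purely as background, and not necessarily the exact variant used in this study, a typical form is

P_f \;=\; 1 - \exp\!\left[-\left(\frac{\sigma_w}{\sigma_u}\right)^{m}\right], \qquad \sigma_w \;=\; \left(\frac{1}{V_0}\int_{V_{\mathrm{pl}}} \sigma_1^{\,m}\, \mathrm{d}V\right)^{1/m}

where σ₁ is the maximum principal stress integrated over the plastically deformed volume V_pl, and m, σ_u and V_0 are material parameters. Constraint loss reduces the crack-tip stresses and hence σ_w, lowering the predicted cleavage probability, which is consistent with the HAZ result described above.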
Abstract:
As the use of found data increases, more systems are being built using adaptive training. Here, transforms are used to represent unwanted acoustic variability, e.g. speaker and acoustic environment changes, allowing a canonical model to be trained that models only the "pure" variability of speech. Adaptive training may be described within a Bayesian framework. By using complexity control approaches to ensure robust parameter estimates, standard point-estimate adaptive training can be justified within this Bayesian framework. However, during recognition there is usually no control over the amount of data available. It is therefore preferable to use a full Bayesian approach to applying transforms during recognition rather than the standard point estimates. This paper discusses various approximations to Bayesian approaches, including a new variational Bayes approximation. The application of these approaches to state-of-the-art adaptively trained systems using both CAT and MLLR transforms is then described and evaluated on a large vocabulary speech recognition task. © 2005 IEEE.
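Schematically, the full Bayesian treatment referred to above marginalises the transform during recognition instead of fixing it at a point estimate, and the variational Bayes approximation replaces the intractable integral with a lower bound (generic notation, not the paper's):

p(\mathbf{O} \mid \mathcal{H}) \;=\; \int p(\mathbf{O} \mid \mathcal{H}, \mathcal{W})\, p(\mathcal{W})\, \mathrm{d}\mathcal{W} \;\ge\; \exp\!\Big(\mathbb{E}_{q(\mathcal{W})}\big[\log p(\mathbf{O} \mid \mathcal{H}, \mathcal{W})\big] - \mathrm{KL}\big[q(\mathcal{W}) \,\|\, p(\mathcal{W})\big]\Big)

where O is the observation sequence, H the hypothesis, W the transform, and q(W) the variational posterior; the point-estimate schemes correspond to collapsing q(W) onto a single transform.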
Abstract:
We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the non-parametric flexibility of Gaussian processes. This model accommodates input-dependent signal and noise correlations between multiple response variables, input-dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both efficient Markov chain Monte Carlo and variational Bayes inference procedures for this model. We apply GPRN as a multiple-output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple-output (multi-task) Gaussian process models and three multivariate volatility models on benchmark datasets, including a 1000-dimensional gene expression dataset.
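For orientation, the GPRN construction can be sketched (from memory of the published model, so the exact noise structure should be checked against the paper) as an input-dependent mixing of latent Gaussian processes:

\mathbf{y}(\mathbf{x}) \;=\; W(\mathbf{x})\big[\mathbf{f}(\mathbf{x}) + \sigma_f\, \boldsymbol{\epsilon}\big] + \sigma_y\, \mathbf{z}

where the entries of the weight matrix W(x) and of the latent functions f(x) are independent Gaussian processes and ε, z are standard Gaussian noise; because the mixing weights vary with the input x, both the signal and noise correlations between outputs become input dependent.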
Abstract:
In this paper, phase noise analysis of a mechanical autonomous impact oscillator with a MEMS resonator is performed. Since the circuit considered belongs to the class of hybrid systems, methods based on the variational model for the evaluation of either phase noise or steady-state solutions cannot be applied directly. In fact, the monodromy matrix is not defined at impact events in these systems. By introducing saltation matrices, this limitation is overcome and the aforementioned methods are extended. In particular, the unified theory developed by Demir is used to analyze the phase noise after evaluating the asymptotically stable periodic solution of the system by means of the shooting method. Numerical results are presented to show how noise sources affect the phase noise performance. © 2011 IEEE.
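The saltation matrix that restores a well-defined monodromy matrix at an impact has the classical hybrid-systems form (stated here in generic notation rather than as reproduced from the paper):

S \;=\; \frac{\partial g}{\partial \mathbf{x}} \;+\; \frac{\Big(\mathbf{f}^{+} - \frac{\partial g}{\partial \mathbf{x}}\,\mathbf{f}^{-}\Big)\, \nabla h^{\mathsf{T}}}{\nabla h^{\mathsf{T}}\, \mathbf{f}^{-}}

where h(x) = 0 defines the impact surface, g is the impact (reset) map, and f⁻, f⁺ are the vector fields evaluated just before and just after the event; inserting S at each impact allows the monodromy matrix, and hence the Demir-style phase noise analysis, to be carried across the discontinuity.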
Abstract:
Using variational methods, we establish conditions for the nonlinear stability of adhesive states between an elastica and a rigid halfspace. The treatment produces coupled criteria for adhesion and buckling instabilities by exploiting classical techniques from Legendre and Jacobi. Three examples that arise in a broad range of engineered systems, from microelectronics to biologically inspired fiber array adhesion, are used to illuminate the stability criteria. The first example illustrates buckling instabilities in adhered rods, while the second shows the instability of a peeling process and the third illustrates the stability of a shear-induced adhesion. The latter examples can also be used to explain how microfiber array adhesives can be activated by shearing and deactivated by peeling. The nonlinear stability criteria developed in this paper are also compared to other treatments. © 2012 Elsevier Ltd. All rights reserved.
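The classical second-variation tests alluded to above take, for a generic functional J[y] = ∫ L(s, y, y') ds, the following form (the paper's coupled adhesion-buckling criteria adapt these to the adhered elastica):

\delta^2 J \;=\; \int_a^b \big(P\,u'^2 + Q\,u^2\big)\,\mathrm{d}s, \qquad P = \frac{\partial^2 L}{\partial y'^2}, \quad Q = \frac{\partial^2 L}{\partial y^2} - \frac{\mathrm{d}}{\mathrm{d}s}\frac{\partial^2 L}{\partial y\,\partial y'}

The Legendre condition requires P ≥ 0 along the extremal, and the Jacobi condition requires that no nontrivial solution of the accessory equation (P u')' − Q u = 0 with u(a) = 0 vanishes again before the endpoint (i.e. no conjugate point).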
Abstract:
Pavement condition assessment is essential when developing road network maintenance programs. In practice, the data collection process is to a large extent automated. However, pavement distress detection (cracks, potholes, etc.) is mostly performed manually, which is labor-intensive and time-consuming. Existing methods either rely on complete 3D surface reconstruction, which comes with high equipment and computation costs, or make use of acceleration data, which can only provide preliminary and rough condition surveys. In this paper we present a method for automated pothole detection in asphalt pavement images. In the proposed method, an image is first segmented into defect and non-defect regions using histogram shape-based thresholding. Based on the geometric properties of a defect region, the potential pothole shape is approximated using morphological thinning and elliptic regression. Subsequently, the texture inside a potential defect shape is extracted and compared with the texture of the surrounding non-defect pavement in order to determine whether the region of interest represents an actual pothole. This methodology has been implemented in a MATLAB prototype and trained and tested on 120 pavement images. The results show that the method can detect potholes in asphalt pavement images with reasonable accuracy.
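As an illustration of the kind of pipeline described above, the following OpenCV/NumPy sketch segments dark regions, fits ellipses to candidate shapes, and compares interior texture against the surrounding pavement. It is a hedged sketch only: Otsu's method stands in for the histogram shape-based thresholding, intensity standard deviation stands in for the texture comparison, and all names and thresholds are illustrative rather than taken from the paper (which was prototyped in MATLAB).

import cv2
import numpy as np

def detect_potholes(image_path, min_area=500, texture_ratio=1.5):
    # Load and smooth the pavement image.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)

    # Otsu's threshold stands in for the histogram shape-based segmentation
    # into defect (dark) and non-defect regions.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    potholes = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area or len(cnt) < 5:
            continue
        ellipse = cv2.fitEllipse(cnt)              # approximate pothole shape
        region = np.zeros_like(gray)
        cv2.ellipse(region, ellipse, 255, -1)      # filled candidate region

        inside = gray[region == 255]
        outside = gray[(region == 0) & (mask == 0)]  # surrounding non-defect pavement
        # Simple texture proxy: a pothole interior is assumed rougher than its surroundings.
        if outside.size and inside.std() > texture_ratio * outside.std():
            potholes.append(ellipse)
    return potholes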
Abstract:
We present a model for early vision tasks such as denoising, super-resolution, deblurring, and demosaicing. The model provides a resolution-independent representation of discrete images which admits a truly rotationally invariant prior. The model generalizes several existing approaches: variational methods, finite element methods, and discrete random fields. The primary contribution is a novel energy functional, not previously written down, which combines the discrete measurements from pixels with a continuous-domain world viewed through continuous-domain point-spread functions. The value of the functional is that simple priors (such as total variation and its generalizations) on the continuous-domain world become realistic priors on the sampled images. We show that, despite its apparent complexity, optimization of this model depends on just a few computational primitives, which, although tedious to derive, can now be reused in many domains. We define a set of optimization algorithms which overcome the apparent complexity of this model and make its practical application possible. New experimental results include infinite-resolution upsampling and a method for obtaining subpixel superpixels. © 2012 IEEE.
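Schematically, the energy functional described above couples discrete pixel measurements y_i to a continuous-domain latent image u observed through a point-spread function k, with a continuous-domain prior such as total variation (a generic sketch, not the paper's exact functional):

E(u) \;=\; \sum_i \big(y_i - (k * u)(\mathbf{x}_i)\big)^2 \;+\; \lambda \int_{\Omega} \lvert \nabla u(\mathbf{x}) \rvert \,\mathrm{d}\mathbf{x}

Because the prior is defined on the continuous domain Ω rather than on the pixel grid, it remains meaningful at any target resolution, which is consistent with the resolution-independent representation and the infinite-resolution upsampling results mentioned above.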
Abstract:
This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms can be applied to the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set, which consists of around 100 million ratings, is used for evaluation. Using root mean-squared error (RMSE) as an evaluation metric, the results show that the proposed model outperforms alternative factorization techniques. The results also show that Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
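A common way to realise the ordinal likelihood in such a factorization, given here as a generic sketch rather than the paper's exact specification, is a cumulative-link (ordinal probit) model on the inner product of user and item factors:

p(r_{ij} = k \mid \mathbf{u}_i, \mathbf{v}_j) \;=\; \Phi\!\left(\frac{b_k - \mathbf{u}_i^{\mathsf{T}}\mathbf{v}_j}{\sigma}\right) - \Phi\!\left(\frac{b_{k-1} - \mathbf{u}_i^{\mathsf{T}}\mathbf{v}_j}{\sigma}\right)

with ordered thresholds b_0 < b_1 < \dots < b_K and Φ the standard normal CDF; Gibbs sampling and variational Bayes then differ in how the posterior over the factors (and thresholds) is approximated, which is the comparison evaluated on the Netflix data above.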