243 results for GAUSSIAN GENERATOR FUNCTIONS

in the Cambridge University Engineering Department Publications Database


Relevance:

30.00%

Publisher:

Abstract:

We present the Gaussian process density sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a distribution defined by a density that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We describe two such MCMC methods. Both methods also allow inference of the hyperparameters of the Gaussian process.
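
As a rough illustration of the generative process described above (not the authors' code), the sketch below draws exact samples from a density proportional to a base density times a sigmoid-squashed draw from a GP, instantiating the latent function lazily at proposed points and accepting by rejection. The kernel, hyperparameters, and base density are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def se_kernel(a, b, ell=0.5, sf=2.0):
        a, b = np.asarray(a)[:, None], np.asarray(b)[None, :]
        return sf**2 * np.exp(-0.5 * (a - b)**2 / ell**2)

    def gpds_sample(n_samples, base_sampler, jitter=1e-8):
        X, g = [], []        # locations where the latent function has been instantiated
        samples = []
        while len(samples) < n_samples:
            x = base_sampler()                        # proposal from the base density
            if X:                                     # condition the GP on g(X)
                K = se_kernel(X, X) + jitter * np.eye(len(X))
                k = se_kernel(X, [x])[:, 0]
                L = np.linalg.cholesky(K)
                alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.array(g)))
                mu = k @ alpha
                v = np.linalg.solve(L, k)
                var = max(se_kernel([x], [x])[0, 0] - v @ v, jitter)
            else:
                mu, var = 0.0, se_kernel([x], [x])[0, 0]
            gx = rng.normal(mu, np.sqrt(var))         # lazily draw g(x) | g(X)
            X.append(x); g.append(gx)
            if rng.uniform() < 1.0 / (1.0 + np.exp(-gx)):   # accept w.p. sigmoid(g(x))
                samples.append(x)
        return np.array(samples)

    draws = gpds_sample(200, base_sampler=lambda: rng.normal(0.0, 3.0))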

Relevance:

30.00%

Publisher:

Abstract:

The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported, including Gaussian and heavy-tailed for regression as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, Laplace's method for dealing with non-Gaussian likelihoods, and FITC for dealing with large regression tasks.
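
GPML itself is a MATLAB/Octave toolbox, so the following is only a loose Python analogue of the simplest model it can specify: a constant mean function, a squared-exponential covariance, a Gaussian likelihood, and exact inference. All hyperparameter values are illustrative.

    import numpy as np

    def sq_exp(a, b, ell=1.0, sf=1.0):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * d**2 / ell**2)

    def gp_predict(x_train, y_train, x_test, mean_const=0.0, noise=0.1):
        K = sq_exp(x_train, x_train) + noise**2 * np.eye(len(x_train))
        Ks = sq_exp(x_train, x_test)
        Kss = sq_exp(x_test, x_test)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train - mean_const))
        mu = mean_const + Ks.T @ alpha              # predictive mean
        V = np.linalg.solve(L, Ks)
        cov = Kss - V.T @ V                         # predictive covariance
        return mu, cov

    x = np.linspace(0, 5, 20)
    y = np.sin(x) + 0.1 * np.random.randn(20)
    mu, cov = gp_predict(x, y, np.linspace(0, 5, 100))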

Relevance:

30.00%

Publisher:

Abstract:

Kolmogorov's two-thirds law, ⟨(Δv)^2⟩ ∼ ε^(2/3) r^(2/3), and five-thirds law, E ∼ ε^(2/3) k^(-5/3), are formally equivalent in the limit of vanishing viscosity, ν → 0. However, for most Reynolds numbers encountered in laboratory-scale experiments or numerical simulations, it is invariably easier to observe the five-thirds law. By creating artificial fields of isotropic turbulence composed of a random sea of Gaussian eddies whose size and energy distribution can be controlled, we show why this is the case. The energy of eddies of scale s is shown to vary as s^(2/3), in accordance with Kolmogorov's 1941 law, and we vary the range of scales, γ = s_max/s_min, in any one realisation from γ = 25 to γ = 800. This is equivalent to varying the Reynolds number in an experiment from R_λ = 60 to R_λ = 600. While there is some evidence of a five-thirds law for γ > 50 (R_λ > 100), the two-thirds law only starts to become apparent when γ approaches 200 (R_λ ∼ 240). The reason for this discrepancy is that the second-order structure function is a poor filter, mixing information about energy and enstrophy, and from scales larger and smaller than r. In particular, in the inertial range, ⟨(Δv)^2⟩ takes the form of a mixed power law, a_1 + a_2 r^2 + a_3 r^(2/3), where a_2 r^2 tracks the variation in enstrophy and a_3 r^(2/3) the variation in energy. These findings are shown to be consistent with experimental data, where the pollution of the r^(2/3) law by the enstrophy contribution, a_2 r^2, is clearly evident. We show that higher-order structure functions (of even order) suffer from a similar deficiency.
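
The construction can be mimicked numerically. The hedged one-dimensional sketch below (not the paper's field construction) builds a random sea of Gaussian eddies whose energy scales as s^(2/3) and measures the second-order structure function, whose log-log slope only approaches 2/3 once the range of scales is wide enough. Domain size, eddy count, and amplitude scaling are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L = 2**14, 1.0
    x = np.linspace(0, L, N, endpoint=False)

    s_min, s_max = L / 800, L / 25                  # range of eddy scales
    u = np.zeros(N)
    for _ in range(4000):
        s = np.exp(rng.uniform(np.log(s_min), np.log(s_max)))   # log-uniform scales
        x0 = rng.uniform(0, L)
        A = rng.choice([-1, 1]) * s**(1 / 3)        # energy A^2 ~ s^(2/3)
        d = (x - x0 + L / 2) % L - L / 2            # periodic distance to eddy centre
        u += A * np.exp(-d**2 / s**2)

    def structure_function(u, lags):
        return np.array([np.mean((np.roll(u, -r) - u)**2) for r in lags])

    lags = np.unique(np.logspace(0, np.log10(N // 4), 40).astype(int))
    S2 = structure_function(u, lags)
    r_phys = lags * L / N
    # In the inertial range S2 behaves like a1 + a2 r^2 + a3 r^(2/3), so a clean
    # 2/3 slope in log(S2) vs log(r_phys) only emerges for a wide range of scales.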

Relevance:

30.00%

Publisher:

Abstract:

We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull reconstruction task.

Relevance:

30.00%

Publisher:

Abstract:

We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
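
The tractable parameterization alluded to above can be sketched as follows, assuming one squared-exponential base kernel per input dimension (an illustrative choice): the n-th order interaction terms are the elementary symmetric polynomials of the one-dimensional kernels, and the Newton-Girard recursion evaluates all of them without enumerating the exponentially many products.

    import numpy as np

    def base_kernels(X, Y, ell=1.0):
        # one squared-exponential kernel per input dimension; result has shape (D, n, m)
        diff = X[:, None, :] - Y[None, :, :]
        return np.exp(-0.5 * diff**2 / ell**2).transpose(2, 0, 1)

    def additive_kernel(X, Y, order_vars):
        z = base_kernels(X, Y)                       # (D, n, m)
        D = z.shape[0]
        power_sums = [np.sum(z**k, axis=0) for k in range(1, D + 1)]
        e = [np.ones(z.shape[1:])]                   # e_0 = 1
        for n in range(1, D + 1):                    # Newton-Girard recursion
            e_n = sum((-1)**(k - 1) * e[n - k] * power_sums[k - 1]
                      for k in range(1, n + 1)) / n
            e.append(e_n)
        # weighted sum of interaction orders; order_vars[n-1] weights order n
        return sum(order_vars[n - 1] * e[n] for n in range(1, D + 1))

    X = np.random.randn(5, 3)
    K = additive_kernel(X, X, order_vars=[1.0, 0.5, 0.25])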

Relevance:

30.00%

Publisher:

Abstract:

We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of system identification is more robust than finding point estimates of a parametric function representation. Our principled filtering/smoothing approach for GP dynamic systems is based on analytic moment matching in the context of the forward-backward algorithm. Our numerical evaluations demonstrate the robustness of the proposed approach in situations where other state-of-the-art Gaussian filters and smoothers can fail. © 2011 IEEE.
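
For orientation only, the sketch below shows the structure of one forward filtering step for a scalar GP dynamic system. The paper computes the required Gaussian moments analytically for squared-exponential kernels; here they are approximated by sampling, so this is not the authors' algorithm, merely its skeleton. gp_f and gp_g stand for hypothetical pre-trained GP posteriors given as (mean function, variance function) pairs.

    import numpy as np

    def moments_through_gp(gp_mean, gp_var, m, P, n_mc=2000, rng=np.random.default_rng(2)):
        # approximate E[h(x)], Var[h(x)], Cov[x, h(x)] for x ~ N(m, P), h a GP posterior
        xs = rng.normal(m, np.sqrt(P), size=n_mc)
        mu = gp_mean(xs)
        mean_out = np.mean(mu)
        var_out = np.var(mu) + np.mean(gp_var(xs))   # spread of means + average GP variance
        cov_in_out = np.mean((xs - m) * (mu - mean_out))
        return mean_out, var_out, cov_in_out

    def filter_step(m, P, y, gp_f, gp_g, q=1e-2, r=1e-2):
        # time update: push the state belief through the transition GP
        m_pred, P_pred, _ = moments_through_gp(gp_f[0], gp_f[1], m, P)
        P_pred += q
        # measurement update: push the predicted belief through the measurement GP
        z_mean, z_var, cov_xz = moments_through_gp(gp_g[0], gp_g[1], m_pred, P_pred)
        z_var += r
        K = cov_xz / z_var                           # scalar Kalman-style gain
        return m_pred + K * (y - z_mean), P_pred - K * cov_xz

    # stand-in "GP posteriors" just to make the sketch executable
    gp_f = (lambda x: 0.9 * np.sin(x), lambda x: 0.01 * np.ones_like(x))
    gp_g = (lambda x: x, lambda x: 0.01 * np.ones_like(x))
    m_new, P_new = filter_step(m=0.0, P=1.0, y=0.3, gp_f=gp_f, gp_g=gp_g)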

Relevance:

30.00%

Publisher:

Abstract:

Copulas allow marginal distributions to be learned separately from the multivariate dependence structure (the copula) that links them together into a joint density. Vine factorizations ease the learning of high-dimensional copulas by constructing a hierarchy of conditional bivariate copulas. However, to simplify inference, it is common to assume that each of these conditional bivariate copulas is independent of its conditioning variables. In this paper, we relax this assumption by discovering the latent functions that specify the shape of a conditional copula given its conditioning variables. We learn these functions by following a Bayesian approach based on sparse Gaussian processes with expectation propagation for scalable, approximate inference. Experiments on real-world datasets show that, when modeling all conditional dependencies, we obtain better estimates of the underlying copula of the data.
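
A minimal sketch of the core modelling idea follows: a conditional bivariate Gaussian copula whose correlation is a latent function of the conditioning variable, rho(z) = tanh(f(z)). In the paper f carries a sparse GP prior and is inferred with expectation propagation; here f is a fixed illustrative function so the example stays self-contained.

    import numpy as np
    from scipy.stats import norm

    def gaussian_copula_logpdf(u, v, rho):
        a, b = norm.ppf(u), norm.ppf(v)
        return (-0.5 * np.log(1 - rho**2)
                - (rho**2 * (a**2 + b**2) - 2 * rho * a * b) / (2 * (1 - rho**2)))

    def conditional_copula_logpdf(u, v, z, latent_f=lambda z: 0.5 * np.sin(z)):
        rho = np.tanh(latent_f(z))      # latent function -> correlation in (-1, 1)
        return gaussian_copula_logpdf(u, v, rho)

    # the dependence between u and v strengthens or weakens with the conditioning value z
    print(conditional_copula_logpdf(0.9, 0.85, z=1.5),
          conditional_copula_logpdf(0.9, 0.85, z=-1.5))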

Relevance:

30.00%

Publisher:

Abstract:

The code provided here originally demonstrated the main algorithms from Rasmussen and Williams: Gaussian Processes for Machine Learning. It has since grown to allow more likelihood functions, further inference methods and a flexible framework for specifying GPs.

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
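
A hedged one-dimensional sketch of the ingredient that lets histograms and gradients be combined: under a squared-exponential kernel, the covariances between a function and its derivative are available in closed form, so both kinds of observation enter a single GP regression. Hyperparameters, noise levels, and the toy data are illustrative, and the full method in the paper does considerably more.

    import numpy as np

    ell, sf = 0.7, 1.0

    def k_ff(a, b):   # cov(f(a), f(b))
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * d**2 / ell**2)

    def k_fd(a, b):   # cov(f(a), f'(b))
        d = a[:, None] - b[None, :]
        return sf**2 * (d / ell**2) * np.exp(-0.5 * d**2 / ell**2)

    def k_dd(a, b):   # cov(f'(a), f'(b))
        d = a[:, None] - b[None, :]
        return sf**2 * (1.0 / ell**2 - d**2 / ell**4) * np.exp(-0.5 * d**2 / ell**2)

    def reconstruct(xf, f_obs, xg, g_obs, x_star, noise_f=0.05, noise_g=0.1):
        K = np.block([[k_ff(xf, xf) + noise_f**2 * np.eye(len(xf)), k_fd(xf, xg)],
                      [k_fd(xf, xg).T, k_dd(xg, xg) + noise_g**2 * np.eye(len(xg))]])
        Ks = np.hstack([k_ff(x_star, xf), k_fd(x_star, xg)])
        y = np.concatenate([f_obs, g_obs])
        return Ks @ np.linalg.solve(K, y)           # posterior mean of the profile

    x_star = np.linspace(-2, 2, 50)
    mean_profile = reconstruct(np.array([-1.5, 0.0, 1.5]), np.array([1.1, 0.0, 1.2]),
                               np.array([-1.0, 1.0]), np.array([-1.8, 1.9]), x_star)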

Relevance:

30.00%

Publisher:

Abstract:

We investigate the Student-t process as an alternative to the Gaussian process as a non-parametric prior over functions. We derive closed-form expressions for the marginal likelihood and predictive distribution of a Student-t process, by integrating away an inverse Wishart process prior over the covariance kernel of a Gaussian process model. We show surprising equivalences between different hierarchical Gaussian process models leading to Student-t processes, and derive a new sampling scheme for the inverse Wishart process, which helps elucidate these equivalences. Overall, we show that a Student-t process can retain the attractive properties of a Gaussian process - a nonparametric representation, analytic marginal and predictive distributions, and easy model selection through covariance kernels - but has enhanced flexibility, and predictive covariances that, unlike a Gaussian process, explicitly depend on the values of training observations. We verify empirically that a Student-t process is especially useful in situations where there are changes in covariance structure, or in applications such as Bayesian optimization, where accurate predictive covariances are critical for good performance. These advantages come at no additional computational cost over Gaussian processes.
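
A short numpy sketch of the closed-form prediction mentioned above, assuming a zero mean function and an illustrative squared-exponential kernel: the predictive mean coincides with the GP mean, while the predictive covariance is rescaled by a data-dependent factor, which is the sense in which it depends on the training observations.

    import numpy as np

    def se_kernel(a, b, ell=1.0, sf=1.0):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * d**2 / ell**2)

    def tp_predict(x, y, x_star, nu=5.0, jitter=1e-6):
        n = len(x)
        K = se_kernel(x, x) + jitter * np.eye(n)
        Ks = se_kernel(x, x_star)
        Kss = se_kernel(x_star, x_star)
        Kinv_y = np.linalg.solve(K, y)
        mean = Ks.T @ Kinv_y                              # same as the GP predictive mean
        gp_cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
        beta = y @ Kinv_y                                 # data-dependent scale
        cov = (nu + beta - 2.0) / (nu + n - 2.0) * gp_cov # rescaled predictive covariance
        return mean, cov, nu + n                          # predictive t has nu + n dof

    x = np.array([-1.0, 0.0, 1.0])
    y = np.array([0.8, 0.1, -0.9])
    mean, cov, dof = tp_predict(x, y, np.linspace(-2, 2, 50))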

Relevance:

20.00%

Publisher: