986 results for Gaussian functions


Relevance:

30.00%

Publisher:

Abstract:

The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported, including Gaussian and heavy-tailed for regression, as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, and Laplace's method for dealing with non-Gaussian likelihoods, and FITC for dealing with large regression tasks.
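
As a rough illustration of what such a toolbox automates, the following is a minimal NumPy sketch of exact GP regression with a zero mean function, a squared-exponential covariance, and a Gaussian likelihood. The function names, kernel choice, and hyperparameter values are illustrative assumptions and do not reflect the GPML API.

```python
# Minimal sketch of exact GP regression: zero mean, squared-exponential
# covariance, Gaussian likelihood. Not the GPML toolbox API.
import numpy as np

def sq_exp_cov(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-|x-x'|^2 / (2 l^2))."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X, y, X_star, noise_var=0.1, **kern_args):
    """Exact GP posterior mean and variance at test inputs X_star."""
    K = sq_exp_cov(X, X, **kern_args) + noise_var * np.eye(len(X))
    K_s = sq_exp_cov(X, X_star, **kern_args)
    K_ss = sq_exp_cov(X_star, X_star, **kern_args)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Toy usage with synthetic one-dimensional data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
mu, var = gp_predict(X, y, np.linspace(-3, 3, 100))
```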

Relevance:

30.00%

Publisher:

Abstract:

The growth of red sea urchins (Strongylocentrotus franciscanus) was modeled by using tag-recapture data from northern California. Red sea urchins (n=211) ranging in test diameter from 7 to 131 mm were examined for changes in size over one year. We used the function J_{t+1} = J_t + f(J_t) to model growth, in which J_t is the jaw size (mm) at tagging and J_{t+1} is the jaw size one year later. The function f(J_t) represents one of six deterministic models: logistic dose response, Gaussian, Tanaka, Ricker, Richards, and von Bertalanffy, with 3, 3, 3, 2, 3, and 2 minimization parameters, respectively. We found that three measures of goodness of fit ranked the models similarly, in the order given. The results from these six models indicate that red sea urchins are slow-growing animals (mean of 7.2 ± 1.3 years to enter the fishery). We show that poor model selection or data from a limited range of urchin sizes (or both) produces erroneous growth parameter estimates and years-to-fishery estimates. Individual variation in growth dominated spatial variation at shallow and deep sites (F=0.246, n=199, P=0.62). We summarize the six models using a composite growth curve of jaw size, J, as a function of time, t: J = A(B − e^{−Ct}) + Dt, in which each model is distinguished by the constants A, B, C, and D. We suggest that this composite model has the flexibility of the other six models and could be broadly applied. Given the robustness of our results regarding the number of years to enter the fishery, this information could be incorporated into future fishery management plans for red sea urchins in northern California.
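
The composite curve lends itself to a straightforward nonlinear least-squares fit. Below is a hedged sketch using scipy.optimize.curve_fit on synthetic data; the data and parameter values are illustrative only and are not the study's tag-recapture measurements.

```python
# Hedged sketch: fitting the composite growth curve J = A(B - exp(-C t)) + D t.
# Synthetic data for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def composite_growth(t, A, B, C, D):
    """Composite jaw-size growth curve from the abstract."""
    return A * (B - np.exp(-C * t)) + D * t

# Synthetic "years vs. jaw size (mm)" data (illustrative, not the study's data).
t = np.linspace(0, 15, 30)
true_params = (10.0, 1.2, 0.4, 0.1)
rng = np.random.default_rng(1)
J = composite_growth(t, *true_params) + 0.3 * rng.standard_normal(t.size)

params, cov = curve_fit(composite_growth, t, J, p0=(5.0, 1.0, 0.5, 0.05))
print("Estimated A, B, C, D:", params)
```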

Relevance:

30.00%

Publisher:

Abstract:

Kolmogorov's two-thirds law, ⟨(Δv)^2⟩ ∼ ε^(2/3) r^(2/3), and five-thirds law, E ∼ ε^(2/3) k^(−5/3), are formally equivalent in the limit of vanishing viscosity, ν → 0. However, for most Reynolds numbers encountered in laboratory-scale experiments or numerical simulations, it is invariably easier to observe the five-thirds law. By creating artificial fields of isotropic turbulence composed of a random sea of Gaussian eddies whose size and energy distribution can be controlled, we show why this is the case. The energy of eddies of scale s is shown to vary as s^(2/3), in accordance with Kolmogorov's 1941 law, and we vary the range of scales, γ = s_max/s_min, in any one realisation from γ = 25 to γ = 800. This is equivalent to varying the Reynolds number in an experiment from R_λ = 60 to R_λ = 600. While there is some evidence of a five-thirds law for γ > 50 (R_λ > 100), the two-thirds law only starts to become apparent when γ approaches 200 (R_λ ∼ 240). The reason for this discrepancy is that the second-order structure function is a poor filter, mixing information about energy and enstrophy, and about scales larger and smaller than r. In particular, in the inertial range, ⟨(Δv)^2⟩ takes the form of a mixed power law, a_1 + a_2 r^2 + a_3 r^(2/3), where a_2 r^2 tracks the variation in enstrophy and a_3 r^(2/3) the variation in energy. These findings are shown to be consistent with experimental data, where the pollution of the r^(2/3) law by the enstrophy contribution, a_2 r^2, is clearly evident. We show that higher-order structure functions (of even order) suffer from a similar deficiency.
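
To make the mixed power law concrete, here is a hedged sketch that estimates the second-order structure function of a synthetic 1D signal with a roughly k^(-5/3) spectrum and fits a_1 + a_2 r^2 + a_3 r^(2/3) by least squares. The synthetic signal is an illustrative stand-in, not the Gaussian-eddy fields used in the paper.

```python
# Hedged sketch: second-order structure function <(dv)^2>(r) of a synthetic
# signal, fitted with the mixed power law a1 + a2 r^2 + a3 r^(2/3).
import numpy as np

rng = np.random.default_rng(2)
N = 2 ** 14
# Synthetic signal with a k^(-5/3)-like spectrum (illustrative only):
# |u_k| ~ k^(-5/6) implies E(k) ~ |u_k|^2 ~ k^(-5/3).
k = np.fft.rfftfreq(N, d=1.0)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(amp * phases, n=N)

def structure_function(u, separations):
    """<(u(x+r) - u(x))^2> for each separation r (in samples, periodic)."""
    return np.array([np.mean((np.roll(u, -r) - u) ** 2) for r in separations])

r = np.arange(1, 200)
S2 = structure_function(u, r)

# Least-squares fit of S2(r) = a1 + a2 r^2 + a3 r^(2/3).
A = np.column_stack([np.ones_like(r, dtype=float), r ** 2.0, r ** (2.0 / 3.0)])
coeffs, *_ = np.linalg.lstsq(A, S2, rcond=None)
print("a1, a2, a3 =", coeffs)
```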

Relevance:

30.00%

Publisher:

Abstract:

We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull reconstruction task.
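
A loose, discretised illustration of the central idea: a density proportional to a base density multiplied by a logistically squashed draw from a GP prior, sampled by rejection. This is a sketch under simplifying assumptions (fixed grid, standard normal base density), not the exact exchangeable sampler of the GPDS.

```python
# Hedged sketch of a GP-modulated density on a grid, sampled by rejection.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(-4, 4, 400)

# Draw one function from a GP prior on the grid (squared-exponential covariance).
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 1.0 ** 2)
f = rng.multivariate_normal(np.zeros(grid.size), K + 1e-8 * np.eye(grid.size))
phi = 1.0 / (1.0 + np.exp(-f))            # squash to (0, 1)

def sample(n):
    """Rejection sampling: propose from N(0, 1), accept with probability phi(x)."""
    out = []
    while len(out) < n:
        x = rng.standard_normal()
        if rng.random() < np.interp(x, grid, phi):
            out.append(x)
    return np.array(out)

draws = sample(1000)   # samples from p(x) proportional to N(x; 0, 1) * phi(x)
```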

Relevance:

30.00%

Publisher:

Abstract:

We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
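
The parameterization described can be illustrated with the Newton-Girard identities: elementary symmetric polynomials of the one-dimensional base kernels give the interaction terms of every order. The sketch below makes the illustrative assumptions of squared-exponential base kernels and one variance per interaction order; names are not the paper's notation.

```python
# Hedged sketch of an additive kernel built from 1-D squared-exponential base
# kernels, with all interaction orders combined via the Newton-Girard identities.
import numpy as np

def additive_kernel(X1, X2, lengthscales, order_variances):
    """k(x, x') = sum_d sigma_d^2 * e_d(z_1, ..., z_D), where z_i is a 1-D SE
    kernel on input dimension i and e_d is the d-th elementary symmetric
    polynomial, computed with Newton-Girard in O(D^2)."""
    D = X1.shape[1]
    # One-dimensional base kernels z_i(x, x'), shape (D, n1, n2).
    z = np.stack([
        np.exp(-0.5 * (X1[:, None, i] - X2[None, :, i]) ** 2 / lengthscales[i] ** 2)
        for i in range(D)
    ])
    # Power sums p_k and elementary symmetric polynomials e_d (Newton-Girard).
    p = np.stack([np.sum(z ** k, axis=0) for k in range(1, D + 1)])
    e = [np.ones_like(z[0])]                      # e_0 = 1
    for d in range(1, D + 1):
        e_d = sum((-1) ** (k - 1) * e[d - k] * p[k - 1] for k in range(1, d + 1)) / d
        e.append(e_d)
    return sum(order_variances[d - 1] * e[d] for d in range(1, D + 1))

# Toy usage: 5-dimensional inputs, all interaction orders weighted equally.
rng = np.random.default_rng(4)
X = rng.standard_normal((10, 5))
K = additive_kernel(X, X, lengthscales=np.ones(5), order_variances=np.ones(5))
```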

Relevance:

30.00%

Publisher:

Abstract:

We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of system identification is more robust than finding point estimates of a parametric function representation. Our principled filtering/smoothing approach for GP dynamic systems is based on analytic moment matching in the context of the forward-backward algorithm. Our numerical evaluations demonstrate the robustness of the proposed approach in situations where other state-of-the-art Gaussian filters and smoothers can fail. © 2011 IEEE.
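
A hedged sketch of the key step, propagating a Gaussian state estimate through a GP model of the transition function. The paper uses analytic moment matching; the Monte Carlo stand-in below, with a toy GP posterior, only illustrates the idea of matching the predictive mean and variance.

```python
# Hedged sketch: Monte Carlo moment matching through a GP model of the
# transition function (the paper derives the analytic counterpart).
import numpy as np

def propagate_moments(gp_predict, mean, var, n_samples=10_000, rng=None):
    """Approximate E[f(x)] and Var[f(x)] for scalar x ~ N(mean, var), where
    gp_predict(xs) returns the GP posterior mean and variance at each x."""
    rng = rng or np.random.default_rng()
    xs = mean + np.sqrt(var) * rng.standard_normal(n_samples)
    mu, v = gp_predict(xs)
    # Law of total variance: spread of the posterior means plus their average variance.
    return mu.mean(), mu.var() + v.mean()

# Toy stand-in for a fitted GP transition model (illustrative only):
# posterior mean sin(x), constant posterior variance 0.01.
toy_gp = lambda xs: (np.sin(xs), np.full_like(xs, 0.01))
m_next, v_next = propagate_moments(toy_gp, mean=0.5, var=0.2)
```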

Relevance:

30.00%

Publisher:

Abstract:

Copulas allow marginal distributions to be learned separately from the multivariate dependence structure (copula) that links them together into a density function. Vine factorizations ease the learning of high-dimensional copulas by constructing a hierarchy of conditional bivariate copulas. However, to simplify inference, it is common to assume that each of these conditional bivariate copulas is independent of its conditioning variables. In this paper, we relax this assumption by discovering the latent functions that specify the shape of a conditional copula given its conditioning variables. We learn these functions by following a Bayesian approach based on sparse Gaussian processes with expectation propagation for scalable, approximate inference. Experiments on real-world datasets show that, when modeling all conditional dependencies, we obtain better estimates of the underlying copula of the data.
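
For concreteness, a hedged sketch of a bivariate Gaussian copula density whose correlation parameter depends on a conditioning variable through a latent function. The fixed function g below is an illustrative assumption, not a latent function learned with sparse GPs and expectation propagation as in the paper.

```python
# Hedged sketch: Gaussian copula density with a conditioning-dependent correlation.
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """c(u, v; rho) for the bivariate Gaussian copula, with u, v in (0, 1)."""
    x, y = norm.ppf(u), norm.ppf(v)
    det = 1.0 - rho ** 2
    return np.exp(-(rho ** 2 * (x ** 2 + y ** 2) - 2 * rho * x * y)
                  / (2 * det)) / np.sqrt(det)

def conditional_copula_density(u, v, w, g=np.tanh):
    """Conditional copula c(u, v | w): the correlation is g(w) in (-1, 1)."""
    return gaussian_copula_density(u, v, g(w))

print(conditional_copula_density(0.3, 0.7, w=1.5))
```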

Relevance:

30.00%

Publisher:

Abstract:

The code provided here originally demonstrated the main algorithms from Rasmussen and Williams: Gaussian Processes for Machine Learning. It has since grown to allow more likelihood functions, further inference methods and a flexible framework for specifying GPs.

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
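
A hedged sketch of the gradient part of such a reconstruction: GP regression of a 1D free energy profile from noisy gradient observations, using the cross-covariances between a function and its derivative under a squared-exponential kernel. The synthetic profile, lengthscale, and noise level are illustrative assumptions, and the sketch omits the histogram data and error bars discussed in the abstract.

```python
# Hedged sketch: reconstructing F(x) from noisy observations of dF/dx by GP
# regression with an SE kernel and its derivative cross-covariances.
import numpy as np

ell, noise = 0.5, 0.05                       # kernel lengthscale, gradient noise std

def k(a, b):                                 # Cov(F(a), F(b))
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def k_fd(a, b):                              # Cov(F(a), F'(b)) = dk/db
    d = a[:, None] - b[None, :]
    return k(a, b) * d / ell ** 2

def k_dd(a, b):                              # Cov(F'(a), F'(b)) = d^2 k / da db
    d = a[:, None] - b[None, :]
    return k(a, b) * (1.0 / ell ** 2 - d ** 2 / ell ** 4)

# Synthetic gradient observations of F(x) = x^4 - x^2 (so dF/dx = 4x^3 - 2x).
rng = np.random.default_rng(5)
X = rng.uniform(-1.2, 1.2, 40)
dF = 4 * X ** 3 - 2 * X + noise * rng.standard_normal(X.size)

# GP posterior mean of F at test points, conditioned on the gradients.
Xs = np.linspace(-1.2, 1.2, 200)
Kdd = k_dd(X, X) + noise ** 2 * np.eye(X.size)
F_mean = k_fd(Xs, X) @ np.linalg.solve(Kdd, dF)   # recovered up to an additive constant
```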

Relevance:

30.00%

Publisher:

Abstract:

We investigate the Student-t process as an alternative to the Gaussian process as a non-parametric prior over functions. We derive closed form expressions for the marginal likelihood and predictive distribution of a Student-t process, by integrating away an inverse Wishart process prior over the covariance kernel of a Gaussian process model. We show surprising equivalences between different hierarchical Gaussian process models leading to Student-t processes, and derive a new sampling scheme for the inverse Wishart process, which helps elucidate these equivalences. Overall, we show that a Student-t process can retain the attractive properties of a Gaussian process - a nonparametric representation, analytic marginal and predictive distributions, and easy model selection through covariance kernels - but has enhanced flexibility, and predictive covariances that, unlike a Gaussian process, explicitly depend on the values of training observations. We verify empirically that a Student-t process is especially useful in situations where there are changes in covariance structure, or in applications such as Bayesian optimization, where accurate predictive covariances are critical for good performance. These advantages come at no additional computational cost over Gaussian processes.
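
A hedged sketch of one standard construction: a Student-t process draw obtained by scaling a GP draw with a chi-square mixing variable, which yields heavier-tailed sample paths. The grid, kernel, and degrees of freedom are illustrative choices, and the scaling convention may differ from the paper's parameterization.

```python
# Hedged sketch: sample a Student-t process path as a scale mixture of a GP draw.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)      # SE kernel, unit scales
nu = 4.0                                                # degrees of freedom > 2

z = rng.multivariate_normal(np.zeros(x.size), K + 1e-8 * np.eye(x.size))
w = rng.chisquare(nu)
f_tp = z * np.sqrt(nu / w)     # marginally multivariate Student-t with nu d.o.f.
```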

Relevance:

30.00%

Publisher:

Abstract:

© 2010 by the American Geophysical Union. The cross-scale probabilistic structure of rainfall intensity records collected over time scales ranging from hours to decades at sites dominated by both convective and frontal systems is investigated. Across these sites, intermittency build-up from slow to fast time scales is analyzed in terms of heavy-tailed and asymmetric signatures in the scale-wise evolution of rainfall probability density functions (pdfs). The analysis demonstrates that rainfall records dominated by convective storms develop heavier-tailed power-law pdfs toward finer scales when compared with their frontal-system counterparts. Also, a concomitant marked asymmetry build-up emerges at such finer time scales. A scale-dependent probabilistic description of such fat tails and asymmetry appearance is proposed based on a modified q-Gaussian model, able to describe the cross-scale rainfall pdfs in terms of the nonextensivity parameter q, a lacunarity (intermittency) correction, and a tail asymmetry coefficient, linked to the rainfall generation mechanism.
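
For reference, a hedged sketch of the standard symmetric q-Gaussian density, the starting point for the modified (asymmetric, lacunarity-corrected) model in the abstract; normalization is done numerically for simplicity.

```python
# Hedged sketch: the standard symmetric q-Gaussian density.
import numpy as np

def q_gaussian_unnorm(x, q, beta):
    """Unnormalised q-Gaussian [1 - (1 - q) * beta * x^2]_+^(1 / (1 - q)),
    reducing to exp(-beta x^2) as q -> 1 and developing power-law tails for q > 1."""
    if np.isclose(q, 1.0):
        return np.exp(-beta * x ** 2)
    base = np.maximum(1.0 - (1.0 - q) * beta * x ** 2, 0.0)
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-10, 10, 2001)
for q in (1.0, 1.3, 1.6):                     # heavier tails as q grows
    pdf = q_gaussian_unnorm(x, q, beta=1.0)
    pdf /= pdf.sum() * (x[1] - x[0])          # numerical normalisation
```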

Relevance:

30.00%

Publisher:

Abstract:

Incoherent Thomson scattering (ITS) provides a nonintrusive diagnostic for the determination of the one-dimensional (1D) electron velocity distribution in plasmas. When the ITS spectrum is Gaussian, its interpretation as a three-dimensional (3D) Maxwellian velocity distribution is straightforward. For more complex ITS line shapes, derivation of the corresponding 3D velocity distribution and electron energy probability distribution function is more difficult. This article reviews current techniques and proposes an approach to making the transformation between a 1D velocity distribution and the corresponding 3D energy distribution. Previous approaches have either transformed the ITS spectra directly from a 1D distribution to a 3D one, or fitted two Gaussians assuming a Maxwellian or bi-Maxwellian distribution. Here, the measured ITS spectrum is transformed into a 1D velocity distribution, and the probability of finding a particle with speed between 0 and a given value v is calculated. The derivative of this probability function is shown to be the normalized electron velocity distribution function. (C) 2003 American Institute of Physics.
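
A hedged numerical illustration of the 1D-to-3D transformation for an isotropic distribution: the speed distribution follows from g(v) = −2 v df1/dv, checked here against the Maxwellian case. This is a generic consistency check, not the article's exact procedure.

```python
# Hedged sketch: isotropic 1-D velocity distribution -> 3-D speed distribution,
# g(v) = -2 v df1/dv, verified for a Maxwellian f1 in thermal units.
import numpy as np

v = np.linspace(0.0, 5.0, 1000)                        # speeds in thermal units
f1 = np.exp(-v ** 2 / 2.0) / np.sqrt(2.0 * np.pi)      # 1-D Maxwellian, unit v_th

g = -2.0 * v * np.gradient(f1, v)                      # speed distribution g(v)
maxwell_speed = np.sqrt(2.0 / np.pi) * v ** 2 * np.exp(-v ** 2 / 2.0)

print(np.max(np.abs(g - maxwell_speed)))               # small, up to finite differences
```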

Relevance:

30.00%

Publisher:

Abstract:

The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV), and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating oil price fluctuations. It is found that the PDFs exhibit scale invariance, and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Levy-type stable distribution. The relevance of a Levy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality, as the PDFs change and converge toward a Gaussian distribution at the macroscales.
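
A hedged sketch of one of the scaling estimators mentioned, rescaled range (R/S) analysis, applied to a synthetic series with uncorrelated increments (so the estimated exponent should be near 0.5). Window sizes and the test signal are illustrative, not the oil price data.

```python
# Hedged sketch: Hurst-exponent estimation by rescaled range (R/S) analysis.
import numpy as np

def rescaled_range(x, window):
    """Average R/S statistic over non-overlapping windows of a given size."""
    n = len(x) // window
    rs = []
    for i in range(n):
        seg = x[i * window:(i + 1) * window]
        dev = np.cumsum(seg - seg.mean())
        r = dev.max() - dev.min()
        s = seg.std()
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

rng = np.random.default_rng(7)
increments = rng.standard_normal(2 ** 14)          # uncorrelated increments -> H ~ 0.5
windows = np.array([2 ** k for k in range(4, 11)])
rs_vals = np.array([rescaled_range(increments, w) for w in windows])

# Hurst exponent from the slope of log(R/S) against log(window size).
H = np.polyfit(np.log(windows), np.log(rs_vals), 1)[0]
print("Estimated Hurst exponent:", H)
```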

Relevance:

30.00%

Publisher:

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
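
A hedged sketch of the eigendecomposition trick in its simplest setting: if the kernel matrix being tuned has the form lam * K0 + sigma2 * I with K0 fixed, a one-off O(N^3) eigendecomposition makes every subsequent evaluation of the log marginal likelihood O(N). The paper's identities are more general and also cover the Jacobian and Hessian; the variable names here are illustrative.

```python
# Hedged sketch: O(N) evaluation of the GP log marginal likelihood for
# K = lam * K0 + sigma2 * I, after a one-off eigendecomposition of K0.
import numpy as np

def precompute(K0, y):
    """One-off O(N^3) overhead: eigendecomposition of K0 and rotated targets."""
    eigvals, Q = np.linalg.eigh(K0)
    return eigvals, Q.T @ y

def log_marginal_likelihood(lam, sigma2, eigvals, y_rot):
    """O(N) evaluation of log p(y | lam, sigma2) for K = lam*K0 + sigma2*I."""
    d = lam * eigvals + sigma2                      # eigenvalues of K
    return -0.5 * (np.sum(y_rot ** 2 / d)
                   + np.sum(np.log(d))
                   + len(d) * np.log(2 * np.pi))

# Toy usage with a squared-exponential K0 on random 1-D inputs.
rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, 200)
K0 = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
y = np.sin(X) + 0.1 * rng.standard_normal(X.size)

eigvals, y_rot = precompute(K0, y)
print(log_marginal_likelihood(lam=1.0, sigma2=0.01, eigvals=eigvals, y_rot=y_rot))
```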

Relevance:

30.00%

Publisher:

Abstract:

In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian processes for machine learning regression. The calibration is done by maximizing the likelihood of zero coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. The only prices needed for calibration are zero coupon bond prices, and the parameters are directly obtained in the arbitrage-free risk-neutral measure.
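
For context, a sketch of the standard Vasicek zero-coupon bond pricing formula under the risk-neutral measure, P(t, T) = A(t, T) exp(−B(t, T) r_t); these log prices are the quantities whose Gaussian likelihood is maximized in the calibration. The parameter values below are illustrative assumptions.

```python
# Hedged sketch: Vasicek zero-coupon bond prices under the risk-neutral measure.
import numpy as np

def vasicek_bond_price(r0, tau, a, b, sigma):
    """Price of a zero-coupon bond with time to maturity tau, short rate r0,
    mean-reversion speed a, long-run level b, and volatility sigma."""
    B = (1.0 - np.exp(-a * tau)) / a
    logA = (b - sigma ** 2 / (2.0 * a ** 2)) * (B - tau) - sigma ** 2 * B ** 2 / (4.0 * a)
    return np.exp(logA - B * r0)

# Illustrative term structure of log bond prices (parameter values are assumptions).
maturities = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
log_prices = np.log(vasicek_bond_price(r0=0.03, tau=maturities, a=0.5, b=0.04, sigma=0.01))
print(log_prices)
```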