990 results for Positive Definite Functions
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined from the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data subject to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter is determined by which the user can control the behaviour of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
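The paper's tunable Hermitian curve is not reproduced here, but the cumulative-integral idea can be illustrated with a shape-preserving Hermite variant (PCHIP), which corresponds to the fully damped limit in which overshoot is suppressed entirely. The sketch below is illustrative; the function and variable names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_src, values_src, edges_dst):
    """Re-bin histogrammed data by interpolating its cumulative integral.

    edges_src  : (n+1,) source bin edges
    values_src : (n,)   source bin contents (integral per bin)
    edges_dst  : (m+1,) destination bin edges, within the source range
    """
    # Cumulative integral at the source bin edges; nondecreasing for
    # positive-definite data.
    cum = np.concatenate(([0.0], np.cumsum(values_src)))
    # Shape-preserving Hermite interpolation (PCHIP) of the cumulative
    # curve: a monotone input yields a monotone interpolant, so the
    # re-binned contents below can never go negative.
    interp = PchipInterpolator(edges_src, cum)
    # Differences of the interpolated cumulative integral are the new
    # bin contents.
    return np.diff(interp(edges_dst))
```

Because PCHIP preserves the monotonicity of the nondecreasing cumulative sum, positive-definite data can never acquire negative bin contents, and matching the outer edges conserves the total integral.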
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero. (C) 2005 American Institute of Physics.
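As a concrete illustration of this negativity (of the pointwise kind, not of the hyperbolic-region bounds studied in the paper), the Wigner functions of harmonic-oscillator Fock states have the standard closed form W_n(x, p) = ((−1)^n/π) e^{−(x²+p²)} L_n(2(x²+p²)) with ħ = 1, which is negative at the origin for every odd n. A minimal check:

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, x, p):
    """Wigner function of the n-th harmonic-oscillator Fock state (hbar = 1)."""
    r2 = x**2 + p**2
    return ((-1.0)**n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2.0 * r2)

# The n = 1 state is maximally negative at the origin: W = -1/pi.
print(wigner_fock(1, 0.0, 0.0))   # -> -0.3183...
```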
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
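In kernel terms, a general distance matrix enters the covariance as k(x, x′) = exp(−½ (x − x′)ᵀ M (x − x′)); factoring M = LLᵀ keeps it positive definite during optimisation, and a diagonal L recovers the conventional case. The sketch below is an illustrative assumption, not the thesis code:

```python
import numpy as np

def mahalanobis_rbf(X1, X2, L):
    """RBF kernel with a general (non-diagonal) distance matrix M = L @ L.T.

    Parametrizing M through the factor L keeps it positive (semi)definite
    during optimisation; a diagonal L recovers the usual per-dimension
    lengthscale kernel.
    """
    Z1, Z2 = X1 @ L, X2 @ L          # linear map into the latent metric
    d2 = (np.sum(Z1**2, 1)[:, None] + np.sum(Z2**2, 1)[None, :]
          - 2.0 * Z1 @ Z2.T)         # squared distances in that metric
    return np.exp(-0.5 * np.maximum(d2, 0.0))   # clip rounding noise
```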
Abstract:
This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3 x 3 and 2 x 2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure.
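In the simplest unconstrained case the construction is direct; the sketch below (with hypothetical names, and not the paper's closed forms for constrained parameter sets) averages the matrix observations, which is the MLE of the mean when the covariance structure is orthogonally invariant, and reads off its eigendecomposition:

```python
import numpy as np

def eigen_mle(samples):
    """Eigenvalue/eigenvector estimates for the mean of i.i.d. Gaussian
    symmetric-matrix observations, in the unconstrained case.

    samples : (N, d, d) array of symmetric matrices (e.g., 3x3 diffusion
    tensors). Under an orthogonally invariant covariance structure the
    MLE of the mean is the sample average, so the eigen-estimates are
    its eigendecomposition.
    """
    mean = samples.mean(axis=0)
    evals, evecs = np.linalg.eigh(mean)   # eigenvalues in ascending order
    return evals, evecs
```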
Abstract:
Classical mechanics is formulated in complex Hilbert space with the introduction of a commutative product of operators, an antisymmetric bracket and a quasidensity operator that is not positive definite. These are analogues of the star product, the Moyal bracket, and the Wigner function in the phase space formulation of quantum mechanics. Quantum mechanics is then viewed as a limiting form of classical mechanics, as Planck's constant approaches zero, rather than the other way around. The forms of semiquantum approximations to classical mechanics, analogous to semiclassical approximations to quantum mechanics, are indicated.
Abstract:
In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006) but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid-search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano-type test. We conclude that threshold correlation modelling gives rise to a significant reduction in the portfolio's variance.
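A minimal sketch of the grid-search estimator for a two-regime version is given below; the variable names, the minimum regime size, and the Gaussian likelihood used to score candidate thresholds are illustrative assumptions rather than the authors' exact specification.

```python
import numpy as np

def tcc_fit(returns, z, thresholds):
    """Two-regime threshold conditional correlation (TCC) sketch.

    returns    : (T, k) standardized returns
    z          : (T,)   observable transition variable
    thresholds : candidate threshold values to scan
    """
    best = (-np.inf, None, None, None)
    for c in thresholds:
        lo, hi = z <= c, z > c
        if lo.sum() < 30 or hi.sum() < 30:       # keep regimes identifiable
            continue
        ll, corrs = 0.0, {}
        for name, mask in (("low", lo), ("high", hi)):
            X = returns[mask]
            R = np.corrcoef(X.T)                 # sample correlation: PD for
            corrs[name] = R                      # full-column-rank data
            Rinv = np.linalg.inv(R)
            _, logdet = np.linalg.slogdet(R)
            # Gaussian log-likelihood (up to constants) of this regime
            ll -= 0.5 * (mask.sum() * logdet
                         + np.einsum('ti,ij,tj->', X, Rinv, X))
        if ll > best[0]:
            best = (ll, c, corrs["low"], corrs["high"])
    return best  # (loglik, threshold, R_low, R_high)
```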
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modelling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less well known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
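Concretely, the Feller process is the square-root diffusion dX = (a − bX) dt + σ√X dW (the CIR model in finance). First-passage times rarely have elementary closed forms, so a crude Monte Carlo sketch is given below; the parameter values and the full-truncation Euler scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def feller_first_passage(x0, level, a=1.0, b=1.0, sigma=0.5,
                         dt=1e-3, t_max=50.0, n_paths=2000):
    """Monte Carlo first-passage times of dX = (a - b X) dt + sigma sqrt(X) dW
    to an upper level. Full-truncation Euler (positive part inside the
    square root) respects the nonnegativity of the process."""
    x = np.full(n_paths, float(x0))
    t_hit = np.full(n_paths, np.nan)
    for k in range(int(t_max / dt)):
        alive = np.isnan(t_hit)
        if not alive.any():
            break
        xp = np.maximum(x[alive], 0.0)           # truncated state
        x[alive] += ((a - b * xp) * dt
                     + sigma * np.sqrt(xp * dt)
                       * rng.standard_normal(alive.sum()))
        t_hit[alive & (x >= level)] = (k + 1) * dt
    return t_hit

times = feller_first_passage(x0=0.5, level=1.5)
print(np.nanmean(times))   # crude mean first-passage time estimate
```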
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition that the design matrix is positive definite, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the -optimality based selection algorithm as a preprocessing step to select a small, significant subset design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
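For the nonnegative quadratic program that yields the weights, a minimal MNQP sketch follows. It assumes an elementwise nonnegative (e.g., Gaussian) design matrix, under which the general multiplicative update collapses to the simple ratio form below; the names and fixed iteration count are illustrative, not the paper's algorithm verbatim.

```python
import numpy as np

def mnqp_weights(K, target, n_iter=200, eps=1e-12):
    """Multiplicative nonnegative quadratic programming (MNQP) sketch:
    minimize ||K @ beta - target||^2 subject to beta >= 0, sum(beta) = 1,
    assuming K is elementwise nonnegative (true for Gaussian kernels)."""
    B = K.T @ K                  # quadratic term, nonnegative entries
    b = K.T @ target             # linear term, nonnegative for Parzen target
    beta = np.full(K.shape[1], 1.0 / K.shape[1])
    for _ in range(n_iter):
        beta *= b / (B @ beta + eps)   # multiplicative update keeps beta >= 0
        beta /= beta.sum()             # enforce the unit-sum constraint
        # weights driven (near) zero drop out -> sparse estimate
    return beta
```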
Abstract:
Symmetrical behaviour of the covariance matrix and the positive-definite criterion are used to simplify identification of single-input/single-output systems using recursive least squares. Simulation results are obtained and these are compared with ordinary recursive least squares. The adaptive nature of the identifier is verified by varying the system parameters on convergence.
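The abstract does not spell out the simplification, but a standard RLS step with an explicit re-symmetrization of the covariance matrix conveys the two ingredients it mentions; this is a generic sketch, not the paper's identifier.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for a SISO model y = phi @ theta + e.

    Re-symmetrizing P after the rank-one update is a simple way to preserve
    the symmetry, and hence the positive definiteness, of the covariance.
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # parameter update
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    P = 0.5 * (P + P.T)                    # enforce symmetry numerically
    return theta, P
```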
Abstract:
In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√(Δt) and Δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
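The peaking can be seen directly from the known 2D fundamental solution Φ_α(r) = K₀(αr)/(2π) of −ΔU + α²U = 0, since K₀(αr) decays like e^{−αr}; a quick numerical check:

```python
import numpy as np
from scipy.special import k0   # modified Bessel function K_0

# Fundamental solution of -Delta U + alpha^2 U = 0 in 2D:
# Phi_alpha(r) = K_0(alpha * r) / (2 * pi).
r = np.array([1.0, 0.1, 0.01])
for alpha in (1.0, 10.0, 100.0):
    print(alpha, k0(alpha * r) / (2 * np.pi))
# K_0(alpha r) decays like exp(-alpha r), so the kernel becomes sharply
# peaked at r = 0 as alpha grows -- the regime where standard quadrature
# schemes fail.
```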
Abstract:
The energy–Casimir method is applied to the problem of symmetric stability in the context of a compressible, hydrostatic planetary atmosphere with a general equation of state. Formal stability criteria for symmetric disturbances to a zonally symmetric baroclinic flow are obtained. In the special case of a perfect gas the results of Stevens (1983) are recovered. Finite-amplitude stability conditions are also obtained that provide an upper bound on a certain positive-definite measure of disturbance amplitude.
Abstract:
Traditional derivations of available potential energy, in a variety of contexts, involve combining some form of mass conservation together with energy conservation. This raises the questions of why such constructions are required in the first place, and whether there is some general method of deriving the available potential energy for an arbitrary fluid system. By appealing to the underlying Hamiltonian structure of geophysical fluid dynamics, it becomes clear why energy conservation is not enough, and why other conservation laws such as mass conservation need to be incorporated in order to construct an invariant, known as the pseudoenergy, that is a positive-definite functional of disturbance quantities. The available potential energy is just the non-kinetic part of the pseudoenergy, the construction of which follows a well-defined algorithm. Two notable features of the available potential energy defined thereby are first, that it is a locally defined quantity, and second, that it is inherently definable at finite amplitude (though one may of course always take the small-amplitude limit if this is appropriate). The general theory is made concrete by systematic derivations of available potential energy in a number of different contexts. All the well known expressions are recovered, and some new expressions are obtained. The possibility of generalizing the concept of available potential energy to dynamically stable basic flows (as opposed to statically stable basic states) is also discussed.
Abstract:
Exact, finite-amplitude, local wave-activity conservation laws are derived for disturbances to steady flows in the context of the two-dimensional anelastic equations. The conservation laws are expressed entirely in terms of Eulerian quantities, and have the property that, in the limit of a small-amplitude, slowly varying, monochromatic wave train, the wave-activity density A and flux F, when averaged over phase, satisfy F = c_g A, where c_g is the group velocity of the waves. For nonparallel steady flows, the only conserved wave activity is a form of disturbance pseudoenergy; when the steady flow is parallel, there is in addition a conservation law for the disturbance pseudomomentum. The above results are obtained not only for isentropic background states (which give the so-called "deep form" of the anelastic equations), but also for arbitrary background potential-temperature profiles θ₀(z) so long as the variation in θ₀(z) over the depth of the fluid is small compared with θ₀ itself. The Hamiltonian structure of the equations is established in both cases, and its symmetry properties discussed. An expression for available potential energy is also derived that, for the case of a stably stratified background state (i.e., dθ₀/dz > 0), is locally positive definite; the expression is valid for fully three-dimensional flow. The counterparts to results for the two-dimensional Boussinesq equations are also noted.
Abstract:
The thermodynamic properties of dark energy fluids described by an equation of state parameter ω = p/ρ are rediscussed in the context of FRW-type geometries. Contrary to previous claims, it is argued here that the phantom regime ω < −1 is not physically possible, since both the temperature and the entropy of every physical fluid must always be positive definite. This means that one cannot appeal to negative temperatures in order to save the phantom dark energy hypothesis, as has recently been done in the literature. This result remains true as long as the chemical potential is zero. However, if the phantom fluid is endowed with a non-null chemical potential, the phantom field hypothesis becomes thermodynamically consistent; that is, there are macroscopic equilibrium states with T > 0 and S > 0 in the course of the Universe's expansion. (C) 2008 Elsevier B.V. All rights reserved.