65 results for POSITIVE DEFINITE KERNELS
Abstract:
We introduce a stochastic process with Wishart marginals: the generalised Wishart process (GWP). It is a collection of positive semi-definite random matrices indexed by an arbitrary dependent variable. We use it to model dynamic (e.g. time-varying) covariance matrices. Unlike existing models, it can capture a diverse class of covariance structures, it can easily handle missing data, the dependent variable can readily include covariates other than time, and it scales well with dimension; there is no need for free parameters, and optional parameters are easy to interpret. We describe how to construct the GWP, introduce general procedures for inference and prediction, and show that it outperforms its main competitor, multivariate GARCH, even on financial data that especially suits GARCH. We also show how to predict the mean of a multivariate process while accounting for dynamic correlations.
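To make the construction concrete, the sketch below builds matrices with Wishart marginals as a sum of outer products of Gaussian-process draws, which is the general idea behind such a process. It is only a schematic reading, not the paper's implementation; the squared-exponential kernel, the function name and the parameters are illustrative assumptions.

import numpy as np

def gwp_sample(times, nu, L, lengthscale=1.0, seed=0):
    """Illustrative sketch: a path of PSD matrices with Wishart marginals.

    Sigma(t) = sum_{i=1}^{nu} L u_i(t) u_i(t)^T L^T, where every scalar
    component of u_i(.) is an independent zero-mean GP over `times`.
    """
    rng = np.random.default_rng(seed)
    p = L.shape[0]                          # dimension of each covariance matrix
    t = np.asarray(times, dtype=float)
    # Squared-exponential GP kernel over the time grid, with jitter for stability
    K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale ** 2)
    C = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))
    # u[i, a, t]: the a-th component of u_i at time t (independent GP draws)
    u = np.einsum('tk,iak->iat', C, rng.standard_normal((nu, p, len(t))))
    # Sum of outer products at each time point, scaled by L
    return np.einsum('ba,iat,ict,dc->tbd', L, u, u, L)

Sigma = gwp_sample(np.linspace(0.0, 5.0, 50), nu=5, L=np.eye(3))
print(Sigma.shape, np.all(np.linalg.eigvalsh(Sigma) > -1e-9))   # (50, 3, 3) True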
Abstract:
The purpose of this paper is to continue to develop the recently introduced concept of a regular positive-real function and its application to the classification of low-complexity two-terminal networks. This paper studies five- and six-element series-parallel networks with three reactive elements and presents a complete characterisation and graphical representation of the realisability conditions for these networks. The results are motivated by an approach to passive mechanical control which makes use of the inerter device. © 2009 IEEE.
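As a rough companion to the positive-real condition underlying this work, the sketch below numerically screens a rational impedance Z(s) = num(s)/den(s) for positive-realness by checking pole locations and Re Z(jω) on a frequency grid. It is not the paper's regularity or realisability analysis, and imaginary-axis poles are not handled; the function name and tolerances are assumptions.

import numpy as np

def looks_positive_real(num, den, omegas=np.logspace(-3, 3, 2000)):
    """Rough numerical screen for positive-realness of Z(s) = num(s)/den(s).

    Checks (i) no poles in the open right half-plane and (ii) Re Z(jw) >= 0
    on a frequency grid. Simple imaginary-axis poles and their residues are
    not handled, so this is only a screen, not a proof.
    """
    if np.any(np.roots(den).real > 1e-9):
        return False
    s = 1j * omegas
    Z = np.polyval(num, s) / np.polyval(den, s)
    return bool(np.all(Z.real >= -1e-9))

# Example: Z(s) = (s + 1) / (s^2 + s + 1) is positive real
print(looks_positive_real([1.0, 1.0], [1.0, 1.0, 1.0]))   # True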
Abstract:
The movement of the circular piston in an oscillating piston positive displacement flowmeter is important in understanding the operation of the flowmeter, and the leakage of liquid past the piston plays a key role in the performance of the meter. The clearances between the piston and the chamber are small, typically less than 60 μm. In order to measure this film thickness a fluorescent dye was added to the water passing through the meter, which was illuminated with UV light. Visible light images were captured with a digital camera and analysed to give a measure of the film thickness with an uncertainty of less than 7%. It is known that this method lacks precision unless careful calibration is undertaken. Methods to achieve this are discussed in the paper. The grey level values for a range of film thicknesses were calibrated in situ with six dye concentrations to select the most appropriate one for the range of liquid film thickness. Data obtained for the oscillating piston flowmeter demonstrate the value of the fluorescence technique. The method is useful, inexpensive and straightforward and can be extended to other applications where measurement of liquid film thickness is required. © 2011 IOP Publishing Ltd.
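A minimal sketch of the kind of in-situ calibration described here: fit a saturating grey-level-versus-thickness curve for one dye concentration, then invert it to recover film thickness from measured grey levels. The functional form, the numbers and the names are made up for illustration only; they are not the paper's data or procedure.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative calibration data for one dye concentration (values are made up):
# known film thicknesses in micrometres and the mean grey levels recorded.
thickness_um = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
grey_level   = np.array([32.0, 58.0, 80.0, 98.0, 112.0, 123.0])

def grey_model(t, g_max, k):
    """Saturating fluorescence response: grey level rises with film thickness t."""
    return g_max * (1.0 - np.exp(-k * t))

(g_max, k), _ = curve_fit(grey_model, thickness_um, grey_level, p0=[150.0, 0.02])

def thickness_from_grey(g):
    """Invert the fitted calibration curve to estimate film thickness (um)."""
    return -np.log(1.0 - g / g_max) / k

print(thickness_from_grey(grey_level))   # should roughly recover thickness_um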
Abstract:
Recently there has been interest in combined generative/discriminative classifiers. In these classifiers, features for the discriminative models are derived from generative kernels. One advantage of using generative kernels is that systematic approaches exist for introducing complex dependencies beyond conditional independence assumptions. Furthermore, by using generative kernels, model-based compensation/adaptation techniques can be applied to make discriminative models robust to noise/speaker conditions. This paper extends previous work with combined generative/discriminative classifiers in several directions. First, it introduces derivative kernels based on context-dependent generative models. Second, it describes how derivative kernels can be incorporated in continuous discriminative models. Third, it addresses the issues associated with the large number of classes and parameters when context-dependent models and high-dimensional features of derivative kernels are used. The approach is evaluated on two noise-corrupted tasks: the small vocabulary AURORA 2 task and the medium-to-large vocabulary AURORA 4 task.
Abstract:
Recently there has been interest in combining generative and discriminative classifiers. In these classifiers, features for the discriminative models are derived from the generative kernels. One advantage of using generative kernels is that systematic approaches exist to introduce complex dependencies into the feature-space. Furthermore, as the features are based on generative models, standard model-based compensation and adaptation techniques can be applied to make discriminative models robust to noise and speaker conditions. This paper extends previous work in this framework in several directions. First, it introduces derivative kernels based on context-dependent generative models. Second, it describes how derivative kernels can be incorporated in structured discriminative models. Third, it addresses the issues associated with the large number of classes and parameters when context-dependent models and high-dimensional feature-spaces of derivative kernels are used. The approach is evaluated on two noise-corrupted tasks: the small vocabulary AURORA 2 task and the medium-to-large vocabulary AURORA 4 task. © 2011 IEEE.
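The two abstracts above rely on derivative kernels, i.e. features built from gradients of a generative model's log-likelihood. The sketch below shows the mechanics for a single diagonal-Gaussian generative model rather than the context-dependent HMMs used in the papers; the function name and the averaging over frames are illustrative choices, not the papers' feature extraction.

import numpy as np

def derivative_features(obs, mean, var):
    """Derivative-kernel style features: the gradient of the log-likelihood of
    an observation sequence under a diagonal-Gaussian generative model, taken
    with respect to the means and variances and averaged over frames.
    """
    obs = np.atleast_2d(obs)                      # (frames, dims)
    centred = obs - mean
    d_mean = centred / var                        # d log N(o; mu, var) / d mu
    d_var = 0.5 * (centred ** 2 / var ** 2 - 1.0 / var)   # d log N / d var
    return np.concatenate([d_mean.mean(axis=0), d_var.mean(axis=0)])

# Toy usage: a fixed-length feature vector for a 3-frame, 2-dimensional sequence
phi = derivative_features(np.array([[0.1, -0.3], [0.4, 0.2], [-0.2, 0.0]]),
                          mean=np.zeros(2), var=np.ones(2))
print(phi)    # 4-dimensional feature, ready for a discriminative classifier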
Abstract:
Due to the Fermi-Dirac statistics of electrons, the temporal correlations of tunneling events in a double-barrier setup are typically negative. Here, we investigate the shot noise behavior of a system of two capacitively coupled quantum dot states by means of a master equation model. In an asymmetric setup, positive correlations in the tunneling current can arise due to the bunching of tunneling events. The underlying mechanism is discussed in detail in terms of the current-current correlation function and the frequency-dependent Fano factor.
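A minimal counting-statistics sketch of how such positive correlations show up numerically: build a toy rate (master-equation) matrix, tilt the counted transition by a counting field, and read current and zero-frequency noise off the dominant eigenvalue. The three-state model and all rates are invented for illustration; they are not the system studied in the paper, which also considers the full frequency dependence.

import numpy as np

def fano_factor(L, counted, chi=1e-3):
    """Zero-frequency Fano factor of a Markovian master equation via full
    counting statistics. L[j, i] is the rate i -> j (columns sum to zero);
    `counted` flags the transitions that carry an electron through the
    barrier at which the current is measured.
    """
    def lam(x):
        Lx = L.astype(complex)
        Lx[counted] *= np.exp(1j * x)                    # tilt the counted rates
        ev = np.linalg.eigvals(Lx)
        return ev[np.argmax(ev.real)]                    # dominant eigenvalue
    l_p, l_0, l_m = lam(chi), lam(0.0), lam(-chi)
    current = ((l_p - l_m) / (2j * chi)).real            # first cumulant
    noise = (-(l_p - 2.0 * l_0 + l_m) / chi ** 2).real   # second cumulant
    return noise / current

# Toy three-state model (empty, conducting dot, blocking state): transport
# through the conducting dot is fast, switching into the blocking state is
# slow, so the current arrives in bunches.
L = np.array([[-1.01,  0.5,  0.005],
              [ 1.00, -0.5,  0.000],
              [ 0.01,  0.0, -0.005]])
counted = np.zeros_like(L, dtype=bool)
counted[0, 1] = True                                     # count dot -> collector jumps
print(fano_factor(L, counted))                           # super-Poissonian (> 1)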
Abstract:
We consider the problem of positive observer design for positive systems defined on solid cones in Banach spaces. The design is based on the Hilbert metric, and convergence properties are analyzed in light of Birkhoff's theorem. Two main applications are discussed: positive observers for systems defined in the positive orthant, and positive observers on the cone of positive semi-definite matrices with a view towards quantum systems. © 2011 IEEE.
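For readers unfamiliar with the tools named here, the short sketch below computes the Hilbert projective metric on the positive orthant and checks numerically that a strictly positive linear map contracts it, which is the Birkhoff-type contraction behind such observer designs. The matrices and vectors are random and purely illustrative; this is not the paper's observer construction.

import numpy as np

def hilbert_metric(x, y):
    """Hilbert projective metric between two vectors in the open positive orthant."""
    r = x / y
    return float(np.log(r.max() / r.min()))

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 4))     # strictly positive linear map
x, y = rng.uniform(0.5, 2.0, size=(2, 4))  # two positive states

# Birkhoff's theorem: a strictly positive map is a strict contraction in the
# Hilbert metric, which drives observer and plant trajectories together
# (projectively) in positive-systems observer designs.
d_before = hilbert_metric(x, y)
d_after = hilbert_metric(A @ x, A @ y)
print(d_after < d_before, d_before, d_after)   # True, distance shrinks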
Abstract:
The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain a linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks. © 2011 Gilles Meyer, Silvere Bonnabel and Rodolphe Sepulchre.
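A minimal factored sketch of the general idea of learning a fixed-rank positive semidefinite matrix for a distance function: parameterize W = G G^T with a low-rank factor G and run plain gradient descent on G, so W stays PSD by construction and the per-step cost stays linear in the factor size. This is only a schematic stand-in, not the Riemannian algorithms of the paper; the loss, step size and function names are assumptions.

import numpy as np

def learn_fixed_rank_metric(X, pairs, targets, rank, steps=500, lr=0.05, seed=0):
    """Fit a rank-constrained PSD matrix W = G G^T so that the squared distance
    d_W(x, y) = (x - y)^T W (x - y) matches the given targets on the pairs,
    by gradient descent on the factor G (W is PSD by construction).
    """
    rng = np.random.default_rng(seed)
    G = 0.1 * rng.standard_normal((X.shape[1], rank))
    for _ in range(steps):
        grad = np.zeros_like(G)
        for (i, j), target in zip(pairs, targets):
            delta = X[i] - X[j]
            err = delta @ (G @ (G.T @ delta)) - target       # d_W(x_i, x_j) - target
            grad += 2.0 * err * np.outer(delta, delta @ G)   # grad of 0.5*err^2 w.r.t. G
        G -= lr * grad / len(pairs)
    return G @ G.T                                           # PSD, rank <= `rank`

# Toy usage: ask for a small distance on pair (0, 1) and a large one on (0, 2)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W = learn_fixed_rank_metric(X, pairs=[(0, 1), (0, 2)], targets=[0.1, 4.0], rank=1)
print(W)
print(np.linalg.eigvalsh(W))   # eigenvalues are non-negative (up to rounding)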