863 results for Gaussian kernel
Abstract:
In this paper, a novel and effective lip-based biometric identification approach based on the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are modeled by a DHMMK and learned by a support vector machine (SVM) classifier. Our experiments are carried out in a ten-fold cross-validation fashion on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset, and the RaFD Face Dataset. Results show that our approach achieves average classification accuracies of 99.8%, 97.13%, and 98.10% on these three datasets, respectively, using only two training images per class. Our comparative studies further show that the DHMMK achieves a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip-contour-based biometrics learned by the DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the framework of the DHMMK.
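The RBF (Gaussian) kernel compared against the linear kernel in this abstract can be sketched in a few lines of numpy. The feature vectors below are placeholders, not the paper's lip-shape features; the `gamma` value is an arbitrary illustrative choice.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2), the Gaussian/RBF kernel
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Toy feature vectors standing in for the paper's lip-shape descriptors
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X, X)  # symmetric Gram matrix with unit diagonal
```

Any kernel classifier (such as the SVM used in the paper) consumes only this Gram matrix, which is why linear and RBF variants can be swapped under one framework.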
Abstract:
Thesis (Master's)--University of Washington, 2015
Abstract:
In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian process regression. The calibration maximizes the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization is carried out with the conjugate gradient method. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly under the arbitrage-free risk-neutral measure.
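The zero-coupon bond prices that drive this calibration have a standard closed form under the Vasicek model dr = a(b - r)dt + σ dW. A minimal sketch of that formula (the parameter values below are arbitrary examples, not calibrated values from the article):

```python
import numpy as np

def vasicek_zcb_price(r, tau, a, b, sigma):
    # Closed-form zero-coupon bond price P(tau) for short rate r under
    # the risk-neutral Vasicek dynamics dr = a(b - r)dt + sigma dW.
    if tau == 0:
        return 1.0  # a bond maturing now pays its face value
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((B - tau) * (a**2 * b - sigma**2 / 2) / a**2
               - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r)

p1 = vasicek_zcb_price(0.03, 1.0, a=0.5, b=0.04, sigma=0.01)
p5 = vasicek_zcb_price(0.03, 5.0, a=0.5, b=0.04, sigma=0.01)
```

Evaluating the log of this price as a function of (a, b, σ) is what the analytical likelihood and its gradients are built on.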
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions that includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results, over five-year subperiods, show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
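The univariate variance ratio statistic underlying the multivariate generalization mentioned above is simple to compute: the variance of q-period returns divided by q times the variance of one-period returns, which equals 1 under the i.i.d. random-walk null. The simulated return series and the choice q = 5 below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
r = rng.normal(0.0, 1.0, size=100_000)  # i.i.d. returns: the random-walk null

def variance_ratio(r, q):
    # Overlapping q-period returns are sums of q consecutive one-period returns.
    rq = np.convolve(r, np.ones(q), mode="valid")
    return rq.var() / (q * r.var())

vr = variance_ratio(r, 5)  # close to 1 under the null
```

Positive serial correlation pushes the ratio above 1 and mean reversion pushes it below, which is what the test exploits.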
Abstract:
The first two articles build procedures to simulate vectors of univariate states and estimate parameters in nonlinear and non-Gaussian state space models. We propose state space specifications that offer more flexibility for modeling dynamic relationships with latent variables. Our procedures are extensions of the HESSIAN method of McCausland (2012). They use approximations of the posterior density of the vector of states that make it possible to simulate directly from the posterior distribution of the state vector, to simulate the state vector in one block and jointly with the vector of parameters, and to avoid data augmentation. These properties allow the construction of posterior simulators with very high relative numerical efficiency. Being generic, they open a new path in nonlinear and non-Gaussian state space analysis with limited input from the modeler. The third article is an essay in commodity market analysis. Private firms coexist with farmers' cooperatives in commodity markets in sub-Saharan African countries. The private firms hold the largest market share, even though some theoretical models predict their disappearance once confronted with farmers' cooperatives. Moreover, some empirical studies and observations link cooperative incidence in a region to interpersonal trust, and thus to farmers' trust toward cooperatives. We propose a model that supports these empirical facts: a model in which the cooperative's reputation is a leading factor determining the market equilibrium of price competition between a cooperative and a private firm.
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization-versus-field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as nonspanning, one-dimensional-spanning, two-dimensional-spanning, or three-dimensional-spanning, depending on whether or not they span the whole lattice in the different space directions. Moreover, finite-size scaling analysis enables the identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as power laws with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
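The T=0 dynamics on the ascending branch of the hysteresis loop can be sketched compactly: spins flip up when their local field (neighbors plus quenched random field plus external field) turns positive, and flipped spins may destabilize their neighbors in turn. The lattice size, field standard deviation, and ramp below are illustrative choices, not the paper's, and avalanches are crudely counted as all flips triggered between consecutive external-field values rather than per triggering spin:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
h = rng.normal(0.0, 2.0, size=(L, L, L))  # quenched Gaussian random fields
s = -np.ones((L, L, L))                   # start fully magnetized down

def neighbor_sum(s):
    # Sum of the 6 nearest neighbors with periodic boundary conditions
    total = np.zeros_like(s)
    for axis in range(3):
        total += np.roll(s, 1, axis) + np.roll(s, -1, axis)
    return total

avalanche_sizes = []
for H in np.linspace(-15.0, 15.0, 301):   # slowly ramp the external field up
    flipped = 0
    while True:  # synchronous relaxation until no spin is unstable
        local = neighbor_sum(s) + h + H
        unstable = (s < 0) & (local > 0)
        if not unstable.any():
            break
        s[unstable] = 1.0
        flipped += int(unstable.sum())
    if flipped:
        avalanche_sizes.append(flipped)
```

Since each spin flips exactly once on this branch, the avalanche sizes sum to the lattice volume; classifying each avalanche by whether it spans the lattice in 0, 1, 2, or 3 directions is the additional bookkeeping the paper performs.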
Abstract:
Spanning avalanches in the 3D Gaussian Random Field Ising Model (3D-GRFIM) with metastable dynamics at T=0 have been studied. Statistical analysis of the field values for which avalanches occur has enabled a Finite-Size Scaling (FSS) study of the avalanche density to be performed. Furthermore, a direct measurement of the geometrical properties of the avalanches has confirmed an earlier hypothesis that several types of spanning avalanches with two different fractal dimensions coexist at the critical point. We finally compare the phase diagram of the 3D-GRFIM with metastable dynamics with the same model in equilibrium at T=0.
Abstract:
In classical field theory, the ordinary potential V is the energy density of the state in which the field assumes the value φ. In quantum field theory, the effective potential is the expectation value of the energy density in the state for which the expectation value of the field is φ₀. As a result, if V has several local minima, only the absolute minimum corresponds to the true ground state of the theory. Perturbation theory remains to this day the main analytical tool in the study of quantum field theory. However, since perturbation theory is unable to uncover the whole rich structure of quantum field theory, it is desirable to have a method which, on one hand, goes beyond both perturbation theory and the classical approximation at the points where these fail, and which, at the same time, is sufficiently simple that analytical calculations can be performed within its framework. During the last decade, a nonperturbative variational method called the Gaussian effective potential has been discussed widely, together with several applications. This concept was described as a means of formalizing our intuitive understanding of zero-point fluctuation effects in quantum mechanics in a way that carries over directly to field theory.
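The quantum-mechanical version of the Gaussian effective potential mentioned at the end is easy to make concrete: for an anharmonic oscillator H = p²/2 + m²x²/2 + λx⁴, take a Gaussian wave packet centered at x₀ with variational width parameter Ω and minimize the energy expectation over Ω. The sketch below uses a simple grid search over Ω and illustrative values m² = 1, λ = 0.1:

```python
import numpy as np

def gaussian_effective_potential(x0, m2=1.0, lam=0.1):
    # Variational energy of a Gaussian packet centered at x0 with width
    # parameter Omega, for H = p^2/2 + m2*x^2/2 + lam*x^4.
    def energy(omega):
        var = 1.0 / (2.0 * omega)              # <(x - x0)^2> for the packet
        kinetic = omega / 4.0                  # <p^2>/2
        quadratic = 0.5 * m2 * (x0**2 + var)
        quartic = lam * (x0**4 + 6*x0**2*var + 3*var**2)  # Gaussian moments
        return kinetic + quadratic + quartic
    omegas = np.linspace(0.1, 10.0, 2000)      # crude grid minimization
    return min(energy(w) for w in omegas)

v0 = gaussian_effective_potential(0.0)  # includes zero-point energy, so v0 > 0
```

Unlike the classical potential (which vanishes at x₀ = 0 here), the Gaussian effective potential retains the zero-point fluctuation energy, which is exactly the intuition the abstract says carries over to field theory.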
Abstract:
In this thesis, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using likelihood depth, introduced by Mizera and Müller (2004). The methods are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a data set is computed as the minimum of the fraction of observations for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the fraction for which this derivative is non-positive. The parameter for which both fractions are equal therefore has maximal depth; it is initially chosen as the estimator, since likelihood depth is meant to measure how well a parameter fits the data. Asymptotically, the parameter with maximal depth is the one for which the probability that the derivative of the log-likelihood function is non-negative for an observation equals one half. If this does not hold for the true underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is known, and tests for various hypotheses can be formulated. The shift in depth, however, leads to poor power of the corresponding test for some hypotheses. Corrected tests are therefore introduced, together with conditions under which they are consistent. The thesis consists of two parts.
The first part presents the general theory of the estimators and tests and proves their consistency. The second part applies the theory to three distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, robust estimators and tests based on likelihood depth can be found for all three distributions. On uncontaminated data, existing standard methods are sometimes superior, but the advantage of the new methods shows on contaminated data and on data with outliers.
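Both the depth definition and the bias correction described above can be illustrated on a distribution not treated in the thesis, the exponential with rate λ: the score ∂/∂λ log f(x; λ) = 1/λ − x is non-negative exactly when x ≤ 1/λ, so the maximum-depth estimator sets 1/λ equal to the sample median, which is biased (the true median is ln 2/λ) and is corrected by the factor ln 2:

```python
import numpy as np

def likelihood_depth(lam, x):
    # Score of the Exponential(lam) log-likelihood: d/dlam log f = 1/lam - x.
    score = 1.0 / lam - x
    # Depth: minimum of the fractions with non-negative / non-positive score.
    return min(np.mean(score >= 0), np.mean(score <= 0))

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=10_000)   # true rate lam = 0.5

grid = np.linspace(0.1, 2.0, 1000)
depths = [likelihood_depth(l, x) for l in grid]
lam_depth = grid[int(np.argmax(depths))]      # maximizes depth: about 1/median(x)
lam_corrected = lam_depth * np.log(2)         # bias correction: ln 2 / median
```

The uncorrected estimator converges to 1/median = λ/ln 2, so multiplying by ln 2 restores consistency, mirroring the correction strategy the thesis develops for its three distributions.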
Abstract:
Ricinodendron heudelotii (Baill.) Pierre ex Pax. kernel (njansang) commercialization has been promoted by the World Agroforestry Centre (ICRAF) in project villages in Cameroon with the aim of alleviating poverty for small-scale farmers. We evaluated to what extent development interventions improved the financial situation of households by comparing project and control households. The financial importance of njansang to household livelihoods between 2005 and 2010 was investigated through semi-structured questionnaires with retrospective questions, focus group discussions, interviews, and wealth-ranking exercises. The importance of njansang increased strongly in the entire study region, and the increase was significantly larger in project households. Moreover, absolute income from njansang commercialization, as well as the relative share of njansang in total cash income, increased significantly more in project households (p < 0.05). Although the lower wealth class households could increase their income through njansang trade, the upper wealth class households benefited more from the projects' interventions. Group sales as conducted in project villages did not lead to significantly higher prices and should be reconsidered. Hence, promotion of njansang had a positive effect on total cash income and can still be improved. Actors involved in njansang commercialization are encouraged to adapt their strategies to ensure that the lower wealth class households also benefit from the conducted project interventions. In this respect, frequent project monitoring and impact analysis are important tools to accomplish this adaptation.
Abstract:
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.