137 results for Error-resilient Applications
Abstract:
By means of Malliavin Calculus we see that the classical Hull and White formula for option pricing can be extended to the case where the noise driving the volatility process is correlated with the noise driving the stock prices. This extension will allow us to construct option pricing approximation formulas. Numerical examples are presented.
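As context for the extension described above, here is a minimal Monte Carlo sketch of the classical (zero-correlation) Hull and White mixing result: the option price is the Black-Scholes price averaged over the root-mean-square future volatility. The variance dynamics and all parameter values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Standard Black-Scholes call price (sigma may be an array)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def hull_white_price(S, K, T, r, v0, kappa, theta, xi,
                     n_paths=20000, n_steps=200):
    """Hull-White mixing price under ZERO correlation: average the
    Black-Scholes price evaluated at the root-mean-square of the
    simulated future volatility (illustrative mean-reverting variance)."""
    rng = np.random.default_rng(0)
    dt = T / n_steps
    v = np.full(n_paths, v0)
    int_var = np.zeros(n_paths)
    for _ in range(n_steps):
        int_var += v * dt
        # mean-reverting square-root variance dynamics (illustrative)
        v = np.abs(v + kappa * (theta - v) * dt
                   + xi * np.sqrt(v * dt) * rng.standard_normal(n_paths))
    sigma_rms = np.sqrt(int_var / T)
    return black_scholes_call(S, K, T, r, sigma_rms).mean()

print(hull_white_price(S=100, K=100, T=1.0, r=0.02,
                       v0=0.04, kappa=1.5, theta=0.04, xi=0.3))
```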
Abstract:
We present a new general concentration-of-measure inequality and illustrate its power by applications in random combinatorics. The results find direct applications in some problems of learning theory.
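The abstract does not state the inequality itself; purely for orientation, the classical bounded-differences (McDiarmid) inequality, of which results of this type are generalizations, reads as follows.

```latex
% Classical bounded-differences (McDiarmid) inequality, stated only as
% context; it is NOT the new inequality of the paper.
If $X_1,\dots,X_n$ are independent and changing the $i$-th coordinate
changes $f$ by at most $c_i$, then
\[
  \Pr\bigl( \lvert f(X_1,\dots,X_n) - \mathbb{E} f \rvert \ge t \bigr)
  \;\le\; 2 \exp\!\left( - \frac{2 t^2}{\sum_{i=1}^{n} c_i^2} \right).
\]
```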
Abstract:
By means of classical Itô calculus we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to the correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
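Schematically (notation ours, not the paper's), the decomposition has the form:

```latex
% Schematic form of the decomposition described above (notation ours):
\[
  V_t \;=\; \mathrm{BS}\bigl(t, X_t, \bar{\sigma}_t\bigr)
      \;+\; \underbrace{(\text{correlation term})}_{\propto\,\rho}
      \;+\; \underbrace{(\text{vol-of-vol term})}_{\propto\,\nu^2},
\]
% where $\bar{\sigma}_t$ is the root-mean-square future average volatility,
% $\rho$ the stock-volatility correlation and $\nu$ the volatility of
% volatility; truncating the correction terms yields the first- and
% second-order approximations.
```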
Abstract:
In this paper, generalizing results in Alòs, León and Vives (2007b), we see that, under a jump-diffusion stochastic volatility model, jumps in the volatility have no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. Towards this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
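The paper's selection method is combinatorial; purely to illustrate the kind of object being optimized, here is a hypothetical sketch that scores candidate bandwidths by an $L_1$ distance between two half-sample estimates on a grid. This is not the authors' procedure (which compares estimates over a class of sets), and the helper names are ours.

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate at the points x_eval."""
    z = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def select_bandwidth(data, bandwidths, n_grid=512):
    """Hypothetical L1-flavoured selector: split the sample in two and score
    each h by the L1 distance between the two half-sample KDEs on a grid.
    Illustrative only; the paper's method is different and comes with
    explicit non-asymptotic guarantees."""
    rng = np.random.default_rng(1)
    perm = rng.permutation(len(data))
    a, b = data[perm[: len(data) // 2]], data[perm[len(data) // 2 :]]
    grid = np.linspace(data.min() - 1, data.max() + 1, n_grid)
    dx = grid[1] - grid[0]
    scores = [np.abs(kde(grid, a, h) - kde(grid, b, h)).sum() * dx
              for h in bandwidths]
    return bandwidths[int(np.argmin(scores))]

data = np.random.default_rng(0).standard_normal(500)
print(select_bandwidth(data, np.geomspace(0.05, 1.0, 20)))
```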
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
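To make the label-flipping remark concrete, here is a minimal sketch for one-dimensional threshold classifiers (an assumed toy setting, not an example from the paper). Both routines compute the same quantity: the class is closed under complement, so maximizing the half-to-half error gap equals one minus twice the minimal empirical error on the sample with the second-half labels flipped.

```python
import numpy as np

def errors(pred, y):
    return np.mean(pred != y)

def max_discrepancy(x, y):
    """Maximal discrepancy of 1-D threshold classifiers h_t(x) = 1{x > t}:
    the largest gap between the error on the first half of the sample and
    the error on the second half, over all thresholds and both label signs."""
    half = len(x) // 2
    x1, y1, x2, y2 = x[:half], y[:half], x[half:], y[half:]
    best = -np.inf
    for t in np.unique(x):
        for sign in (0, 1):
            p1 = (x1 > t).astype(int) ^ sign
            p2 = (x2 > t).astype(int) ^ sign
            best = max(best, errors(p1, y1) - errors(p2, y2))
    return best

def max_discrepancy_via_flipping(x, y):
    """Same quantity via empirical risk minimization with the second-half
    labels flipped, illustrating the equivalence noted in the abstract."""
    half = len(x) // 2
    y_flip = np.concatenate([y[:half], 1 - y[half:]])
    best_err = np.inf
    for t in np.unique(x):
        for sign in (0, 1):
            pred = (x > t).astype(int) ^ sign
            best_err = min(best_err, errors(pred, y_flip))
    return 1.0 - 2.0 * best_err

rng = np.random.default_rng(0)
x = rng.uniform(size=100)
y = (x > 0.5).astype(int) ^ (rng.uniform(size=100) < 0.2)
print(max_discrepancy(x, y), max_discrepancy_via_flipping(x, y))  # equal
```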
Abstract:
This paper proposes a new time-domain test of a process being I(d), $0 < d \leq 1$, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on $y_{t-1}$ in an OLS regression of $\Delta^d y_t$ on a simple transformation of the above-mentioned deterministic components and $y_{t-1}$, possibly augmented by a suitable number of lags of $\Delta^d y_t$ to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
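A minimal sketch of the statistic's main ingredients, fractional differencing and the t-ratio on $y_{t-1}$, follows. Critical values, the break-date search, and lag augmentation are omitted, and the level-shift break specification below is only one illustrative choice.

```python
import numpy as np

def frac_diff(y, d):
    """Fractional difference Delta^d y via the binomial expansion of (1-L)^d."""
    n = len(y)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return np.array([w[: t + 1] @ y[t::-1] for t in range(n)])

def sb_fdf_tratio(y, d, break_date):
    """t-ratio on y_{t-1} in an OLS regression of Delta^d y_t on a constant,
    a level-shift dummy at a KNOWN break date, and y_{t-1} (illustrative
    deterministic specification; no lag augmentation)."""
    dy = frac_diff(y, d)[1:]
    ylag = y[:-1]
    n = len(dy)
    shift = (np.arange(1, n + 1) > break_date).astype(float)  # level shift
    X = np.column_stack([np.ones(n), shift, ylag])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2] / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(300))   # a random walk, i.e. I(1)
print(sb_fdf_tratio(y, d=1.0, break_date=150))
```

With d = 1 the weights reduce to first differences, so the regression collapses to a Dickey-Fuller equation with a break dummy, consistent with the Perron (1989) special case noted above.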
Abstract:
One of the main problems when performing contour analysis is the large amount of data involved in describing the shape. To address this, parametrization is applied, which consists of obtaining representative data for a contour with as few coefficients as possible, from which the contour can later be reconstructed without obvious loss of information. For closed contours, the most studied parametrization is the discrete Fourier transform (DFT), applied to the sequence of values describing the behaviour of the x and y coordinates along all the points of the trace. In contrast, the DFT cannot be applied directly to open contours, since it requires the x and y values to be equal at the first and last points of the contour. This is because the DFT represents periodic signals without error: if the signals do not end at the same point, there is a discontinuity and oscillations appear in the reconstruction. The goal of this work is to parametrize open contours with the same efficiency achieved for closed contours. To this end, a program was designed that allows the DFT to be applied to open contours by modifying the x and y sequences. In addition, other applications were developed in Matlab to examine different aspects of the parametrization and the behaviour of the Elliptic Fourier Descriptors (EFD). The results show that the designed application parametrizes open contours with optimal compression, which will facilitate quantitative shape analysis in fields such as ecology, medicine, and geography, among others.
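The abstract does not spell out how the x and y sequences are modified; a common way to remove the endpoint discontinuity is a mirrored (even) extension, sketched below in Python rather than the Matlab used in the work. The mirroring choice is an assumption on our part.

```python
import numpy as np

def compress_open_contour(x, y, n_coeffs):
    """Parametrize an open contour with the DFT by first mirroring the
    x and y sequences (even extension), which makes the signal periodic
    and removes the endpoint discontinuity, then keeping only the lowest
    n_coeffs Fourier coefficients. (The mirroring is one common option;
    the thesis's own sequence modification may differ.)"""
    def round_trip(s):
        ext = np.concatenate([s, s[-2:0:-1]])      # mirrored extension
        F = np.fft.fft(ext)
        F[n_coeffs:-n_coeffs] = 0                  # discard high frequencies
        return np.fft.ifft(F).real[: len(s)]       # reconstruct, un-mirror
    return round_trip(x), round_trip(y)

t = np.linspace(0, np.pi, 200)
x, y = t, np.sin(3 * t) + 0.5 * t                  # an open trace
xr, yr = compress_open_contour(x, y, n_coeffs=12)
print(np.max(np.abs(x - xr)), np.max(np.abs(y - yr)))  # reconstruction error
```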
Abstract:
The proposal to work on this final project came after several discussions held with Dr. Elzbieta Malinowski Gadja, who in 2008 published the book entitled Advanced Data Warehouse Design: From Conventional to Spatial and Temporal Applications (Data-Centric Systems and Applications). The project was carried out under the technical supervision of Dr. Malinowski, and the direct beneficiary was the University of Costa Rica (UCR), where Dr. Malinowski is a professor at the Department of Computer Science and Informatics. The purpose of this project was twofold: first, to translate chapter III of said book with the intention of generating educational material for the use of the UCR and, second, to venture into the field of technical translation related to data warehousing. For the first component, the goal was to generate a final product that would eventually serve as an educational tool for the post-graduate courses of the UCR. For the second component, this project allowed me to acquire new skills and put into practice techniques that have helped me not only to perform better in my current job as an Assistant Translator at the Inter-American Development Bank (IDB), but also to use them in similar projects. The process was lengthy and required thorough research and constant communication with the author. The investigation focused on the search for terms and definitions to prepare the glossary, which was the basis for starting the translation project. The translation process itself was carried out in phases, so that comments and corrections by the author could be taken into account in subsequent stages. Later, based on the glossary and the translated text, the illustrations that had been created in the Visio software were translated. In addition to the technical revision by the author, professor Carme Mangiron was in charge of revising the non-technical text. The result was a high-quality document that is currently used as reference and study material by the Department of Computer Science and Informatics of the UCR.
Abstract:
Projective homography sits at the heart of many problems in image registration. In addition to many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions to assess the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
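A minimal sketch of the kind of frame-to-frame uncertainty integration mentioned above: first-order propagation of parameter covariance through the composition of two homographies, with Jacobians obtained numerically. This is illustrative scaffolding, not the bounds of either cited work, and all matrices below are made up.

```python
import numpy as np

def compose(h1, h2):
    """Compose two homographies given as 9-vectors (row-major 3x3),
    normalized so the last entry is 1."""
    H = (h2.reshape(3, 3) @ h1.reshape(3, 3)).ravel()
    return H / H[-1]

def propagate_cov(h1, C1, h2, C2, eps=1e-6):
    """First-order covariance of the composed homography:
    C = J1 C1 J1' + J2 C2 J2', with the Jacobians of compose()
    estimated by finite differences (assumes independent estimates)."""
    def jac(f, h):
        base = f(h)
        J = np.empty((9, 9))
        for i in range(9):
            hp = h.copy()
            hp[i] += eps
            J[:, i] = (f(hp) - base) / eps
        return J
    J1 = jac(lambda h: compose(h, h2), h1)
    J2 = jac(lambda h: compose(h1, h), h2)
    return J1 @ C1 @ J1.T + J2 @ C2 @ J2.T

h1 = np.array([1, 0.01, 2, 0, 1, 3, 1e-4, 0, 1.0])
h2 = np.array([1, -0.02, 5, 0.01, 1, -1, 0, 1e-4, 1.0])
C = 1e-6 * np.eye(9)
print(np.trace(propagate_cov(h1, C, h2, C)))  # total composed uncertainty
```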
Abstract:
In education sciences, recent decades have been marked by an interest in the ideas of Lev S. Vygotsky, and several educational applications have been proposed on the basis of those ideas. One of them is "Key to Learning". This article offers an overview of this educational programme, developed from the works and ideas of contemporary Russian authors. First, we develop some ideas around the notion of the zone of proximal development (ZPD). Then we present the theory of learning abilities. In this regard, the main objective of "Key to Learning" is to improve the cognitive, communicative, and directive learning abilities of children aged 3 to 7. For this purpose, the programme comprises 12 curricular units. To conclude, we emphasize the creation of structured zones of proximal development as part of a teaching and learning system that links activity, assistance, and agency.
Abstract:
Infinitely near base points and Enriques' unloading procedure are used to construct filtrations by complete ideals of $\mathbb{C}\{x, y\}$. This yields a procedure for obtaining generators of the integral closure of an ideal.
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes, caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes), and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges shows a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique is not computationally expensive, so it seems well suited for implementation in an operational environment.
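A minimal sketch of the geometry underlying steps 1 and 3: radar beam height under the standard effective-Earth-radius model (the usual 4/3 factor stands in here for the radiosonde-derived trajectories the paper actually computes) and a crude blockage fraction. All numbers are illustrative.

```python
import numpy as np

EARTH_RADIUS = 6_371_000.0  # m

def beam_height(r, elev_deg, ke=4.0 / 3.0, radar_alt=0.0):
    """Height (m) of the beam centre at range r (m) for antenna elevation
    elev_deg, using the effective-Earth-radius model. The paper replaces
    the standard ke = 4/3 with trajectories computed from recent radiosonde
    profiles; 4/3 is used here only as a stand-in."""
    re = ke * EARTH_RADIUS
    el = np.deg2rad(elev_deg)
    return np.sqrt(r**2 + re**2 + 2 * r * re * np.sin(el)) - re + radar_alt

def blockage_fraction(beam_h, terrain_h, beamwidth_m):
    """Crude geometric-optics blockage: fraction of a (vertically uniform)
    beam of width beamwidth_m whose lower edge falls below the terrain."""
    lower = beam_h - beamwidth_m / 2.0
    return np.clip((terrain_h - lower) / beamwidth_m, 0.0, 1.0)

r = np.arange(1, 120) * 1000.0            # ranges 1-119 km
h = beam_height(r, elev_deg=0.5)
print(blockage_fraction(h, terrain_h=800.0, beamwidth_m=1500.0)[:5])
```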