80 results for GENERALIZED CAUCHY FORMULA
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models. Therefore they are applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
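For illustration, a minimal Python sketch of the core idea that distances alone suffice for prediction: a local (Nadaraya-Watson-style) estimate built from a query point's distances to the training individuals. This is a simplified stand-in for the method, not the dbstats implementation; all names are ours.

```python
import numpy as np

def local_distance_predict(d_new, y, bandwidth):
    """Local estimate of the response for a new individual, using only
    its distances d_new[i] to the n training individuals -- no coordinate
    representation of the predictors is needed."""
    w = np.exp(-0.5 * (d_new / bandwidth) ** 2)  # Gaussian kernel weights
    return np.dot(w, y) / w.sum()

# The distances may come from Gower's coefficient for mixed data or an
# L2 distance between curves for functional regressors.
rng = np.random.default_rng(0)
d_new = rng.uniform(0.0, 2.0, size=50)
y = 1.0 + d_new + rng.normal(0.0, 0.1, size=50)
print(local_distance_predict(d_new, y, bandwidth=0.5))
```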
Abstract:
Motivated by the work of Mateu, Orobitg, Pérez and Verdera, who proved inequalities of the form $T_*f\lesssim M(Tf)$ or $T_*f\lesssim M^2(Tf)$ for certain singular integral operators $T$, such as the Hilbert or the Beurling transforms, we study the possibility of establishing this type of control for the Cauchy transform along a Lipschitz graph. We show that this is not possible in general, and we give a partial positive result when the graph is substituted by a Jordan curve.
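For readers outside the area, the objects in these inequalities are the standard ones: the maximal singular integral $T_*$ (with $K$ the kernel of $T$) and the Hardy-Littlewood maximal operator $M$, with the bounds understood pointwise and $M^2 = M \circ M$.

```latex
\[
T_*f(x) \;=\; \sup_{\varepsilon>0}\Bigl|\int_{|x-y|>\varepsilon} K(x,y)\,f(y)\,dy\Bigr|,
\qquad
Mf(x) \;=\; \sup_{r>0}\frac{1}{|B(x,r)|}\int_{B(x,r)}|f(y)|\,dy.
\]
```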
Abstract:
This contribution compares existing and newly developed techniques for geometrically representing mean-variance-skewness portfolio frontiers, based on the rather widely adopted methodology of polynomial goal programming (PGP) on the one hand and the more recent approach based on the shortage function on the other. Moreover, we explain the workings of these different methodologies in detail and provide graphical illustrations. Inspired by these illustrations, we prove a generalization of the well-known two-fund separation theorem from traditional mean-variance portfolio theory.
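For concreteness, a sketch of a typical two-stage PGP formulation for a mean-variance-skewness frontier; the symbols $\mu$, $\Sigma$, $r$, $\lambda_i$ and the variance normalization are our assumptions in the style of Lai-type PGP, not necessarily the paper's exact setup.

```latex
% Stage 1: aspiration levels, optimizing each moment separately over
% the simplex W = { w : w'1 = 1 } with variance normalized by w'Sw = 1:
%   M^* = max_{w in W} w'mu,   S^* = max_{w in W} E[(w'(r - mu))^3].
% Stage 2: minimize aggregate deviations from the aspiration levels:
\[
\min_{w,\; d_1, d_3 \ge 0}\; |d_1|^{\lambda_1} + |d_3|^{\lambda_3}
\quad\text{s.t.}\quad
w^\top\mu + d_1 = M^*,\quad
\mathbb{E}\bigl[(w^\top(r-\mu))^3\bigr] + d_3 = S^*,\quad
w^\top\Sigma w = 1,\; w^\top\mathbf{1} = 1,
\]
```

where the exponents $\lambda_1, \lambda_3$ encode the investor's relative preferences over the moments.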
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of $V_{da}$ because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as $\tilde V_{da} = \frac{(E_2 - E_1)\,\mu_{12}}{R_{da}} + \frac{(2E_3 - E_1 - E_2)\,2\mu_{13}\mu_{23}}{R_{da}^2}$, where $E_1$, $E_2$, and $E_3$ are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, $\mu_{ij}$ is the transition dipole moment between states $i$ and $j$, and $R_{da}$ is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
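A minimal Python sketch evaluating this two-term expression; the placement of the factor 2 follows the reconstruction above, and all inputs are assumed to be in consistent (e.g. atomic) units.

```python
def gmh_coupling(E1, E2, E3, mu12, mu13, mu23, R_da):
    """Donor-acceptor coupling from the two-term expression above:
    the two-state GMH term plus the superexchange (bridge) correction."""
    two_state = (E2 - E1) * mu12 / R_da
    bridge = (2.0 * E3 - E1 - E2) * 2.0 * mu13 * mu23 / R_da**2
    return two_state + bridge

# Hypothetical inputs, for illustration only (atomic units)
print(gmh_coupling(E1=0.00, E2=0.05, E3=0.20,
                   mu12=0.8, mu13=1.2, mu23=1.1, R_da=12.0))
```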
Abstract:
A new graph-based construction of generalized low-density codes (GLD-Tanner) with binary BCH constituents is described. The proposed family of GLD codes is optimal on block erasure channels and quasi-optimal on block fading channels. Optimality is considered in the outage probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bit nodes and subcode nodes via a unique random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bit nodes are divided into 4 classes, subcodes are divided into 2 classes, and finally both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can easily be extended to channels with 3 states or more.
Abstract:
In this paper we investigate the goodness of fit of Kirk's approximation formula for spread option prices in the correlated lognormal framework. To this end, we use Malliavin calculus techniques to find an expression for the short-time implied volatility skew of options with random strikes. In particular, we obtain that this skew is very pronounced for spread options with extremely high correlations, which cannot be reproduced by a constant-volatility approximation such as Kirk's formula. This fact agrees with the empirical evidence. Numerical examples are given.
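For reference, a minimal Python sketch of Kirk's approximation as it is usually stated, with forward prices and lognormal volatilities; parameter names are ours.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk's approximation for a European call on the spread F1 - F2
    with strike K: forwards F1, F2, vols sigma1, sigma2, correlation rho,
    maturity T, rate r."""
    N = NormalDist().cdf
    a = F2 / (F2 + K)                      # effective weight of the short leg
    sigma_k = sqrt(sigma1**2 - 2 * rho * sigma1 * sigma2 * a
                   + (sigma2 * a)**2)      # Kirk's effective volatility
    d1 = (log(F1 / (F2 + K)) + 0.5 * sigma_k**2 * T) / (sigma_k * sqrt(T))
    d2 = d1 - sigma_k * sqrt(T)
    return exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

# Example with high correlation, the regime the abstract singles out
print(kirk_spread_call(F1=105, F2=100, K=2, sigma1=0.30, sigma2=0.25,
                       rho=0.95, T=1.0, r=0.02))
```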
Abstract:
Asymptotic chi-squared test statistics for testing the equality of moment vectors are developed. The test statistics proposed are generalized Wald test statistics that specialize to different settings by inserting an appropriate asymptotic variance matrix of sample moments. Scaled test statistics are also considered for dealing with situations of non-iid sampling. The specialization will be carried out for testing the equality of multinomial populations, and the equality of variance and correlation matrices for both normal and non-normal data. When testing the equality of correlation matrices, a scaled version of the normal-theory chi-squared statistic is proven to be an asymptotically exact chi-squared statistic in the case of elliptical data.
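As a toy instance of the generalized Wald construction, a Python sketch testing the equality of two mean vectors by inserting a sample-based asymptotic variance matrix; the paper's scaled variants for non-iid sampling are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def wald_equal_means(x, y):
    """Wald-type chi-squared statistic for H0: E[x] = E[y], using the
    sample covariance of the difference of sample mean vectors as the
    inserted asymptotic variance matrix."""
    x, y = np.asarray(x), np.asarray(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    V = np.cov(x, rowvar=False) / len(x) + np.cov(y, rowvar=False) / len(y)
    W = float(diff @ np.linalg.solve(V, diff))
    return W, chi2.sf(W, df=diff.size)     # statistic, asymptotic p-value

rng = np.random.default_rng(0)
print(wald_equal_means(rng.normal(size=(200, 3)), rng.normal(size=(150, 3))))
```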
Abstract:
We see that the price of a European call option in a stochastic volatility framework can be decomposed into the sum of four terms, which identify the main features of the market that affect option prices: the expected future volatility, the correlation between the volatility and the noise driving the stock prices, the market price of volatility risk, and the difference of the expected future volatility at different times. We also study some applications of this decomposition.
Abstract:
By means of Malliavin calculus we see that the classical Hull and White formula for option pricing can be extended to the case where the noise driving the volatility process is correlated with the noise driving the stock prices. This extension will allow us to construct option pricing approximation formulas. Numerical examples are presented.
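A short Python sketch of the classical (zero-correlation) Hull and White formula that this abstract extends: the option price is the Black-Scholes price averaged over the root-mean-square future volatility of each simulated volatility path. The toy volatility dynamics below are an assumption for illustration.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, r, T, vol):
    """Black-Scholes European call price."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - vol * sqrt(T))

def hull_white_price(S, K, r, T, sigma_paths):
    """Zero-correlation Hull and White price: average Black-Scholes over
    the root-mean-square volatility of each path sigma_paths[i, :]."""
    v_bar = np.sqrt((sigma_paths**2).mean(axis=1))
    return float(np.mean([bs_call(S, K, r, T, v) for v in v_bar]))

# Toy lognormal volatility paths (1000 paths, 50 steps over [0, T])
rng = np.random.default_rng(0)
sig = 0.2 * np.exp(0.3 * sqrt(1 / 50) *
                   rng.standard_normal((1000, 50)).cumsum(axis=1))
print(hull_white_price(S=100, K=100, r=0.01, T=1.0, sigma_paths=sig))
```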
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents at minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. This heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results on comparative performance, followed by concluding remarks and ideas for future research on generalized assignment and related problems.
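To illustrate one ingredient, a minimal Python sketch of a GRASP construction phase for the problem; the local search, tabu search, and ejection-chain neighborhoods discussed above are omitted, and parameter names are ours.

```python
import random

def grasp_gap(cost, req, cap, iters=200, alpha=0.3, seed=0):
    """GRASP construction for the Generalized Assignment Problem:
    cost[i][j], req[i][j] = cost/resource of giving task j to agent i,
    cap[i] = resource of agent i.  Repeats a greedy randomized build and
    keeps the best feasible assignment found."""
    rng = random.Random(seed)
    m, n = len(cap), len(cost[0])
    best, best_cost = None, float("inf")
    for _ in range(iters):
        load, assign, total, feasible = [0] * m, [None] * n, 0, True
        for j in rng.sample(range(n), n):          # random task order
            cands = [(cost[i][j], i) for i in range(m)
                     if load[i] + req[i][j] <= cap[i]]
            if not cands:
                feasible = False
                break
            cands.sort()
            c_min, c_max = cands[0][0], cands[-1][0]
            # Restricted candidate list: near-greedy agents only
            rcl = [i for c, i in cands if c <= c_min + alpha * (c_max - c_min)]
            i = rng.choice(rcl)
            assign[j], load[i] = i, load[i] + req[i][j]
            total += cost[i][j]
        if feasible and total < best_cost:
            best, best_cost = assign[:], total
    return best, best_cost

cost = [[8, 6, 5], [9, 7, 4]]
req  = [[3, 2, 2], [2, 3, 2]]
cap  = [4, 5]
print(grasp_gap(cost, req, cap))
```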
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands, based on provincial data.
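To make the role of the selection matrices concrete, a hedged Python sketch of one plausible objective of this kind. Note that the paper's own method is non-iterative; the generic alternating-least-squares loop below is only a stand-in to make the objective tangible, and all notation is ours.

```python
import numpy as np

def gcancor_selection(Xs, masks, p, iters=200, seed=0):
    """Minimize  sum_k || S_k (G - X_k B_k) ||^2  over a common n x p
    configuration G and loadings B_k, where the selection matrix S_k
    keeps only the rows observed in block k (masks[k] is boolean and
    unobserved rows of Xs[k] are ignored)."""
    rng = np.random.default_rng(seed)
    n = masks[0].size
    G = rng.normal(size=(n, p))
    for _ in range(iters):
        Bs = [np.linalg.lstsq(X[m], G[m], rcond=None)[0]
              for X, m in zip(Xs, masks)]              # observed rows only
        num, den = np.zeros((n, p)), np.zeros(n)
        for X, B, m in zip(Xs, Bs, masks):
            num[m] += X[m] @ B
            den[m] += 1.0
        G = num / np.maximum(den, 1.0)[:, None]        # row-wise average
        Q, _ = np.linalg.qr(G)                         # fix scale of G
        G = Q * np.sqrt(n)
    return G, Bs

rng = np.random.default_rng(1)
Xs = [rng.normal(size=(10, 4)), rng.normal(size=(10, 3))]
masks = [np.ones(10, bool), np.array([True] * 7 + [False] * 3)]
G, Bs = gcancor_selection(Xs, masks, p=2)
print(G.shape, [B.shape for B in Bs])
```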
Abstract:
By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
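A minimal Python sketch of the simulation exercise in a toy setting: a Gamma model with two moment conditions (an assumption of ours) stands in for the paper's sequence of models, and the small-sample distribution of the GMM estimator is obtained by Monte Carlo.

```python
import numpy as np

def gmm_gamma(x):
    """Identity-weighted GMM estimate of theta for x ~ Gamma(theta, 1),
    matching E[x] = theta and E[x^2] = theta (theta + 1) by minimizing
    g(theta)' g(theta) over a crude grid."""
    thetas = np.linspace(0.05, 10.0, 2000)
    m1, m2 = x.mean(), (x**2).mean()
    obj = (m1 - thetas) ** 2 + (m2 - thetas * (thetas + 1)) ** 2
    return thetas[np.argmin(obj)]

# Small-sample distribution of the estimator by simulation
rng = np.random.default_rng(1)
theta0, n, reps = 2.0, 25, 2000
est = np.array([gmm_gamma(rng.gamma(theta0, 1.0, size=n)) for _ in range(reps)])
print("bias:", est.mean() - theta0, "std:", est.std())
```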
Abstract:
In this paper, generalizing results in Alòs, León and Vives (2007b), we see that jumps in the volatility under a jump-diffusion stochastic volatility model have no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. To this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application quantifies the errors made when the costs of inflation are computed with deterministic models. It turns out that this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.