962 results for Auctions Econometrics


Relevance:

10.00%

Publisher:

Abstract:

The Treatise on Quadrature of Fermat (c. 1659), besides containing the first known proof of the computation of the area under a higher parabola, $\int x^{m/n}\,dx$, or under a higher hyperbola, $\int x^{-m/n}\,dx$, with the appropriate limits of integration in each case, has a second part which was not understood by Fermat's contemporaries. This second part of the Treatise is obscure and difficult to read, and even the great Huygens described it as 'published with many mistakes and it is so obscure (with proofs redolent of error) that I have been unable to make any sense of it'. Far from the confusion that Huygens attributes to it, in this paper we try to prove that Fermat, in writing the Treatise, had a very clear goal in mind and managed to attain it by means of a simple and original method. Fermat reduced the quadrature of a great number of algebraic curves to the quadrature of known curves: the higher parabolas and hyperbolas of the first part of the paper. Others he reduced to the quadrature of the circle. We shall see how the clever use of two procedures, quite novel at the time, the change of variables and a particular case of the formula of integration by parts, provides Fermat with the necessary tools to square, very easily, curves as well known as the folium of Descartes, the cissoid of Diocles or the witch of Agnesi.
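For reference, in modern notation the quadratures established in the first part of the Treatise read (for positive integers $m, n$, and with the limits of integration appropriate to each case):

$$\int_0^a x^{m/n}\,dx \;=\; \frac{n}{m+n}\,a^{(m+n)/n}, \qquad \int_a^{\infty} x^{-m/n}\,dx \;=\; \frac{n}{m-n}\,a^{(n-m)/n} \quad (m>n).$$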

Relevance:

10.00%

Publisher:

Abstract:

By means of classical Itô calculus we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to the correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
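As background, a minimal Python sketch of the leading term of such a decomposition, the classical Black-Scholes call price evaluated at a given root-mean-square average volatility; the function and argument names are illustrative, not taken from the paper:

from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma_rms):
    """Black-Scholes call price with the volatility parameter set to the
    root-mean-square average volatility sigma_rms (assumed known here)."""
    d1 = (log(S / K) + (r + 0.5 * sigma_rms**2) * T) / (sigma_rms * sqrt(T))
    d2 = d1 - sigma_rms * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example: spot 100, strike 100, 1 year, 2% rate, 25% RMS volatility
print(black_scholes_call(100.0, 100.0, 1.0, 0.02, 0.25))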

Relevance:

10.00%

Publisher:

Abstract:

This work is part of a project studying the performance of model-based estimators in a small area context. We have chosen a simple statistical application in which we estimate the growth rate of employment for several regions of Spain. We compare three estimators: the direct one, based on straightforward results from the survey (which is unbiased), and a third one which is based on a statistical model and minimizes the mean square error.
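A hedged sketch of the kind of model-based alternative commonly used in small area estimation, a composite estimator that shrinks the direct survey estimate towards a synthetic one with an MSE-minimising weight; the model, weights and names are illustrative assumptions, not taken from the paper:

import numpy as np

def composite_estimate(direct, var_direct, synthetic, bias2_synthetic):
    """MSE-minimising convex combination of a direct (unbiased, noisy) estimate
    and a synthetic (pooled, possibly biased) estimate, assuming independence."""
    w = bias2_synthetic / (bias2_synthetic + var_direct)  # weight on the direct estimate
    return w * direct + (1.0 - w) * synthetic

# Example: one region's direct growth-rate estimate and its sampling variance,
# plus a synthetic estimate obtained by pooling over all regions.
print(composite_estimate(direct=0.031, var_direct=0.0004,
                         synthetic=0.025, bias2_synthetic=0.0001))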

Relevance:

10.00%

Publisher:

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
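A rough Python sketch of the selection procedure described above; the cover construction (fixed radius) and the penalty are simplified placeholders, not the paper's exact construction:

import numpy as np

def empirical_risk(f, X, y):
    return np.mean((f(X) - y) ** 2)

def select_rule(model_classes, X1, y1, X2, y2, penalty):
    """model_classes: list of lists of candidate rules f: X -> predictions.
    (X1, y1) builds the empirical covers; (X2, y2) scores the candidates."""
    best, best_score = None, np.inf
    for rules in model_classes:
        # 'Empirical cover': keep rules whose predictions on the first sample
        # differ enough, so that the cover size reflects class complexity.
        cover, preds = [], []
        for f in rules:
            p = f(X1)
            if all(np.mean((p - q) ** 2) > 1e-3 for q in preds):
                cover.append(f)
                preds.append(p)
        complexity = np.log(max(len(cover), 1)) / len(y2)
        # Candidate from this class: empirical risk minimiser over the cover.
        f_hat = min(cover, key=lambda f: empirical_risk(f, X2, y2))
        score = empirical_risk(f_hat, X2, y2) + penalty * complexity
        if score < best_score:
            best, best_score = f_hat, score
    return best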

Relevance:

10.00%

Publisher:

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least lack consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of the performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing rests on an indisputable model of customer behavior, but the experimental design and analysis of results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and we outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.

Relevance:

10.00%

Publisher:

Abstract:

We study the existence of moments and the tail behaviour of the densities of storage processes. We give sufficient conditions for existence and non-existence of moments using the integrability conditions of submultiplicative functions with respect to Lévy measures. Then, we study the asymptotic behaviour of the tails of these processes using the concave or convex envelope of the release rate function.
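As background (stated here for context, not as the paper's result), the integrability criterion alluded to is the standard one for Lévy processes: for a locally bounded, submultiplicative function $g$ and a Lévy process $X$ with Lévy measure $\nu$,

$$E\big[g(X_t)\big] < \infty \ \text{ for some (equivalently, all) } t>0 \quad \Longleftrightarrow \quad \int_{|x|>1} g(x)\,\nu(dx) < \infty.$$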

Relevance:

10.00%

Publisher:

Abstract:

The well-known Minkowski $?(x)$ function is presented as the asymptotic distribution function of an enumeration of the rationals in (0,1] based on their continued fraction representation. Besides, the singularity of $?(x)$ is clearly proved in two ways: by exhibiting a set of measure one on which $?'(x) = 0$; and again by actually finding a set of measure one which is mapped onto a set of measure zero and vice versa. These sets are described by means of metrical properties of different systems for real number representation.
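For reference, in terms of the continued fraction expansion $x=[0;a_1,a_2,a_3,\ldots]$, Minkowski's function can be written as

$$?(x) \;=\; 2\sum_{k\ge 1} \frac{(-1)^{k+1}}{2^{\,a_1+a_2+\cdots+a_k}}.$$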

Relevance:

10.00%

Publisher:

Abstract:

The dominant hypothesis in the literature that studies conflict is that poverty is the main cause of civil wars. We instead analyze the effect of institutions on civil war, controlling for income per capita. In our setup, institutions are endogenous and colonial origins affect civil wars through their legacy on institutions. Our results indicate that institutions, proxied by the protection of property rights, rule of law and the efficiency of the legal system, are a fundamental cause of civil war. In particular, an improvement in institutions from the median value in the sample to the 75th percentile is associated with a 38-percentage-point reduction in the incidence of civil wars. Moreover, once institutions are included as determinants of civil wars, income has no effect on civil war, either directly or indirectly.

Relevance:

10.00%

Publisher:

Abstract:

Subcompositional coherence is a fundamental property of Aitchison's approach to compositional data analysis, and is the principal justification for using ratios of components. We maintain, however, that lack of subcompositional coherence, that is, incoherence, can be measured in an attempt to evaluate whether any given technique is close enough, for all practical purposes, to being subcompositionally coherent. This opens up the field to alternative methods, which might be better suited to cope with problems such as data zeros and outliers, while being only slightly incoherent. The measure that we propose is based on the distance measure between components. We show that the two-part subcompositions, which appear to be the most sensitive to subcompositional incoherence, can be used to establish a distance matrix which can be directly compared with the pairwise distances in the full composition. The closeness of these two matrices can be quantified using a stress measure that is common in multidimensional scaling, providing a measure of subcompositional incoherence. The approach is illustrated using power-transformed correspondence analysis, which has already been shown to converge to log-ratio analysis as the power transform tends to zero.
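A rough Python sketch of this kind of incoherence measure; the distance function and the stress form are illustrative stand-ins, not the paper's exact definitions:

import numpy as np
from itertools import combinations

def component_distance(parts):
    # Placeholder technique: log-ratio standard deviation between two columns.
    return np.std(np.log(parts[:, 0] / parts[:, 1]))

def incoherence_stress(X):
    """X: n samples x D parts, rows strictly positive and summing to 1."""
    D = X.shape[1]
    d_full, d_sub = [], []
    for j, k in combinations(range(D), 2):
        # Distance between components j and k computed from the full composition
        d_full.append(component_distance(X[:, [j, k]]))
        # Same distance computed from the closed two-part subcomposition (j, k)
        sub = X[:, [j, k]] / X[:, [j, k]].sum(axis=1, keepdims=True)
        d_sub.append(component_distance(sub))
    d_full, d_sub = np.array(d_full), np.array(d_sub)
    return np.sqrt(np.sum((d_full - d_sub) ** 2) / np.sum(d_full ** 2))

With the pure log-ratio placeholder above the stress is exactly zero, since closing a two-part subcomposition does not change the ratio of its parts; substituting an alternative technique for component_distance quantifies how far that technique departs from coherence.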

Relevance:

10.00%

Publisher:

Abstract:

The analysis of the Spanish regions over the period 1980-1995 indicates that sectoral composition explains most of the evolution of employment and of the differences in productivity, average wages and the labour share of income. For value added (VAB), the regional component is more important than the sectoral one, although the latter is not negligible. Our analysis makes it possible to identify, over time, those regions that have grown more (or less) than expected given their sectoral composition. We identify a clear inverse relationship between the labour share of income in output and the purely regional component of employment growth. However, we observe no relationship between the unemployment rate and the distribution of output. This suggests that wages are not very elastic with respect to labour market conditions, but that employment growth is elastic with respect to the evolution of the region's capital income.
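The abstract does not name its decomposition, but the idea of growth "expected given sectoral composition" can be illustrated with a textbook shift-share style calculation (a hypothetical illustration, not necessarily the paper's method):

import numpy as np

def regional_component(region_shares, national_sector_growth, region_growth):
    """region_shares: initial sectoral employment shares of the region (sum to 1).
    national_sector_growth: national growth rate of each sector.
    region_growth: observed total employment growth of the region.
    Returns the purely regional component: observed minus 'expected' growth."""
    expected = float(np.dot(region_shares, national_sector_growth))
    return region_growth - expected  # > 0: the region grew more than its mix predicts

print(regional_component(np.array([0.2, 0.5, 0.3]),
                         np.array([0.01, 0.03, -0.02]),
                         region_growth=0.025))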

Relevance:

10.00%

Publisher:

Abstract:

We analyze the effect of multimarket contact on the pricing behavior of pharmaceutical firms, controlling for different levels of regulatory constraints using the IMS MIDAS database for the industry. Theoretically, under product differentiation, firms may find it profitable to allocate their market power among the markets in which they operate, specifically from more collusive to more competitive ones. We present evidence for nine OECD countries suggesting the existence of a multimarket effect for more market-friendly countries (U.S. and Canada) and less regulated ones (U.K., Germany, Netherlands), while the results are more unstable for highly regulated countries, with some countries being consistent with the theory (France) while others contradict it (Japan, Italy and Spain). A key result indicates that in the latter countries price constraints are so intense that there is little room for allocating market power. Thus equilibrium prices are expected in general to be lower in regulated countries.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, generalizing results in Alòs, León and Vives (2007b), we see that the presence of jumps in the volatility under a jump-diffusion stochastic volatility model has no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. Towards this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).

Relevance:

10.00%

Publisher:

Abstract:

A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
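The resulting computation can be sketched as follows (this is the form in which the scaled difference statistic is commonly stated; $d_0$ and $d_1$ are the degrees of freedom of ${\cal M}_0$ and ${\cal M}_1$, $\bar T_0$ and $\bar T_1$ the SB scaled goodness-of-fit statistics, and $\hat c_i = T_i/\bar T_i$ the estimated scaling factors):

$$\bar T_d \;=\; \frac{T_0 - T_1}{\hat c_d}, \qquad \hat c_d \;=\; \frac{d_0\,\hat c_0 - d_1\,\hat c_1}{d_0 - d_1}.$$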

Relevance:

10.00%

Publisher:

Abstract:

Donors often rely on local intermediaries to deliver benefits to target beneficiaries. Each selected recipient observes if the intermediary under-delivers to them, so they serve as natural monitors. However, they may withhold complaints when feeling unentitled or grateful to the intermediary for selecting them. Furthermore, the intermediary may distort selection (e.g. by picking richer recipients who feel less entitled) to reduce complaints. We design an experimental game representing the donor's problem. In one treatment, the intermediary selects recipients. In the other, selection is random, as by an uninformed donor. In our data, random selection dominates delegation of the selection task to the intermediary. Selection distortions are similar, but intermediaries embezzle more when they have selection power and (correctly) expect fewer complaints.

Relevance:

10.00%

Publisher:

Abstract:

We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed and the methodology is applied to data from the International Social Survey Program.
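A numerical illustration of the kind of block-format identity alluded to: the singular values of the block matrix [[A, B], [B, A]] are exactly those of A + B together with those of A - B, so analysing the block matrix displays sum and difference components. The precise block set-up used in the paper may differ; this Python sketch is only an illustration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 4))
B = rng.normal(size=(5, 4))

# Block matrix [[A, B], [B, A]]
M = np.block([[A, B], [B, A]])

sv_block = np.sort(np.linalg.svd(M, compute_uv=False))
sv_sum_diff = np.sort(np.concatenate([
    np.linalg.svd(A + B, compute_uv=False),   # 'sum' component
    np.linalg.svd(A - B, compute_uv=False),   # 'difference' component
]))

print(np.allclose(sv_block, sv_sum_diff))  # True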