Abstract:
Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, and fast, and it requires minimal computational cost.
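A minimal sketch of the calibration step, assuming synthetic quotes and the usual Heston parameter names ($v_0$, $\rho$, $\sigma$); the paper's second-order expansion is not reproduced here, so a well-known first-order short-maturity approximation (ATM level $\sqrt{v_0}$, skew $\rho\sigma/(4\sqrt{v_0})$) stands in for it:

import numpy as np
from scipy.optimize import least_squares

def short_maturity_iv(k, v0, rho, sigma):
    # First-order short-maturity Heston smile in log-moneyness k:
    # ATM level sqrt(v0) plus skew rho*sigma/(4*sqrt(v0)).
    # Stand-in for the paper's second-order expansion.
    return np.sqrt(v0) + rho * sigma * k / (4.0 * np.sqrt(v0))

def calibrate(k_quotes, iv_quotes):
    # Least-squares fit of (v0, rho, sigma) to short-maturity implied vols.
    def residuals(p):
        return short_maturity_iv(k_quotes, *p) - iv_quotes
    x0 = np.array([0.04, -0.5, 0.5])
    bounds = ([1e-6, -0.999, 1e-6], [2.0, 0.999, 5.0])
    return least_squares(residuals, x0, bounds=bounds).x

k = np.linspace(-0.1, 0.1, 9)                # log-moneyness grid
iv = short_maturity_iv(k, 0.04, -0.7, 0.6)   # synthetic "market" quotes
print(calibrate(k, iv))

At this first order, only $v_0$ and the product $\rho\sigma$ are identified; per the abstract, it is the second-order term together with the large-maturity term structure that allows the full parameter set to be calibrated.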
Abstract:
In this paper we offer the first large-sample evidence on the availability and usage of credit lines in U.S. public corporations and use it to re-examine the existing findings on corporate liquidity. We show that the availability of credit lines is widespread and that average undrawn credit is of the same order of magnitude as cash holdings. We test the trade-off theory of liquidity, according to which firms target an optimum level of liquidity, computed as the sum of cash and undrawn credit lines. We provide support for the existence of a liquidity target, but also show that the reasons why firms hold cash and credit lines are very different. While the precautionary motive explains cash holdings well, the optimum level of credit lines appears to be driven by the restrictions imposed by the credit line itself, in terms of stated purpose and covenants. In support of these findings, credit line drawdowns are associated with capital expenditures, acquisitions, and working capital.
Abstract:
The Person Trade-Off (PTO) is a methodology aimed at measuring the social value of health states. Other methodologies measure individual utility and are less appropriate for taking resource allocation decisions. However, few studies have been conducted to test the validity of the method. We present a pilot study with this objective. The study is based on the results of interviews with 30 undergraduate students in Economics. We judge the validity of PTO answers by their adequacy to three hypotheses of rationality. First, we show that, given certain rationality assumptions, PTO answers should be predictable from answers to Standard Gamble questions. This first hypothesis is not verified. The second hypothesis is that PTO answers should not vary with different frames of equivalent PTO questions. This second hypothesis is also not verified. Our third hypothesis is that PTO values should predict social preferences for allocating resources between patients. This hypothesis is verified. The evidence on the validity of the method is therefore mixed.
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness in small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990) to the context of multi-sample analysis of second-order moment structures. The results apply to a variety of estimation methods and general types of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
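For intuition, one familiar instance (assuming normality, which is not the paper's general setting): the asymptotic variance of the sample second-order moments then involves $\Sigma$ alone,

$$\sqrt{n}\,(\mathrm{vech}\,S-\mathrm{vech}\,\Sigma)\;\xrightarrow{d}\;N(0,\Gamma),\qquad \Gamma_{\mathrm{normal}}=2\,D^{+}(\Sigma\otimes\Sigma)D^{+\prime},$$

where $D^{+}$ is the Moore-Penrose inverse of the duplication matrix. A matrix of this type, computable from second-order moments only, plays the role of $\Omega$ above.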
Abstract:
Most central banks perceive a trade-off between stabilizing inflation and stabilizing the gap between output and desired output. However, the standard new Keynesian framework implies no such trade-off. In that framework, stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap. In this paper, we argue that this property of the new Keynesian framework, which we call the divine coincidence, is due to a special feature of the model: the absence of non-trivial real imperfections. We focus on one such real imperfection, namely, real wage rigidities. When the baseline new Keynesian model is extended to allow for real wage rigidities, the divine coincidence disappears, and central banks indeed face a trade-off between stabilizing inflation and stabilizing the welfare-relevant output gap. We show that not only does the extended model have more realistic normative implications, but it also has appealing positive properties. In particular, it provides a natural interpretation for the dynamic inflation-unemployment relation found in the data.
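In textbook notation (a schematic illustration, not the paper's derivation), the mechanism can be seen in the new Keynesian Phillips curve

$$\pi_t=\beta\,E_t\pi_{t+1}+\kappa x_t+u_t,$$

where $x_t$ is the welfare-relevant output gap. In the baseline model $u_t\equiv 0$, so $\pi_t=0$ for all $t$ implies $x_t=0$ (the divine coincidence); real wage rigidities generate an endogenous cost-push term $u_t\neq 0$ that breaks this equivalence.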
Abstract:
A `next' operator, $s$, is built on the set $R_1=(0,1]\setminus\{1-1/e\}$, defining a partial order that, with the help of the axiom of choice, can be extended to a total order on $R_1$. Moreover, the orbits $\{s^n(a)\}_n$ are all dense in $R_1$ and are constituted by elements of the same arithmetical character: if $a$ is an algebraic irrational of degree $k$, all the elements in $a$'s orbit are algebraic of degree $k$; if $a$ is transcendental, all are transcendental. Finally, the asymptotic distribution function of the sequence formed by the elements in any of the half-orbits is a continuous, strictly increasing, singular function very similar to the well-known Minkowski $?(x)$ function.
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
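As a minimal illustration of threshold/vertex optimality (a sketch, not the Chen-Yao model itself), assume an M/M/1 queue where arrivals are admitted only while fewer than K jobs are present: each threshold K yields a throughput-WIP pair, and a linear objective is maximized at one of these vertices.

import numpy as np

def mm1k_performance(lam, mu, K):
    # Throughput and WIP (mean number in system) of an M/M/1/K queue,
    # i.e. an M/M/1 queue with admission threshold K.
    rho = lam / mu
    n = np.arange(K + 1)
    p = rho ** n
    p = p / p.sum()                   # stationary distribution p_0..p_K
    throughput = lam * (1.0 - p[K])   # accepted arrival rate
    wip = float(n @ p)                # mean number in system
    return throughput, wip

lam, mu = 0.9, 1.0
r, c = 10.0, 1.0                      # reward per throughput unit, cost per WIP unit
pairs = {K: mm1k_performance(lam, mu, K) for K in range(1, 21)}
values = {K: r * th - c * w for K, (th, w) in pairs.items()}
best = max(values, key=values.get)
print(best, values[best])

The diminishing-returns property (d) appears here as concavity of throughput in WIP across the threshold vertices, which is exactly why a linear objective picks out a threshold policy.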
Abstract:
We argue that during the crystallization of common and civil law in the 19th century, the optimal degree of discretion in judicial rulemaking, albeit influenced by the comparative advantages of both legislative and judicial rulemaking, was mainly determined by the anti-market biases of the judiciary. The different degrees of judicial discretion adopted in both legal traditions were thus optimally adapted to different circumstances, mainly rooted in the unique, market-friendly, evolutionary transition enjoyed by English common law as opposed to the revolutionary environment of the civil law. On the Continent, constraining judicial discretion was essential for enforcing freedom of contract and establishing a market economy. The ongoing debasement of pro-market fundamentals in both branches of the Western legal system is explained from this perspective as a consequence of increased perceptions of exogenous risks and changes in the political system, which favored the adoption of sharing solutions and removed the cognitive advantage of parliaments and political leaders.
Abstract:
This paper tests the internal consistency of time trade-off utilities. We find significant violations of consistency in the direction predicted by loss aversion. The violations disappear for longer gauge durations. We show that loss aversion can also explain why, for short gauge durations, time trade-off utilities exceed standard gamble utilities. Our results suggest that time trade-off measurements that use relatively short gauge durations, like the widely used EuroQol algorithm (Dolan 1997), are affected by loss aversion and lead to utilities that are too high.
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power, and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states; next, these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
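A minimal numerical sketch under the common specification $U(q,t)=u(q)\,t^r$ (a functional-form assumption made for illustration; the paper's estimation design is richer): if a respondent is indifferent between $x$ years in full health and $t$ years in state $q$, then $u(q)\,t^r=x^r$, so $u(q)=(x/t)^r$.

def tto_utility(x, t, r=1.0):
    # Utility of state q implied by a time trade-off answer under
    # U(q, t) = u(q) * t**r: indifference gives u(q) = (x / t)**r.
    return (x / t) ** r

x, t = 6.0, 10.0                     # hypothetical answer: 6 y healthy ~ 10 y in q
print(tto_utility(x, t, r=1.0))      # 0.600  (linear QALY model)
print(tto_utility(x, t, r=0.65))     # ~0.717 (power QALY model, r = 0.65)

The gap between the two numbers is the bias the abstract refers to: utilities computed under the conventional linear model differ systematically from those implied by the better-fitting power model.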
Abstract:
The remarkable growth of the older population has moved long-term care to the front ranks of the social policy agenda. Understanding the factors that determine the type and amount of formal care is important for predicting use in the future and developing long-term policy. In this context, we jointly analyze the choice of care (formal, informal, both together, or none) as well as the number of hours of care received. Given that the number of hours of care is not independent of the type of care received, we estimate, for the first time in this area of research, a sample selection model with the particularity that the first step is a multinomial logit model. With regard to the debate about complementarity or substitutability between formal and informal care, our results indicate that formal care acts as a reinforcement of family care in certain cases: for very old care receivers, in those cases in which the individual has multiple disabilities, when many care hours are provided, and in the case of mental illness and/or dementia. There exist substantial differences between the long-term care provided to younger and to older dependent people, and dependent women are at risk of becoming more vulnerable to the shortage of informal caregivers in the future. Finally, we have documented that there are great disparities in the availability of public social care across regions.
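A schematic two-step sketch of such an estimator, with synthetic data, hypothetical variable names, and Lee's (1983) normality transform as the selectivity correction (one standard choice after a multinomial logit first step; the authors' exact specification may differ):

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500                                   # synthetic data, for illustration only
X_choice = sm.add_constant(rng.normal(size=(n, 2)))
X_hours = rng.normal(size=(n, 2))
care_type = rng.integers(0, 4, size=n)    # 0 none, 1 informal, 2 formal, 3 both
hours = rng.gamma(2.0, 10.0, size=n)

# Step 1: multinomial logit for the type of care received.
mnl = sm.MNLogit(care_type, X_choice).fit(disp=0)
probs = mnl.predict(X_choice)             # n x 4 choice probabilities

# Step 2: hours equation on the subsample receiving some care, with
# Lee's correction lambda = phi(Phi^{-1}(P_j)) / P_j for the chosen j.
p_chosen = probs[np.arange(n), care_type]
lam = norm.pdf(norm.ppf(p_chosen)) / p_chosen
sel = care_type > 0
X2 = sm.add_constant(np.column_stack([X_hours[sel], lam[sel]]))
print(sm.OLS(hours[sel], X2).fit().summary())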
Abstract:
This paper extends the theory of network competition between telecommunications operators by allowing receivers to derive a surplus from receiving calls (call externality) and to affect the volume of communications by hanging up (receiver sovereignty). We investigate the extent to which receiver charges can lead to an internalization of the call externality. When the receiver charge and the termination (access) charge are both regulated, there exists an efficient equilibrium. Efficiency requires a termination discount. When reception charges are market determined, it is optimal for each operator to set the prices for emission and reception at their off-net costs. For an appropriately chosen termination charge, the symmetric equilibrium is again efficient. Lastly, we show that network-based price discrimination creates strong incentives for connectivity breakdowns, even between equal networks.
Abstract:
In models where privately informed agents interact, agents may need to form higher-order expectations, i.e., expectations of other agents' expectations. This paper develops a tractable framework for solving and analyzing linear dynamic rational expectations models in which privately informed agents form higher-order expectations. The framework is used to demonstrate that the well-known problem of the infinite regress of expectations identified by Townsend (1983) can be approximated to arbitrary accuracy with a finite-dimensional representation under quite general conditions. The paper is constructive: it presents a fixed-point algorithm for finding an accurate solution and provides weak conditions that ensure that a fixed point exists. To aid intuition, Singleton's (1987) asset pricing model with disparately informed traders is used as a vehicle throughout the paper.
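Schematically (a generic sketch, not the paper's operator), the algorithm iterates the mapping from a conjectured finite-dimensional state-space representation to the implied one until the coefficients stop changing:

import numpy as np

def solve_fixed_point(T, X0, tol=1e-10, max_iter=10_000):
    # Iterate X <- T(X) until successive coefficient matrices converge.
    X = X0
    for _ in range(max_iter):
        X_new = T(X)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise RuntimeError("no fixed point within max_iter iterations")

# Toy contraction standing in for the model's mapping: T(X) = A X A' + Q,
# whose fixed point solves a discrete Lyapunov equation.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Q = np.eye(2)
print(solve_fixed_point(lambda X: A @ X @ A.T + Q, np.zeros((2, 2))))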