6 results for EMPIRICAL DISTRIBUTION FUNCTION

at University of Connecticut - USA


Relevance: 40.00%

Abstract:

In this paper, we extend the debate concerning Credit Default Swap (CDS) valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Second, because financial data exhibit heavy tails and therefore do not follow a normal distribution, modeling the data with a generalized linear model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support for the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through a copula function, which separates the behavior embedded in the marginal gamma distributions from the level of dependence captured in the copula itself. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
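
To make the copula step concrete, the sketch below separates marginal behavior from dependence in the way the abstract describes: gamma marginals are fitted to each covariate, the data are mapped to normal scores, and a rolling window tracks the time-varying correlation. The Gaussian-copula form, the window length, and the function name are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy import stats

def gamma_copula_correlation(data, window=60):
    """Fit gamma marginals, map to normal scores, and track a
    rolling (time-varying) correlation of the dependence structure.

    data: (T, k) array of positive, heavy-tailed covariates.
    Returns (T - window + 1, k, k) rolling correlation matrices.
    """
    T, k = data.shape
    z = np.empty((T, k))
    for j in range(k):
        # Marginal step: gamma fit by maximum likelihood (location fixed at 0).
        a, loc, scale = stats.gamma.fit(data[:, j], floc=0)
        u = stats.gamma.cdf(data[:, j], a, loc=loc, scale=scale)
        u = np.clip(u, 1e-6, 1 - 1e-6)   # keep the inverse-normal finite
        z[:, j] = stats.norm.ppf(u)      # normal scores (Gaussian-copula scale)

    # Dependence step: rolling correlation of the normal scores captures
    # the time variation a constant-correlation model would miss.
    return np.stack([np.corrcoef(z[t:t + window].T)
                     for t in range(T - window + 1)])
```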

Relevance: 30.00%

Abstract:

Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure one appropriately scales up the observed input vector (scales down the output vector) of the relevant super-efficient firm, thereby usually creating its inefficient surrogate. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. Because the directional distance function combines features of both an input-oriented and an output-oriented model, it generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Grifell-Tatjé (2002) is utilized in an illustrative empirical application.
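
A minimal sketch of the NL super-efficiency computation, assuming a variable-returns-to-scale technology and the direction g = (x_k, y_k); the function name and the use of scipy's linear-programming solver are illustrative choices, not the authors' code. Leaving beta free is what keeps the program feasible for super-efficient units, the property that motivates the NL measure.

```python
import numpy as np
from scipy.optimize import linprog

def nl_super_efficiency(X, Y, k):
    """Nerlove-Luenberger super-efficiency for unit k under VRS.

    X: (n, m) inputs; Y: (n, s) outputs; direction g = (x_k, y_k).
    Unit k is excluded from the reference set. Because beta is free,
    the program stays feasible, and beta < 0 flags a super-efficient unit.
    """
    n = X.shape[0]
    others = [j for j in range(n) if j != k]
    p = len(others)

    # Decision vector: [beta, lambda_1, ..., lambda_{n-1}]; maximize beta.
    c = np.zeros(p + 1)
    c[0] = -1.0

    # Inputs:  sum_j lam_j x_j + beta x_k <= x_k
    A_in = np.hstack([X[k].reshape(-1, 1), X[others].T])
    # Outputs: sum_j lam_j y_j - beta y_k >= y_k  (sign flipped for <=)
    A_out = np.hstack([Y[k].reshape(-1, 1), -Y[others].T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[k], -Y[k]])

    # VRS convexity constraint on the lambdas only.
    A_eq = np.concatenate([[0.0], np.ones(p)]).reshape(1, -1)

    bounds = [(None, None)] + [(0, None)] * p   # beta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return -res.fun if res.success else np.nan
```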

Relevance: 30.00%

Abstract:

Recent theoretical work has examined the spatial distribution of unemployment using the efficiency wage model as the mechanism by which unemployment arises in the urban economy. This paper extends the standard efficiency wage model to allow for behavioral substitution between leisure time at home and effort at work. In equilibrium, residing at a location with a long commute reduces the time available for leisure at home and therefore affects the trade-off between effort at work and the risk of unemployment. The model implies an empirical relationship between expected commutes and labor market outcomes, which is tested using the Public Use Microdata Sample of the 2000 U.S. Decennial Census. The empirical results suggest that efficiency wages operate primarily for blue-collar workers, i.e., workers who tend to be in occupations that face higher levels of supervision. For this subset of workers, longer commutes imply higher levels of unemployment and higher wages, both of which are consistent with shirking and leisure being substitutable.
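
As a rough illustration of the kind of test described, and not the paper's specification, the sketch below regresses wages and unemployment on commute time separately by occupation group. All column names (log_wage, commute_min, unemployed, blue_collar, educ, age) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def commute_regressions(df: pd.DataFrame):
    """Wage and unemployment regressions on expected commute time,
    run separately for blue- and white-collar workers."""
    results = {}
    for collar, g in df.groupby("blue_collar"):
        # Wage equation, estimated on the employed subsample.
        wage = smf.ols("log_wage ~ commute_min + educ + age + I(age**2)",
                       data=g[g["unemployed"] == 0]).fit(cov_type="HC1")
        # Unemployment risk as a binary outcome.
        unemp = smf.logit("unemployed ~ commute_min + educ + age",
                          data=g).fit(disp=0)
        results[collar] = (wage, unemp)
    # Under the shirking story, commute_min should enter positively in
    # both equations for the supervised (blue-collar) group.
    return results
```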

Relevance: 30.00%

Abstract:

A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints, which in turn alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
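
A hedged sketch of how the Banker (1993)-style comparison could be run once efficiency scores have been computed under the two specifications; the exponential-inefficiency assumption and the F(2n, 2n) reference distribution follow one variant of Banker's test, and the function name is illustrative.

```python
import numpy as np
from scipy import stats

def banker_aggregation_test(theta_agg, theta_dis):
    """Compare DEA inefficiency under aggregated vs. disaggregated
    inputs for the same n firms.

    theta_agg, theta_dis: input-oriented efficiency scores (<= 1).
    Assuming inefficiency (1 - theta) is exponentially distributed,
    the ratio of summed inefficiencies is referred to F(2n, 2n).
    """
    theta_agg = np.asarray(theta_agg)
    theta_dis = np.asarray(theta_dis)
    n = len(theta_agg)
    # Aggregation drops constraints, so measured inefficiency can only
    # grow; a statistic well above 1 rejects the proposed aggregation.
    stat = (1.0 - theta_agg).sum() / (1.0 - theta_dis).sum()
    p_value = stats.f.sf(stat, 2 * n, 2 * n)
    return stat, p_value
```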

Relevance: 30.00%

Abstract:

This paper examines the magnitude and timing of the effects of changes in the monetary base on aggregate and regional bank loans within the United States. For the regional analysis we consider both Bureau of Economic Analysis (BEA) regions and the individual states plus the District of Columbia. The empirical analysis provides some insight into the bank-lending channel of monetary policy. We find strong evidence of a 4-quarter lag in the effect of changes in the monetary base on bank loans, a finding that proves robust across all regions and nearly all states.
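
A minimal sketch of the kind of distributed-lag regression the abstract suggests, assuming quarterly log-level series; the variable names, lag length, and HAC standard errors are illustrative choices, not the paper's specification.

```python
import pandas as pd
import statsmodels.api as sm

def distributed_lag(loans, base, max_lag=8):
    """Regress quarterly loan growth on lagged monetary-base growth.

    loans, base: pandas Series of quarterly log levels on a shared index.
    """
    d_loans = loans.diff().rename("d_loans")
    d_base = base.diff()
    lags = pd.concat({f"d_base_lag{j}": d_base.shift(j)
                      for j in range(1, max_lag + 1)}, axis=1)
    data = pd.concat([d_loans, lags], axis=1).dropna()
    X = sm.add_constant(data.drop(columns="d_loans"))
    # HAC errors guard against serial correlation in quarterly data.
    res = sm.OLS(data["d_loans"], X).fit(cov_type="HAC",
                                         cov_kwds={"maxlags": 4})
    # The paper's finding corresponds to a significant coefficient
    # concentrated around the fourth lag across regions and states.
    return res
```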

Relevance: 30.00%

Abstract:

This paper proposes asymptotically optimal tests for unstable parameter processes under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution that is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
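
The paper's optimal statistic is not reproduced here; as a concrete baseline from the same class of alternatives, the sketch below computes a standard sup-Wald (Andrews-type) test for a one-time structural break. The trimming fraction and single-regressor setup are illustrative simplifications.

```python
import numpy as np

def sup_wald_break_test(y, x, trim=0.15):
    """Sup-Wald statistic for a one-time break in the slope of
    y_t = b * x_t + e_t (single regressor, no intercept, for brevity)."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)

    def ssr(yy, xx):
        # Least-squares residual sum of squares for one regime.
        b = (xx @ yy) / (xx @ xx)
        r = yy - b * xx
        return r @ r

    ssr_full = ssr(y, x)
    wald = []
    for t in range(lo, hi):
        # Chow-type statistic at candidate break date t.
        ssr_split = ssr(y[:t], x[:t]) + ssr(y[t:], x[t:])
        wald.append((n - 2) * (ssr_full - ssr_split) / ssr_split)
    return max(wald)
```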