973 results for MISSING VALUE ESTIMATION


Relevance: 20.00%

Abstract:

We analyze a standard environment of adverse selection in credit markets. In our environment, entrepreneurs who are privately informed about the quality of their projects need to borrow in order to invest. Conventional wisdom says that, in this class of economies, the competitive equilibrium is typically inefficient. We show that this conventional wisdom rests on one implicit assumption: entrepreneurs can only access monitored lending. If a new set of markets is added to provide entrepreneurs with additional funds, efficiency can be attained in equilibrium. An important characteristic of these additional markets is that lending in them must be unmonitored, in the sense that it does not condition total borrowing or investment by entrepreneurs. This makes it possible to attain efficiency by pooling all entrepreneurs in the new markets while separating them in the markets for monitored loans.

Relevance: 20.00%

Abstract:

The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite difference type approximation scheme is used on a coarse grid of low discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
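The regression step described in this abstract can be illustrated with a minimal sketch: value samples on a coarse set of scattered points are fitted by least squares, and the value at an off-grid point is read off the regression. Plain pseudo-random points stand in for a low discrepancy sequence, and a simple quadratic surface stands in for the true value function; all names are illustrative, not the paper's scheme.

```python
import numpy as np

# Illustrative stand-ins: pseudo-random points in place of a low discrepancy
# sequence, and a quadratic surface in place of the true value function.
rng = np.random.default_rng(0)
pts = rng.random((200, 2))                     # coarse "grid" in [0, 1]^2
v = pts[:, 0] * pts[:, 1] + pts[:, 0] ** 2     # sampled values on the grid

def features(x):
    """Quadratic polynomial basis for the regression step."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Fit the regression on the coarse grid ...
coef, *_ = np.linalg.lstsq(features(pts), v, rcond=None)

# ... then obtain the value at an intermediate (off-grid) point
query = np.array([[0.3, 0.7]])
v_hat = features(query) @ coef                 # close to 0.3*0.7 + 0.3**2 = 0.3
```

Because the stand-in surface is itself quadratic, the regression recovers it exactly; a real value function would only be approximated.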

Relevance: 20.00%

Abstract:

This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and the single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multi-factor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multi-factor models.
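A minimal sketch of the weighted-average form this abstract describes, with hypothetical names and a fixed shrinkage weight `delta` supplied by the caller; the paper's contribution is precisely the optimal choice of that weight, which is omitted here.

```python
import numpy as np

def shrunk_cov(returns, market, delta):
    """Weighted average of the sample covariance S and a single-index
    target F (market betas, with sample variances kept on the diagonal).
    `delta` is taken as given; the paper derives the optimal weight."""
    S = np.cov(returns, rowvar=False)          # sample covariance, N x N
    var_m = market.var(ddof=1)
    betas = np.array([np.cov(returns[:, i], market, ddof=1)[0, 1] / var_m
                      for i in range(returns.shape[1])])
    F = var_m * np.outer(betas, betas)         # single-index covariance
    np.fill_diagonal(F, np.diag(S))            # keep sample variances
    return delta * F + (1.0 - delta) * S
```

Since S and F share the same diagonal, the shrinkage moves only the off-diagonal covariances toward the single-index structure.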

Relevance: 20.00%

Abstract:

In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of Mean Squared Errors (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
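The mixed allocation rule described above can be sketched directly. The function name and the parameter `frac_prop` (the fraction allocated proportionally) are illustrative; the paper determines the optimal split rather than taking it as an input.

```python
import numpy as np

def mixed_allocation(n, pop_sizes, frac_prop):
    """Split a total sample of size n across areas: a fraction `frac_prop`
    proportionally to area population sizes, the rest evenly."""
    pop_sizes = np.asarray(pop_sizes, dtype=float)
    proportional = frac_prop * n * pop_sizes / pop_sizes.sum()
    even = (1.0 - frac_prop) * n / len(pop_sizes)
    return proportional + even
```

For example, `mixed_allocation(100, [10, 20, 70], 0.6)` returns per-area sample sizes that sum to 100; rounding to integer sizes is a separate step.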

Relevance: 20.00%

Abstract:

Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
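As a minimal illustration of the composite form described above, here is the textbook weight that minimizes the MSE of the combination when the two estimators' errors are uncorrelated; this is a simplification, not the area-specific or common-weight schemes the paper actually compares.

```python
def composite_weight(mse_direct, mse_indirect):
    """MSE-minimizing weight on the direct estimator, assuming the two
    estimators' errors are uncorrelated (a textbook simplification)."""
    return mse_indirect / (mse_direct + mse_indirect)

def composite(direct, indirect, mse_direct, mse_indirect):
    """Linear combination w*direct + (1 - w)*indirect."""
    w = composite_weight(mse_direct, mse_indirect)
    return w * direct + (1.0 - w) * indirect
```

The weight shifts toward whichever estimator has the smaller MSE: with equal MSEs the combination is a plain average, and as the direct estimator's MSE goes to zero the composite collapses onto it.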

Relevance: 20.00%

Abstract:

This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables while preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path of decomposing a matrix and displaying its rows and columns in biplots.
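The classical special case of this machinery, metric scaling of a distance matrix by double-centering and eigendecomposition, can be sketched as follows; WMDS adds the estimated variable weights on top of the same decomposition path. The function name is illustrative.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (unweighted) metric scaling: double-center the squared
    distances and read coordinates off the top-k eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0.0))
```

For exact Euclidean distances of a configuration of dimension at most k, this reproduces the pairwise distances exactly (up to rotation and reflection of the coordinates).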

Relevance: 20.00%

Abstract:

This paper presents a comparative analysis of linear and mixed models for short-term forecasting of a real data series with a high percentage of missing data. The data are the series of significant wave heights registered at regular periods of three hours by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models, and the mixed models have a linear component and a non-linear seasonal component. The non-linear component is estimated by a non-parametric regression of data versus time. Short-term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behavior.
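The gap-filling step can be illustrated with the simplest linear filler on the time index, a stand-in for the minimum mean-square-error linear predictor used in the paper; the function name is illustrative.

```python
import numpy as np

def interpolate_missing(y):
    """Fill NaN gaps in a regularly sampled series by linear interpolation
    between the nearest observed neighbors."""
    y = np.asarray(y, dtype=float).copy()
    t = np.arange(len(y))
    gaps = np.isnan(y)
    y[gaps] = np.interp(t[gaps], t[~gaps], y[~gaps])
    return y
```

Note that `np.interp` holds the boundary values flat, so leading or trailing NaNs are filled with the first or last observation rather than extrapolated.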

Relevance: 20.00%

Abstract:

This paper proposes an argument that explains incumbency advantage without resorting to the collective irresponsibility of legislatures. For that purpose, we exploit the informational value of incumbency: incumbency provides voters with information about governing politicians that is not available from challengers. Because there are many reasons for high reelection rates other than incumbency status, we propose a measure of incumbency advantage that improves on the use of pure reelection success. We also study the relationship between incumbency advantage and ideological and selection biases. An important implication of our analysis is that the literature linking incumbency and legislature irresponsibility most likely provides an overestimation of the latter.

Relevance: 20.00%

Abstract:

How much information does an auctioneer want bidders to have in a private value environment? We address this question using a novel approach to ordering information structures, based on the property that in private value settings more information leads to a more dispersed distribution of buyers' updated expected valuations. We define the class of precision criteria following this approach and different notions of dispersion, and relate them to existing criteria of informativeness. Using supermodular precision, we obtain three results: (1) a more precise information structure yields a more efficient allocation; (2) the auctioneer provides less than the efficient level of information, since more information increases bidder informational rents; (3) there is a strategic complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choice of precision increase with the number of bidders, and both converge as the number of bidders goes to infinity.

Relevance: 20.00%

Abstract:

We revisit the debt overhang question. We first use non-parametric techniques to isolate a panel of countries on the downward-sloping section of a debt Laffer curve. In particular, overhang countries are ones where a threshold level of debt is reached in sample, beyond which (initial) debt ends up lowering (subsequent) growth. On average, significantly negative coefficients appear when debt face value reaches 60 percent of GDP or 200 percent of exports, and when its present value reaches 40 percent of GDP or 140 percent of exports. Second, we depart from reduced-form growth regressions and perform direct tests of the theory on the thus selected sample of overhang countries. In the spirit of event studies, we ask whether, as the overhang level of debt is reached: (i) investment falls precipitously, as it should when it becomes optimal to default; (ii) economic policy deteriorates observably, as it should when debt contracts become unable to elicit effort on the part of the debtor; and (iii) the terms of borrowing worsen noticeably, as they should when it becomes optimal for creditors to pre-empt default and exact punitive interest rates. We find a systematic response of investment, particularly when property rights are weakly enforced, some worsening of the policy environment, and a fall in interest rates. This easing of borrowing conditions happens because lending by the private sector virtually disappears in overhang situations, and multilateral agencies step in with concessional rates. Thus, while debt relief is likely to improve economic policy (and especially investment) in overhang countries, it is doubtful that it would ease their terms of borrowing, or the burden of debt.

Relevance: 20.00%

Abstract:

A class of composite estimators of small area quantities that exploit spatial (distance-related) similarity is derived. It is based on a distribution-free model for the areas, but the estimators are aimed to have optimal design-based properties. Composition is applied also to estimate some of the global parameters on which the small area estimators depend. It is shown that the commonly adopted assumption of random effects is not necessary for exploiting the similarity of the districts (borrowing strength across the districts). The methods are applied in the estimation of the mean household sizes and the proportions of single-member households in the counties (comarcas) of Catalonia. The simplest version of the estimators is more efficient than the established alternatives, even though the extent of spatial similarity is quite modest.

Relevance: 20.00%

Abstract:

We set up a dynamic model of firm investment in which liquidity constraints enter explicitly into the firm's maximization problem. The optimal policy rules are incorporated into a maximum likelihood procedure which estimates the structural parameters of the model. Investment is positively related to the firm's internal financial position when the firm is relatively poor. This relationship disappears for wealthy firms, which can reach their desired level of investment. Borrowing is an increasing function of financial position for poor firms. This relationship is reversed as a firm's financial position improves, and large firms hold little debt. Liquidity-constrained firms may have unused credit lines and the capacity to invest further if they desire. However, the fear that liquidity constraints will become binding in the future induces them to invest only when internal resources increase. We estimate the structural parameters of the model and use them to quantify the importance of liquidity constraints on firms' investment. We find that liquidity constraints matter significantly for the investment decisions of firms. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, the average capital stock is almost 35% higher over a period of 20 years. Transitory shocks to internal funds have a sustained effect on the capital stock. This effect lasts for several periods and is more persistent for small firms than for large firms. A 10% negative shock to firm fundamentals reduces the capital stock of firms which face liquidity constraints by almost 8% over a period, as opposed to only 3.5% for firms which do not face these constraints.

Relevance: 20.00%

Abstract:

This paper examines the value of connections between German industry and the Nazi movement in early 1933. Drawing on previously unused contemporary sources about management and supervisory board composition and stock returns, we find that one out of seven firms, and a large proportion of the biggest companies, had substantive links with the National Socialist German Workers' Party. Firms supporting the Nazi movement experienced unusually high returns, outperforming unconnected ones by 5% to 8% between January and March 1933. These results are not driven by sectoral composition and are robust to alternative estimators and definitions of affiliation.