85 results for Quantitative estimates


Relevance:

20.00%

Abstract:

A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way the quality of fit of the biplot is measured, since it is well known that regular (i.e., crisp) multiple correspondence analysis seriously underestimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.
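
A minimal sketch of the fuzzy-coding step described above (the triangular membership functions and the three-category coding with hinge points are illustrative assumptions, not details taken from the paper), in Python:

import numpy as np

def fuzzy_code(x, lo, mid, hi):
    """Code a continuous value into three fuzzy categories
    ('low', 'medium', 'high') with triangular memberships
    hinged at lo, mid, hi; the memberships sum to 1."""
    x = np.clip(x, lo, hi)
    low = np.clip((mid - x) / (mid - lo), 0.0, 1.0)
    high = np.clip((x - mid) / (hi - mid), 0.0, 1.0)
    return np.array([low, 1.0 - low - high, high])

def defuzzify(memberships, lo, mid, hi):
    """Invert the coding: the hinge points weighted by the
    memberships recover the original value exactly."""
    return memberships @ np.array([lo, mid, hi])

m = fuzzy_code(17.3, lo=0.0, mid=20.0, hi=40.0)
print(m, defuzzify(m, 0.0, 20.0, 40.0))  # memberships, then 17.3 again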

Relevance:

20.00%

Abstract:

A tool for user choice of the local bandwidth function for a kernel density estimate is developed using KDE, a graphical object-oriented package for interactive kernel density estimation written in LISP-STAT. The bandwidth function is a cubic spline, whose knots are manipulated by the user in one window, while the resulting estimate appears in another window. A real data illustration of this method raises concerns, because an extremely large family of estimates is available.
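
A minimal non-interactive sketch of the estimator being manipulated (the Gaussian kernel, the knot positions and the log-bandwidth values below are illustrative assumptions; the KDE package itself is the LISP-STAT tool described above):

import numpy as np
from scipy.interpolate import CubicSpline

def local_kde(x_grid, data, knots, log_bw_at_knots):
    """Kernel density estimate with a local bandwidth function:
    a cubic spline through user-chosen knots gives log h(x), so
    each data point carries its own bandwidth."""
    h = np.exp(CubicSpline(knots, log_bw_at_knots)(data))
    u = (x_grid[:, None] - data[None, :]) / h[None, :]
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (kernel / h[None, :]).mean(axis=1)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 0.3, 100)])
grid = np.linspace(-4, 7, 300)
# wider bandwidth over the diffuse left mode, narrower on the spike
dens = local_kde(grid, data, knots=[-4.0, 0.0, 5.0, 7.0],
                 log_bw_at_knots=np.log([0.5, 0.5, 0.12, 0.12]))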

Relevance:

20.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
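
In symbols (a schematic restatement with our own notation, not the paper's): if $\hat{f}_k$ is the candidate rule selected from the empirical cover of ${\cal F}_k$, $\hat{L}_n(\cdot)$ is the empirical risk on the second part of the data, and $\hat{C}_n({\cal F}_k)$ is the complexity assessed from the size of the empirical cover, the final estimate is
$$ \hat{f} = \hat{f}_{\hat{k}}, \qquad \hat{k} = \arg\min_k \left\{ \hat{L}_n(\hat{f}_k) + \hat{C}_n({\cal F}_k) \right\}. $$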

Relevance:

20.00%

Abstract:

A number of existing studies have concluded that risk sharing allocations supported by competitive, incomplete markets equilibria are quantitatively close to first-best. Equilibrium asset prices in these models have been difficult to distinguish from those associated with a complete markets model, the counterfactual features of which have been widely documented. This paper asks if life cycle considerations, in conjunction with persistent idiosyncratic shocks which become more volatile during aggregate downturns, can reconcile the quantitative properties of the competitive asset pricing framework with those of observed asset returns. We begin by arguing that data from the Panel Study of Income Dynamics support the plausibility of such a shock process. Our estimates suggest a high degree of persistence as well as a substantial increase in idiosyncratic conditional volatility coincident with periods of low growth in U.S. GNP. When these factors are incorporated in a stationary overlapping generations framework, the implications for the returns on risky assets are substantial. Plausible parameterizations of our economy are able to generate Sharpe ratios which match those observed in U.S. data. Our economy cannot, however, account for the level of variability of stock returns, owing in large part to the specification of its production technology.
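
For reference, the Sharpe ratio the model is asked to match is simply the mean excess return of the risky asset over the standard deviation of that excess return; a minimal computation on synthetic returns (the numbers are illustrative, not the paper's model output):

import numpy as np

def sharpe_ratio(risky_returns, risk_free_returns):
    """Unconditional Sharpe ratio: mean excess return divided by
    the standard deviation of the excess return."""
    excess = np.asarray(risky_returns) - np.asarray(risk_free_returns)
    return excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(1)
stocks = rng.normal(0.08, 0.17, 60)   # hypothetical annual returns
bills = np.full(60, 0.01)
print(sharpe_ratio(stocks, bills))    # around 0.4 in expectation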

Relevance:

20.00%

Abstract:

As the prevalence of smoking has decreased to below 20%, health practitioners' interest has shifted towards the prevalence of obesity, and reducing it is one of the major health challenges in decades to come. In this paper we study the impact that the final product of the anti-smoking campaign, that is, smokers quitting the habit, had on average weight in the population. To this end, we use data from the Behavioral Risk Factor Surveillance System, a large series of independent representative cross-sectional surveys. We construct a synthetic panel that allows us to control for unobserved heterogeneity, and we exploit exogenous changes in taxes and regulations to instrument the endogenous decision to give up the habit of smoking. Our estimates are very close to estimates issued in the 1990s by the US Department of Health, and indicate that a 10% decrease in the incidence of smoking leads to an average weight increase of 2.2 to 3 pounds, depending on the choice of specification. In addition, we find evidence that the effect overshoots in the short run, although a significant part remains even after two years. However, when we split the sample between men and women, we only find a significant effect for men. Finally, the implicit elasticity of quitting smoking to the probability of becoming obese is calculated at 0.58. This implies that the net benefit from reducing the incidence of smoking by 1% is positive even though the cost to society is $0.6 billion.
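
A minimal two-stage least squares sketch of this identification strategy (the variable names, the single-instrument setup and the synthetic data are illustrative; the paper's synthetic-panel construction and control variables are omitted):

import numpy as np

def tsls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one
    instrument, both stages with an intercept.
    Stage 1: regress quitting (x) on the instrument (tax changes, z).
    Stage 2: regress weight change (y) on the fitted values."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]  # [intercept, effect]

# synthetic illustration: taxes shift quitting, quitting raises weight
rng = np.random.default_rng(2)
z = rng.normal(size=2000)                 # exogenous tax variation
u = rng.normal(size=2000)                 # unobserved heterogeneity
x = 0.5 * z + u + rng.normal(size=2000)   # endogenous quitting decision
y = 2.5 * x - u + rng.normal(size=2000)   # weight change
print(tsls(y, x, z))                      # effect estimate near 2.5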

Relevance:

20.00%

Abstract:

The World Health Organization estimates that 300 million clinical cases of malaria occur annually, and it has observed that malaria incidence increased during the 1980s and part of the 1990s. In this paper we explore the influence of refugees from civil wars on the incidence of malaria in the refugee-receiving countries. Using civil wars as an instrumental variable, we show that for each 1,000 refugees there are between 2,000 and 2,700 cases of malaria in the refugee-receiving country. On average, 13% of the cases of malaria reported by the WHO are caused by forced migration as a consequence of civil wars.

Relevance:

20.00%

Abstract:

We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles, which are applied to diffusion processes. A comparison with kernel density estimates is made.
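
For the simplest Wiener functional, $F = W_T$, the Malliavin integration-by-parts formula gives the density representation $p(x) = E[\mathbf{1}_{\{W_T > x\}} W_T / T]$; the sketch below (this toy case is our illustration, not an example from the paper) compares that representation with a plain kernel density estimate:

import numpy as np

T, n = 1.0, 100_000
rng = np.random.default_rng(3)
W_T = rng.normal(0.0, np.sqrt(T), n)   # terminal Brownian motion values
x = 0.5

# Malliavin-weight estimator: p(x) = E[ 1{W_T > x} * W_T / T ]
p_malliavin = np.mean((W_T > x) * W_T / T)

# plain Gaussian kernel density estimate at the same point
h = 1.06 * W_T.std() * n ** (-0.2)     # Silverman's rule of thumb
p_kde = np.mean(np.exp(-0.5 * ((x - W_T) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

exact = np.exp(-x**2 / (2 * T)) / np.sqrt(2 * np.pi * T)
print(p_malliavin, p_kde, exact)       # both close to the exact density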

Relevance:

20.00%

Abstract:

Four general equilibrium search models are compared quantitatively. The baseline framework is a calibrated macroeconomic model of the US economy designed for a welfare analysis of unemployment insurance policy. The other models make three simple and natural specification changes, regarding tax incidence, monopsony power in wage determination, and the relevant threat point. These specification changes have a major impact on the equilibrium and on the welfare implications of unemployment insurance, partly because search externalities magnify the effects of wage changes. The optimal level of unemployment insurance depends strongly on whether raising benefits has a larger impact on search effort or on hiring expenditure.

Relevance:

20.00%

Abstract:

This paper investigates the comparative performance of five small area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and another that uses area-specific estimates of variance and squared bias. It is found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
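
A minimal sketch of the area-specific composite weight (assuming, as is standard, an unbiased direct estimator independent of the indirect one; the notation is ours, not the paper's):

def composite(direct, indirect, var_direct, var_indirect, sq_bias_indirect):
    """Composite small area estimate w*direct + (1-w)*indirect, with the
    area-specific weight that minimizes mean squared error when the
    direct estimator is unbiased and independent of the indirect one."""
    mse_indirect = var_indirect + sq_bias_indirect
    w = mse_indirect / (var_direct + mse_indirect)
    return w * direct + (1 - w) * indirect

# a noisy direct estimate is shrunk toward the stabler indirect one
print(composite(direct=12.0, indirect=9.0,
                var_direct=4.0, var_indirect=0.5, sq_bias_indirect=1.0))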

Relevance:

20.00%

Abstract:

In the absence of comparable macroeconomic indicators for most of the Latin American economies beyond the 1930s, this paper presents an estimate of the apparent consumption per head of coal and petroleum for 25 countries of Latin America and the Caribbean for the year 1925, doubling the number of countries for which energy consumption estimates were previously available. Energy consumption is then used as an indicator of economic modernisation. As a result, the paper provides the basis for a quantitative comparative analysis of modernisation performance beyond the few countries for which historical national accounts are available in Latin America.
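
The measure rests on the usual apparent-consumption identity (stated here in its standard form; the paper's exact notation is not reproduced), computed separately for coal and petroleum and typically converted to a common energy unit before aggregation:
$$ \text{apparent consumption per head} = \frac{\text{production} + \text{imports} - \text{exports}}{\text{population}}. $$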

Relevance:

20.00%

Abstract:

A number of works in health economics require patient cost estimates as a basic information input. However, the accuracy of cost estimates generally remains unspecified. We propose to investigate how the allocation of indirect costs or overheads can affect the estimation of patient costs, in order to allow for improvements in the analysis of patient cost estimates. Instead of focusing on the costing method, this paper highlights the changes in explained variance observed when a given methodology is chosen. We compare three overhead allocation methods for a specific Spanish population adjusted using Clinical Risk Groups (CRG), and we obtain different series of full-cost group estimates. As a result, there are significant gains in the proportion of variance explained, depending upon the methodology used. Furthermore, we find that the global amount of variation explained by risk adjustment models depends mainly on direct costs and is independent of the level of aggregation used in the classification system.
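
A schematic sketch of the comparison (the three allocation rules, the group structure and the toy data are our assumptions; the paper's CRG adjustment is reduced here to a generic group label):

import numpy as np

rng = np.random.default_rng(4)
n, overhead_pool = 500, 50_000.0
group = rng.integers(0, 5, n)                 # stand-in for CRG groups
direct = rng.gamma(2.0, 400.0, n) * (1 + group)

allocations = {
    "equal per patient":      np.full(n, overhead_pool / n),
    "proportional to direct": overhead_pool * direct / direct.sum(),
    "random (worst case)":    overhead_pool * rng.dirichlet(np.ones(n)),
}

for name, overhead in allocations.items():
    full_cost = direct + overhead
    group_mean = np.array([full_cost[group == g].mean() for g in range(5)])
    pred = group_mean[group]                  # risk-adjustment prediction
    r2 = 1 - np.sum((full_cost - pred)**2) / np.sum((full_cost - full_cost.mean())**2)
    print(f"{name:24s} R^2 = {r2:.3f}")       # variance explained shifts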

Relevance:

20.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
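
A brute-force sketch of the maximal discrepancy penalty for a finite class of threshold classifiers, illustrating the label-flipping equivalence noted above (the hypothesis class and the data are illustrative assumptions):

import numpy as np

def max_discrepancy(X, y, thresholds):
    """Maximal discrepancy of the class {sign(x - t)} on the sample:
    the largest gap between the 0-1 error on the first half and on the
    second half. Flipping the second half's labels turns this search
    into empirical risk minimization over the full sample."""
    half = len(y) // 2
    best = -np.inf
    for t in thresholds:
        pred = np.where(X > t, 1, -1)
        err1 = np.mean(pred[:half] != y[:half])
        err2 = np.mean(pred[half:] != y[half:])
        best = max(best, err1 - err2, err2 - err1)
    return best

rng = np.random.default_rng(5)
X = rng.normal(size=200)
y = np.where(X + 0.5 * rng.normal(size=200) > 0, 1, -1)
print(max_discrepancy(X, y, thresholds=np.linspace(-3, 3, 121)))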

Relevance:

20.00%

Abstract:

Within the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), a limited area model (LAM) intercomparison is performed for intense events that caused severe damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors that help to produce good forecasts of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
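
For concreteness, such contingency-table skill scores come from the 2x2 table of forecast versus observed exceedances of a rain threshold; a minimal sketch computing three common ones, POD, FAR and ETS (the score selection, the threshold and the synthetic data are ours, as the abstract does not list the exact scores used):

import numpy as np

def skill_scores(forecast_mm, observed_mm, threshold):
    """POD, FAR and ETS from the 2x2 contingency table of
    forecast vs. observed exceedances of a rain threshold."""
    f = forecast_mm >= threshold
    o = observed_mm >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    n = len(f)
    hits_random = (hits + misses) * (hits + false_alarms) / n
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets

rng = np.random.default_rng(6)
obs = rng.gamma(0.6, 20.0, 1000)                  # synthetic 24-h totals
fcst = obs * rng.lognormal(0.0, 0.5, 1000)        # imperfect forecast
print(skill_scores(fcst, obs, threshold=50.0))    # e.g. 50 mm / 24 h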

Relevance:

20.00%

Abstract:

In this paper we test the hysteresis versus the natural rate hypothesis for the unemployment rates of the EU new member states, using unit root tests that account for the presence of level shifts. As a by-product, the analysis proceeds to the estimation of a NAIRU measure from a univariate point of view. The paper also focuses on the precision of these NAIRU estimates, studying the two sources of inaccuracy that derive from the estimation of the break points and of the autoregressive parameters. The results point to the existence of up to four structural breaks in the transition countries' NAIRUs, which can be associated with institutional changes implementing market-oriented reforms. Moreover, the degree of persistence in unemployment varies dramatically among the individual countries, depending on the stage reached in the transition process.
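
As a rough illustration of the testing step, statsmodels ships a Zivot-Andrews unit root test that allows a single endogenous level shift (the paper's framework allows up to four breaks, which this sketch does not reproduce; the unemployment-like series below is synthetic):

import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

# synthetic rate: stationary AR(1) around a one-time level shift
rng = np.random.default_rng(7)
n, shift_at = 200, 100
level = np.where(np.arange(n) < shift_at, 6.0, 10.0)
x = np.empty(n)
x[0] = level[0]
for t in range(1, n):
    x[t] = level[t] + 0.7 * (x[t-1] - level[t-1]) + rng.normal(0, 0.3)

stat, pvalue, crit, baselag, bpidx = zivot_andrews(x, regression='c')
print(f"ZA stat={stat:.2f}, p={pvalue:.3f}, estimated break index={bpidx}")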

Relevance:

20.00%

Abstract:

Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors exhibit several problems, e.g., nonlinearities and slow time response, which can be partially solved by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
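
A minimal sketch of the inverse-dynamic-model idea: learn a nonlinear map from a window of lagged sensor readings back to the current concentration (the first-order sensor dynamics, the MLP choice and all constants are illustrative assumptions, not the paper's setup):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
# smooth random concentration profile
c = np.convolve(rng.uniform(0, 1, 3000), np.ones(20) / 20, mode="same")

# simulate a slow, nonlinear sensor: first-order lag plus square-root law
s = np.empty_like(c)
s[0] = np.sqrt(c[0])
for t in range(1, len(c)):
    s[t] = 0.95 * s[t-1] + 0.05 * np.sqrt(c[t])

L = 10                                    # lag window length
X = np.column_stack([s[L - k: len(s) - k] for k in range(L)])
y = c[L:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:2000], y[:2000])
print("held-out R^2:", model.score(X[2000:], y[2000:]))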