46 results for misspecification


Relevance: 20.00%

Abstract:

We examine the effects of extracting monetary policy disturbances with semi-structural and structural VARs, using data generated by a limited participation model under partially accommodative and feedback rules. We find that, in general, misspecification is substantial: short-run coefficients often have the wrong signs, and impulse responses and variance decompositions give misleading representations of the dynamics. Explanations for the results and suggestions for macroeconomic practice are provided.
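As a hedged illustration of the mechanics the abstract evaluates, the sketch below fits a VAR(1) by least squares to simulated data and computes impulse responses. The bivariate system and its coefficients are invented for the example; this is not the paper's limited-participation model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stable bivariate VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.standard_normal(2)

# OLS estimate of the VAR coefficients (regress y_t on y_{t-1})
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse responses to a unit shock in the first variable, horizons 0..9
irf = [np.linalg.matrix_power(A_hat, h)[:, 0] for h in range(10)]
```

With a correctly specified VAR the estimated responses decay like the true ones; the paper's point is that such responses can be misleading when the VAR is a misspecified reduction of the true model.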

Relevance: 20.00%

Abstract:

Most cases of cost overruns in public procurement are related to important changes in the initial project design. This paper deals with the problem of design specification in public procurement and provides a rationale for design misspecification. We propose a model in which the sponsor decides how much to invest in design specification and competitively awards the project to a contractor. After the project has been awarded, the sponsor engages in bilateral renegotiation with the contractor in order to accommodate changes in the initial project's design that new information makes desirable. When procurement takes place in the presence of horizontally differentiated contractors, the design's specification level is seen to affect the resulting degree of competition. The paper highlights this interaction between market competition and design specification and shows that the sponsor's optimal strategy, when facing an imperfectly competitive market supply, is to underinvest in design specification so as to make significant cost overruns likely. Since no such misspecification occurs in a perfectly competitive market, cost overruns are seen to arise as a consequence of a lack of competition in the procurement market.

Relevance: 20.00%

Abstract:

Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations.

Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared to the linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, which ranged from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model.

Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification. Simulation of power for misspecified models confirmed the results based on correlation methods but also illustrated the effect of model degrees of freedom on power.
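A minimal sketch of this kind of power comparison, assuming an invented quadratic truth: a misspecified linear fit has essentially no power to detect the relationship, while the correct specification does. The Fisher z-test and all parameter values are illustrative, not the paper's NHANES analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def power(transform, true_fn, n=100, reps=2000, z_crit=1.96):
    """Empirical power of testing corr(transform(x), y) = 0 when y = true_fn(x) + noise."""
    hits = 0
    for _ in range(reps):
        x = rng.uniform(-1, 1, n)
        y = true_fn(x) + rng.standard_normal(n)
        r = np.corrcoef(transform(x), y)[0, 1]
        z = np.arctanh(r) * np.sqrt(n - 3)   # Fisher z-statistic
        hits += abs(z) > z_crit
    return hits / reps

true_fn = lambda x: x ** 2                   # illustrative true (quadratic) signal
p_linear  = power(lambda x: x,      true_fn) # misspecified: linear in x
p_correct = power(lambda x: x ** 2, true_fn) # correct specification
```

With x symmetric around zero, the linear term is uncorrelated with x squared, so the misspecified test's rejection rate stays near the nominal level while the correct one detects the effect reliably.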

Relevance: 20.00%

Abstract:

Model misspecification affects the classical test statistics used to assess the fit of Item Response Theory (IRT) models. Robust tests, such as the Generalized Lagrange Multiplier and Hausman tests, have been derived under model misspecification, but their use has not been widely explored in the IRT framework. In the first part of the thesis, we introduce the Generalized Lagrange Multiplier test to detect differential item response functioning in IRT models for binary data under model misspecification. By means of a simulation study and a real data analysis, we compare its performance with the classical Lagrange Multiplier test, computed using the Hessian and the cross-product matrix, and the Generalized Jackknife Score test. The power of these tests is computed both empirically and asymptotically. The misspecifications considered are local dependence among items and a non-normal distribution of the latent variable. The results highlight that, under mild model misspecification, all tests perform well, while, under strong model misspecification, their performance deteriorates. None of the tests considered shows overall superior performance to the others. In the second part of the thesis, we extend the Generalized Hausman test to detect non-normality of the latent variable distribution. To build the test, we consider a semi-nonparametric IRT model, which assumes a more flexible latent variable distribution. By means of a simulation study and two real applications, we compare the performance of the Generalized Hausman test with the M2 limited-information goodness-of-fit test and the Likelihood-Ratio test. Additionally, the information criteria are computed. The Generalized Hausman test performs better than the Likelihood-Ratio test in terms of Type I error rates and better than the M2 test in terms of power. The performance of the Generalized Hausman test and the information criteria deteriorates when the sample size is small and there are few items.
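The contrast between Hessian-based and cross-product-based information that such robust tests exploit can be sketched in a scalar example (not the thesis's IRT setting): fit a Poisson rate to overdispersed counts, and the two information estimates diverge, so the naive and sandwich standard errors differ. All numbers are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdispersed counts: we fit a Poisson model, but the truth is gamma-mixed
# (negative binomial), i.e. the fitted model is misspecified
n = 20000
lam = rng.gamma(shape=2.0, scale=2.0, size=n)    # unobserved heterogeneity
x = rng.poisson(lam)

mle = x.mean()                                   # Poisson MLE of the rate
H = n / mle                                      # Hessian-based information
G = np.sum((x / mle - 1.0) ** 2)                 # cross-product (OPG) information

se_naive    = 1.0 / np.sqrt(H)                   # valid only if the model is correct
se_sandwich = np.sqrt(G) / H                     # robust sandwich standard error
```

Under a correctly specified model the two information estimates agree asymptotically and the standard errors coincide; here the sandwich standard error is markedly larger, which is the discrepancy robust tests are built on.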

Relevance: 20.00%

Abstract:

The thesis deals with the problem of Model Selection (MS) motivated by information and prediction theory, focusing on parametric time series (TS) models. The main contribution of the thesis is the extension to the multivariate case of the Misspecification-Resistant Information Criterion (MRIC), a recently introduced criterion that solves the original research problem posed by Akaike 50 years ago, which led to the definition of the AIC. The importance of MS is witnessed by the huge amount of literature devoted to it and published in scientific journals of many different disciplines. Despite such widespread treatment, the contributions that adopt a mathematically rigorous approach are not so numerous, and one of the aims of this project is to review and assess them. Chapter 2 discusses methodological aspects of MS from the viewpoint of information theory. Information criteria (IC) for the i.i.d. setting are surveyed along with their asymptotic properties, as well as the cases of small samples, misspecification, and further estimators. Chapter 3 surveys criteria for TS. IC and prediction criteria are considered for univariate models (AR, ARMA) in the time and frequency domains, parametric multivariate models (VARMA, VAR), nonparametric nonlinear models (NAR), and high-dimensional models. The MRIC answers Akaike's original question on efficient criteria for possibly-misspecified (PM) univariate TS models in multi-step prediction with high-dimensional data and nonlinear models. Chapter 4 extends the MRIC to PM multivariate TS models for multi-step prediction by introducing the Vectorial MRIC (VMRIC). We show that the VMRIC is asymptotically efficient by proving the decomposition of the MSPE matrix and the consistency of its Method-of-Moments Estimator (MoME) for Least Squares multi-step prediction with a univariate regressor. Chapter 5 extends the VMRIC to the general multiple-regressor case by showing that the MSPE matrix decomposition holds, obtaining consistency for its MoME, and proving its efficiency. The chapter concludes with a digression on the conditions for PM VARX models.
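As a hedged reminder of the baseline the MRIC refines, here is AIC-based order selection for autoregressions on simulated data; the AR(2) coefficients are invented for the example, and this is plain AIC, not the MRIC.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(2) series (illustrative coefficients)
T = 3000
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

def aic_ar(y, p):
    """Gaussian AIC of a least-squares AR(p) fit.

    For simplicity each order uses its own effective sample y[p:]."""
    Y = y[p:]
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    sigma2 = np.mean((Y - X @ beta) ** 2)
    return len(Y) * np.log(sigma2) + 2 * p

orders = range(1, 7)
best_p = min(orders, key=lambda p: aic_ar(y, p))
```

AIC trades off residual variance against the penalty 2p; the MRIC instead targets multi-step prediction error and remains meaningful when every candidate model is misspecified.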

Relevance: 10.00%

Abstract:

Genetic research on the risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases in estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor the selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of the simulated genetic and environmental correlations becomes possible. An illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.

Relevance: 10.00%

Abstract:

In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component, such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests, and for that purpose a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier-type misspecification tests, which are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns.
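A minimal sketch of the multiplicative decomposition described above, with invented parameters: the unconditional component g_t is a logistic function of rescaled time, and a GARCH(1,1) recursion drives the conditional component of returns standardized by g_t. This illustrates the model class, not the article's specification procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def g_logistic(t, gamma, c):
    """Logistic transition in rescaled time t in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

T = 2000
s = np.arange(T) / T
# Unconditional component: smooth level shift (illustrative parameters)
g = 0.5 + 1.5 * g_logistic(s, gamma=20.0, c=0.5)

# Conditional GARCH(1,1) component on returns standardized by g
omega, alpha, beta = 0.05, 0.10, 0.85
h = np.ones(T)
r = np.zeros(T)
for t in range(1, T):
    h[t] = omega + alpha * (r[t - 1] ** 2) / g[t - 1] + beta * h[t - 1]
    r[t] = np.sqrt(h[t] * g[t]) * rng.standard_normal()   # var_t = h_t * g_t
```

The sequence of specification tests in the article decides how many such logistic transitions g_t needs; here a single transition roughly quadruples the unconditional variance across the sample.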

Relevance: 10.00%

Abstract:

This paper analyzes the persistence of shocks affecting the real exchange rates of a panel of seventeen OECD developed countries during the post-Bretton Woods era. The adoption of a panel data framework allows us to distinguish two different sources of shocks, idiosyncratic and common, each of which may have different persistence patterns in the real exchange rates. We first investigate the stochastic properties of the panel data set using panel stationarity tests that simultaneously consider both the presence of cross-section dependence and multiple structural breaks, features that have not received much attention in previous persistence analyses. Empirical results indicate that real exchange rates are non-stationary when the analysis does not account for structural breaks, although this conclusion is reversed when they are modelled. Consequently, misspecification errors due to the non-consideration of structural breaks lead to upward-biased measures of shock persistence. The persistence measures for the idiosyncratic and common shocks estimated in this paper always turn out to be less than one year.
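The upward persistence bias from ignoring a break can be sketched with an AR(1) around a shifting mean (break date, coefficient and shift size all invented): demeaning with a single overall mean inflates the estimated autoregressive root and hence the implied half-life of shocks.

```python
import numpy as np

rng = np.random.default_rng(5)

T = 1000
rho_true = 0.5                                    # shocks actually die out fast
mu = np.where(np.arange(T) < T // 2, 0.0, 3.0)    # one structural break in the mean
q = np.zeros(T)
for t in range(1, T):
    q[t] = mu[t] + rho_true * (q[t - 1] - mu[t - 1]) + rng.standard_normal()

def ar1_rho(x, mean):
    """Least-squares AR(1) coefficient of x demeaned by `mean` (scalar or array)."""
    z = x - mean
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

rho_nobreak = ar1_rho(q, q.mean())                # break ignored: single mean
rho_break   = ar1_rho(q, mu)                      # break modelled: regime means

half_life = lambda rho: np.log(0.5) / np.log(rho) # periods to absorb half a shock
```

Ignoring the break loads the level shift into the autoregressive coefficient, pushing it toward one and exaggerating persistence, which mirrors the bias the paper documents.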

Relevance: 10.00%

Abstract:

This paper studies the implications for monetary policy of heterogeneous expectations in a New Keynesian model. The assumption of rational expectations is replaced with parsimonious forecasting models in which agents select between predictors that are underparameterized. In a Misspecification Equilibrium, agents only select the best-performing statistical models. We demonstrate that, even when monetary policy rules satisfy the Taylor principle by adjusting nominal interest rates more than one-for-one with inflation, there may exist equilibria with Intrinsic Heterogeneity. Under certain conditions, there may exist multiple Misspecification Equilibria. We show that these findings have important implications for business cycle dynamics and for the design of monetary policy.
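A hedged sketch of the predictor-selection step: with an invented two-variable data-generating process, each underparameterized forecaster conditions on only one regressor, and agents pick the one with the lower mean squared error. Both remain misspecified, so neither attains the noise floor.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative DGP: y depends on two correlated variables
n = 20000
x = rng.standard_normal(n)
z = 0.6 * x + 0.8 * rng.standard_normal(n)       # correlated regressors
y = 1.0 * x + 0.5 * z + rng.standard_normal(n)   # noise variance = 1

def mse_underparam(regressor):
    """Best attainable MSE for a forecaster restricted to one regressor."""
    b = np.sum(regressor * y) / np.sum(regressor ** 2)   # OLS, no intercept
    return np.mean((y - b * regressor) ** 2)

mse_x, mse_z = mse_underparam(x), mse_underparam(z)
# Agents select the best-performing misspecified predictor
chosen = 'x' if mse_x < mse_z else 'z'
```

Because the regressors are correlated, each restricted forecaster partially absorbs the omitted variable; Intrinsic Heterogeneity arises in the paper when, in equilibrium, both predictors perform equally well and coexist.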

Relevance: 10.00%

Abstract:

Robust decision making implies welfare costs, or robustness premia, when the approximating model is the true data-generating process. To examine the importance of these premia at the aggregate level, we employ a simple two-sector dynamic general equilibrium model with human capital and introduce an additional form of precautionary behavior. The latter arises from the robust decision maker's ability to reduce the effects of model misspecification by allocating time and existing human capital to this end. We find that the extent of the robustness premia critically depends on the productivity of time relative to that of human capital. When the relative efficiency of time is low, despite transitory welfare costs, there are gains from following robust policies in the long run. In contrast, a high relative productivity of time implies misallocation costs that remain even in the long run. Finally, depending on the technology used to reduce model uncertainty, we find that while an increasing fear of model misspecification leads to a net increase in precautionary behavior, investment and output can fall.

Relevance: 10.00%

Abstract:

Empirical researchers interested in how governance shapes various aspects of economic development frequently use the Worldwide Governance Indicators (WGI). These variables come in the form of an estimate along with a standard error reflecting the uncertainty of that estimate. Existing empirical work simply uses the estimates as an explanatory variable and discards the information provided by the standard errors. In this paper, we argue that the appropriate practice is to take the uncertainty around the WGI estimates into account through the use of multiple imputation. We investigate the importance of our proposed approach by revisiting the results of recently published studies in three applications, covering the impact of governance on (i) capital flows; (ii) international trade; and (iii) income levels around the world. We generally find that the estimated effects of governance are highly sensitive to the use of multiple imputation. We also show that model misspecification is a concern for the results of our reference studies. We conclude that the effects of governance are hard to establish once we account for uncertainty around both the WGI estimates and the correct model specification.
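The proposed practice can be sketched with invented data: draw repeated imputations of the scores from normal distributions centered at the published estimates with the published standard errors, re-run the regression on each draw, and combine with Rubin's rules so the reported variance includes a between-imputation term. This is a simplified draw-based scheme, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented setup: a latent governance score is published as estimate +/- se
n, M = 500, 100
true_g = rng.standard_normal(n)
se = np.full(n, 0.5)
est = true_g + rng.normal(0.0, se)               # published estimate
y = true_g + rng.standard_normal(n)              # outcome driven by the latent score

betas, wvars = [], []
for _ in range(M):
    g = rng.normal(est, se)                      # one imputed draw of the scores
    X = np.column_stack([np.ones(n), g])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)
    cov_b = s2 * np.linalg.inv(X.T @ X)
    betas.append(b[1])
    wvars.append(cov_b[1, 1])

# Rubin's rules: total variance = within + (1 + 1/M) * between
beta_mi = np.mean(betas)
W, B = np.mean(wvars), np.var(betas, ddof=1)
se_mi = np.sqrt(W + (1.0 + 1.0 / M) * B)
```

Note the combined standard error exceeds the naive within-imputation one; the slope is also attenuated by the measurement noise, which is exactly the kind of sensitivity the paper documents.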

Relevance: 10.00%

Abstract:

We extend PML theory to account for information on the conditional moments up to order four, but without assuming a parametric model, in order to avoid the risk of misspecifying the conditional distribution. The key statistical tool is the quartic exponential family, which allows us to generalize the PML2 and QGPML1 methods proposed in Gourieroux et al. (1984) to PML4 and QGPML2 methods, respectively. An asymptotic theory is developed. The key numerical tool is the Gauss-Freud integration scheme, which solves a computational problem that has previously been raised in several fields. Simulation exercises demonstrate the feasibility and robustness of the methods.
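The paper's Gauss-Freud scheme targets integrals with Freud-type weights; as a generic stand-in (a different, standard weight function), the sketch below uses Gauss-Hermite quadrature to evaluate the Gaussian moment integrals up to order four of the kind that moment-based PML methods require.

```python
import numpy as np

# Gauss-Hermite nodes/weights for integrals against exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(20)

def normal_moment(k):
    """E[X^k] for X ~ N(0,1), via the change of variable t = sqrt(2) x."""
    t = np.sqrt(2.0) * nodes
    return np.sum(weights * t ** k) / np.sqrt(np.pi)

m2, m4 = normal_moment(2), normal_moment(4)      # exact values: 1 and 3
```

A 20-node rule integrates polynomials up to degree 39 exactly, so second and fourth moments are recovered to machine precision; the paper's contribution is an analogous scheme adapted to the quartic exponential family's weight.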

Relevance: 10.00%

Abstract:

We propose a method to evaluate cyclical models which requires neither knowledge of the DGP nor the exact empirical specification of the aggregate decision rules. We derive robust restrictions within a class of models, use some of them to identify structural shocks, and use the others to evaluate the model or to contrast sub-models. The approach has good size and excellent power properties, even in small samples. We show how to examine the validity of a class of models, sort out the relevance of certain frictions, evaluate the importance of an added feature, and indirectly estimate structural parameters.

Relevance: 10.00%

Abstract:

A method to estimate DSGE models using the raw data is proposed. The approach links the observables to the model counterparts via a flexible specification which does not require the model-based component to be solely located at business cycle frequencies, allows the non-model-based component to take various time series patterns, and permits model misspecification. Applying standard data transformations induces biases in structural estimates and distortions in the policy conclusions. The proposed approach recovers important model-based features in selected experimental designs. Two widely discussed issues are used to illustrate its practical use.