902 results for Incidental parameter bias
Abstract:
Recent attempts to incorporate optimal fiscal policy into New Keynesian models subject to nominal inertia have tended to assume that policy makers are benevolent and have access to a commitment technology. A separate literature, on the New Political Economy, has focused on real economies where there is strategic use of policy instruments in a world of political conflict. In this paper we combine these literatures and assume that policy is set in a New Keynesian economy by one of two policy makers facing electoral uncertainty (in terms of infrequent elections and an endogenous voting mechanism). The policy makers generally share the social welfare function but differ in their preferences over fiscal expenditure (in its size and/or composition). Given this environment, policy is realistically constrained to be time-consistent. In a sticky-price economy, such heterogeneity gives rise to the possibility of one policy maker using (nominal) debt strategically to tie the hands of the other party and to influence the outcome of future elections. This can give rise to a deficit bias, implying a sub-optimally high level of steady-state debt, and can also imply a sub-optimal response to shocks. The steady-state distortions and inflation bias this generates, combined with the volatility induced by the electoral cycle in a sticky-price environment, can significantly …
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
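The forgetting-factor device this abstract relies on can be illustrated in a few lines. Below is a minimal sketch for a single TVP regression equation (the paper's setting is a full VAR); the function name and defaults are illustrative, not from the paper. Instead of specifying a state-noise covariance, the predicted state covariance is simply inflated by dividing by a forgetting factor λ close to one:

```python
import numpy as np

def kalman_forgetting(y, X, lam=0.99, obs_var=1.0):
    """Kalman filter for a TVP regression y_t = x_t' theta_t + e_t.
    The state prediction covariance is inflated by the forgetting
    factor lam, replacing an explicit state-noise covariance."""
    T, k = X.shape
    theta = np.zeros(k)              # state (coefficient) estimate
    P = np.eye(k)                    # state covariance
    thetas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P = P / lam                  # forgetting step: discount old data
        f = x @ P @ x + obs_var      # one-step forecast variance
        K = P @ x / f                # Kalman gain
        theta = theta + K * (y[t] - x @ theta)   # measurement update
        P = P - np.outer(K, x) @ P
        thetas[t] = theta
    return thetas
```

With λ = 0.99 the filter effectively weights roughly the last 100 observations, which is what makes the approach feasible for large systems: no simulation is needed, only one filtering pass per equation.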
Abstract:
This paper investigates the usefulness of switching Gaussian state space models as a tool for implementing dynamic model selection (DMS) or averaging (DMA) in time-varying parameter regression models. DMS methods allow for model switching, where a different model can be chosen at each point in time. Thus, they allow the explanatory variables in the time-varying parameter regression model to change over time. DMA carries out model averaging in a time-varying manner. We compare our exact approach to DMA/DMS with a popular existing procedure which relies on the use of forgetting factor approximations. In an application, we use DMS to select different predictors when forecasting inflation. We also compare different ways of implementing DMA/DMS and investigate whether they lead to similar results.
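The forgetting-factor approximation that the paper uses as its benchmark can be sketched compactly. This is a hedged illustration of the Raftery-style recursion, not the paper's exact switching state space approach: model probabilities are flattened by raising them to a power α before each Bayesian update with the models' one-step predictive likelihoods.

```python
import numpy as np

def dma_weights(pred_lik, alpha=0.99):
    """Approximate DMA model probabilities via a forgetting factor alpha.
    pred_lik: (T, J) array of one-step predictive likelihoods for J models.
    Returns (T, J) filtered model probabilities; DMS picks the argmax."""
    T, J = pred_lik.shape
    w = np.full(J, 1.0 / J)          # equal initial model probabilities
    weights = np.zeros((T, J))
    for t in range(T):
        pred = w ** alpha            # forgetting step: flatten towards uniform
        pred /= pred.sum()
        post = pred * pred_lik[t]    # update with predictive likelihoods
        w = post / post.sum()
        weights[t] = w
    return weights
```

Because α < 1 keeps every model's probability bounded away from zero in the prediction step, the recursion can shift weight quickly when a previously poor model starts forecasting well, which is the behavior DMA/DMS is designed to capture.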
Abstract:
New Keynesian models rely heavily on two workhorse models of nominal inertia - price contracts of random duration (Calvo, 1983) and price adjustment costs (Rotemberg, 1982) - to generate a meaningful role for monetary policy. These alternative descriptions of price stickiness are often used interchangeably since, to a first-order approximation, they imply an isomorphic Phillips curve and, if the steady state is efficient, identical objectives for the policy maker and, as a result, in an LQ framework, the same policy conclusions. In this paper we compute time-consistent optimal monetary policy in benchmark New Keynesian models containing each form of price stickiness. Using global solution techniques, we find that the inflation bias problem under Calvo contracts is significantly greater than under Rotemberg pricing, despite the fact that the former typically exhibits far greater welfare costs of inflation. The rates of inflation observed under this policy are non-trivial and suggest that the model can comfortably generate the rates of inflation at which the problematic issues highlighted in the trend inflation literature emerge, as well as the movements in trend inflation emphasized in empirical studies of the evolution of inflation. Finally, we consider the response to cost-push shocks across both models and find that these can also be significantly different. The choice of which form of nominal inertia to adopt is not innocuous.
Abstract:
This paper analyses intergenerational earnings mobility in Spain, correcting for different selection biases. We address the co-residence selection problem by combining information from two samples and using the two-sample two-stage least squares estimator. We find a small decrease in elasticity when we move to younger cohorts. Furthermore, we find a higher correlation in the case of daughters than in the case of sons; however, when we account for employment selection in the case of daughters, by adopting a Heckman-type correction method, the difference between sons and daughters disappears. By decomposing the sources of earnings elasticity across generations, we find that the correlation between child's and father's occupation is the most important component. Finally, quantile regression estimates show that the influence of the father's earnings is greater when we move to the lower tail of the offspring's earnings distribution, especially in the case of daughters' earnings.
Abstract:
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: i) time varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
Abstract:
Using a panel of 38 economies over the period 2001 to 2010, we analyse the link between different facets of education and diversification in international portfolios. We find that university education and mathematical numeracy, in addition to financial skill, play an important role in reducing home bias. After separating countries according to their level of financial development, we find that less developed economies with more university graduates, or with a higher level of mathematical numeracy, have a lower level of local equity bias than more developed countries. We also find that the beneficial effect of education is more pronounced during the most recent financial crisis, especially for economies with less developed financial markets.
Abstract:
This paper develops a new test of true versus spurious long memory, based on log-periodogram estimation of the long memory parameter using skip-sampled data. A correction factor is derived to overcome the bias in this estimator due to aliasing. The procedure is designed to be used in the context of a conventional test of significance of the long memory parameter, and a composite test procedure is described that has the properties of known asymptotic size and consistency. The test is implemented using the bootstrap, with the distribution under the null hypothesis approximated by a dependent-sample bootstrap technique that captures the short-run dependence remaining after fractional differencing. The properties of the test are investigated in a set of Monte Carlo experiments. The procedure is illustrated by applications to exchange rate volatility and dividend growth series.
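For readers unfamiliar with the building block, the baseline log-periodogram (GPH) estimator of the long memory parameter d can be sketched as follows. This is only the standard estimator on the full sample; the paper's skip-sampling and aliasing correction are not reproduced here, and the bandwidth choice is an assumption:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long memory parameter d:
    regress log periodogram ordinates on -2*log(2*sin(freq/2)) over the
    first m Fourier frequencies; the OLS slope estimates d."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                    # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - np.mean(x))
    I = (np.abs(dft[1:m + 1]) ** 2) / (2 * np.pi * n)   # periodogram
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    Xmat = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(Xmat, np.log(I), rcond=None)
    return beta[1]                           # slope = d-hat
```

Skip-sampling means applying such an estimator to every k-th observation; since that folds high frequencies onto low ones (aliasing), the estimator becomes biased, which is the bias the paper's correction factor targets.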
Selection bias and unobservable heterogeneity applied to the wage equation of European married women
Abstract:
This paper utilizes a panel data sample selection model to correct for selection in the analysis of longitudinal labor market data for married women in European countries. We estimate the female wage equation in a framework of unbalanced panel data models with sample selection. The wage equations of females have several potential sources of …
Abstract:
Sex-dependent selection often leads to spectacularly different phenotypes in males and females. In species in which sexual dimorphism is not complete, it is unclear which benefits females and males derive from displaying a trait that is typical of the other sex. In barn owls (Tyto alba), females exhibit on average larger black eumelanic spots than males but members of the two sexes display this trait in the same range of possible values. In a 12-year study, we show that selection exerted on spot size directly or on genetically correlated traits strongly favoured females with large spots and weakly favoured males with small spots. Intense directional selection on females caused an increase in spot diameter in the population over the study period. This increase is due to a change in the autosomal genes underlying the expression of eumelanic spots but not of sex-linked genes. Female-like males produced more daughters than sons, while male-like females produced more sons than daughters when mated to a small-spotted male. These sex ratio biases appear adaptive because sons of male-like females and daughters of female-like males had above-average survival. This demonstrates that selection exerted against individuals displaying a trait that is typical of the other sex promoted the evolution of specific life history strategies that enhance their fitness. This may explain why in many organisms sexual dimorphism is often not complete.
Abstract:
This project aims to analyse the kinds of questions the teacher asks students in order to encourage them to participate in her classes. Accordingly, the researcher has read relevant literature and has analysed a short excerpt of a video recorded during her first practicum. She has also analysed a number of activities carried out during her second practicum in order to find out whether she had improved her questioning skills in the classroom.
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both of exact and approximate nature, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
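One common way to bias a classical greedy heuristic, consistent with the idea the abstract describes, is to replace "always pick the best candidate" with a skewed random pick in which the greedy-best choice remains the most likely. The sketch below uses a quasi-geometric distribution and a toy greedy knapsack pass as the host heuristic; the distribution, parameter names, and the knapsack example are illustrative assumptions, not taken from the paper:

```python
import math
import random

def biased_random_choice(sorted_candidates, beta=0.3, rng=random):
    """Pick from a greedily sorted list with geometrically decaying
    probability: index 0 (the greedy choice) is most likely, but every
    candidate keeps a positive chance. beta -> 1 recovers pure greedy."""
    n = len(sorted_candidates)
    u = rng.random()
    idx = int(math.log(u) / math.log(1.0 - beta)) % n
    return sorted_candidates[idx]

def biased_greedy_knapsack(items, capacity, beta=0.3, seed=None):
    """One biased-randomized pass of a greedy knapsack heuristic.
    items: (value, weight) pairs; candidates sorted by value/weight."""
    rng = random.Random(seed)
    remaining = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    load, total = 0, 0
    while remaining:
        pick = biased_random_choice(remaining, beta, rng)
        remaining.remove(pick)
        if load + pick[1] <= capacity:
            load += pick[1]
            total += pick[0]
    return total
```

Running such a pass many times with different seeds yields the "large set of alternative good solutions" the abstract mentions, while keeping each pass as cheap as the underlying greedy heuristic.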
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
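The per-sample efficiency step described above — fitting a regression line to a subset of points in the log-linear phase — can be sketched as follows. This assumes baseline-corrected fluorescence and a user-supplied cycle window; the article's baseline-reconstruction algorithm itself is not reproduced, and the function name is illustrative:

```python
import numpy as np

def pcr_efficiency(fluor, window):
    """Estimate per-sample PCR efficiency from baseline-corrected
    fluorescence values, one per cycle, assuming F_c ~ F_0 * E**c
    in the log-linear phase. window = (start_cycle, end_cycle) selects
    the cycles to fit. Returns (E, F_0)."""
    cycles = np.arange(window[0], window[1])
    logf = np.log10(fluor[window[0]:window[1]])
    slope, intercept = np.polyfit(cycles, logf, 1)   # line in log space
    return 10 ** slope, 10 ** intercept              # efficiency, F_0
```

Because the efficiency enters the starting-concentration estimate as an exponent, even a small baseline error that tilts this fitted slope is propagated exponentially — which is exactly the sensitivity the article quantifies, and why averaging efficiencies per amplicon reduces both variability and bias.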