14 results for Test data
in the Digital Repository of Fundação Getúlio Vargas (FGV)
Abstract:
This work compares the forecast efficiency of different methodologies applied to Brazilian consumer inflation (IPCA). We compare forecasting models using disaggregated and aggregated data over horizons of up to twelve months ahead. The disaggregated models are estimated by SARIMA at different levels of disaggregation. Aggregated models are estimated by time-series techniques such as SARIMA, state-space structural models and Markov-switching. Forecast accuracy is compared using the model selection procedure known as the Model Confidence Set and the Diebold-Mariano procedure. We find evidence of forecast accuracy gains in models using more disaggregated data.
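Of the two comparison procedures cited, the Diebold-Mariano test is simple enough to illustrate in a few lines. The sketch below assumes squared-error loss and one-step-ahead forecasts; the error series are simulated placeholders, not the paper's IPCA forecasts.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM statistic for H0: equal expected loss, squared-error case.
    Autocovariances of the loss differential up to lag h-1 enter the
    long-run variance estimate."""
    d = e1**2 - e2**2                       # loss differential
    n = len(d)
    dbar = d.mean()
    gamma = [np.mean((d[k:] - dbar) * (d[:n - k] - dbar)) for k in range(h)]
    lrv = gamma[0] + 2.0 * sum(gamma[1:])   # truncated long-run variance
    dm = dbar / np.sqrt(lrv / n)
    return dm, 2 * stats.norm.sf(abs(dm))   # asymptotically N(0,1) under H0

rng = np.random.default_rng(0)
e1 = rng.normal(0, 1.0, 120)                # model 1 forecast errors (simulated)
e2 = rng.normal(0, 1.2, 120)                # model 2 forecast errors (simulated)
print(diebold_mariano(e1, e2))
```

The Model Confidence Set procedure additionally requires a bootstrap over the whole set of candidate models and is omitted here.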
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
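A hedged sketch of the core idea, the bias-corrected average forecast: estimate the average bias of a panel of forecasters and subtract it from the cross-sectional average forecast. The data-generating process and variable names below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 30                        # time periods, forecasters (illustrative)
y = rng.normal(size=T)                # realizations of the target variable
bias = rng.normal(0.5, 0.2, size=N)   # each forecaster's additive bias
f = y[:, None] + bias + rng.normal(scale=0.3, size=(T, N))  # forecast panel

B_hat = (f - y[:, None]).mean()       # feasible estimate of the average bias
bcaf = f.mean(axis=1) - B_hat         # bias-corrected average forecast

print("MSE, plain average :", np.mean((f.mean(axis=1) - y) ** 2))
print("MSE, bias-corrected:", np.mean((bcaf - y) ** 2))
```

In practice the bias would be estimated on an in-sample window and the correction applied out-of-sample; the full sample is used here for brevity.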
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.
Abstract:
This paper develops a framework to test whether discrete-valued, irregularly spaced financial transactions data follow a subordinated Markov process. For that purpose, we consider a specific optional sampling scheme in which a continuous-time Markov process is observed only when it crosses some discrete level. This framework is convenient, for it accommodates not only the irregular spacing of transactions data but also price discreteness. Further, it turns out that, under such an observation rule, the current price duration is independent of previous price durations given the current price realization. A simple nonparametric test then follows by examining whether this conditional independence property holds. Finally, we investigate whether bid-ask spreads follow Markov processes using transactions data from the New York Stock Exchange. The motivation lies in the fact that asymmetric-information models of market microstructure predict that the Markov property does not hold for the bid-ask spread. The results are mixed, in the sense that the Markov assumption is rejected for three out of the five stocks we analyzed.
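One way to operationalize the paper's testable implication, that the current price duration is independent of previous durations given the current price realization, is to test for zero correlation between consecutive durations within each price level. The sketch below uses a Spearman rank correlation on simulated data; the paper's actual nonparametric test differs in its details.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
prices = rng.integers(0, 3, size=2000)        # discrete price levels (simulated)
durations = rng.exponential(1.0, size=2000)   # price durations (simulated)

for level in np.unique(prices):
    idx = np.where(prices[1:] == level)[0] + 1   # spells ending at this level
    prev, curr = durations[idx - 1], durations[idx]
    rho, p = stats.spearmanr(prev, curr)         # H0: conditional independence
    print(f"price level {level}: rho = {rho:.3f}, p = {p:.3f}")
```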
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before conducting additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship, as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship, as in Cubadda and Hecq (2001); these represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose tests that are robust in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte-Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
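The permanent-transitory extraction in the second contribution can be illustrated in its simplest form: a Beveridge-Nelson decomposition when the first difference of the series follows an AR(1). The model and data below are assumptions for illustration; the paper instead extracts the components under common-cycle restrictions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, mu = 500, 0.5, 0.1
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):                 # simulate an ARIMA(1,1,0) in levels
    dy[t] = mu * (1 - phi) + phi * dy[t - 1] + rng.normal(scale=0.2)
y = np.cumsum(dy)

X = np.column_stack([np.ones(T - 1), dy[:-1]])   # AR(1) for the difference
c, phi_hat = np.linalg.lstsq(X, dy[1:], rcond=None)[0]
mu_hat = c / (1 - phi_hat)

# BN trend = y_t + expected sum of future demeaned growth
#          = y_t + phi/(1-phi) * (dy_t - mu)
trend = y + (phi_hat / (1 - phi_hat)) * (dy - mu_hat)
cycle = y - trend                     # transitory component
print("std of transitory component:", cycle.std())
```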
Abstract:
To assess the quality of school education, much of educational research is concerned with comparisons of test score means or medians. In this paper, we shift this focus and explore test score data by addressing some often-neglected questions. In the case of Brazil, the mean of fourth-grade Math test scores declined by approximately 0.2 standard deviations in the late 1990s. But what about changes in the distribution of scores? It is unclear whether the decline was caused by a deterioration in student performance in the upper and/or lower tails of the distribution. To answer this question, we propose the use of the relative distribution method developed by Handcock and Morris (1999). The advantage of this methodology is that it compares two distributions of test score data through a single distribution and synthesizes all the differences between them. Moreover, it is possible to decompose the total difference between two distributions into a level effect (changes in the median) and a shape effect (changes in the shape of the distribution). We find that the decline in average test scores is mainly caused by a worsening in the position of all students throughout the distribution of scores and is not specific to any particular quantile of the distribution.
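A minimal sketch of the relative-distribution method: map the comparison-year scores through the reference-year empirical CDF; a flat histogram of the resulting "relative data" means no change. The median-shift split into level and shape effects mirrors the decomposition described above. Scores here are simulated placeholders for the Brazilian test data.

```python
import numpy as np

rng = np.random.default_rng(4)
ref = rng.normal(250, 50, 5000)       # reference-year scores (simulated)
comp = rng.normal(240, 55, 5000)      # comparison-year scores (simulated)

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at the points `x`."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

shift = np.median(comp) - np.median(ref)
r_total = ecdf(ref, comp)             # overall relative data
r_level = ecdf(ref, ref + shift)      # level effect: pure median shift
r_shape = ecdf(ref + shift, comp)     # shape effect: what the shift misses

for name, r in [("total", r_total), ("level", r_level), ("shape", r_shape)]:
    hist, _ = np.histogram(r, bins=10, range=(0, 1), density=True)
    print(name, np.round(hist, 2))    # 1.0 everywhere would mean "no change"
```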
Abstract:
We build a stochastic discount factor (SDF) using information on US domestic financial data only, and provide evidence that it accounts for stylized facts of foreign markets that escape SDFs generated by consumption-based models. By interpreting our SDF as the projection of the pricing kernel from a fully specified model onto the space of returns, our results indicate that a model that accounts for the behavior of domestic assets goes a long way toward accounting for the behavior of foreign asset prices. In our tests, we address predictability, a defining feature of the Forward Premium Puzzle (FPP), by using instruments that are known to forecast excess returns in the moment restrictions associated with Euler equations in both the equity and foreign markets.
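The moment restrictions used in such tests take the familiar conditional Euler-equation form E[m_{t+1} R_{t+1} - 1 | z_t] = 0, turned into unconditional moments by interacting the pricing error with instruments. The sketch below evaluates such moments and a chi-squared statistic on simulated data; the SDF here is a placeholder, not the paper's estimated domestic SDF.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 300
m = np.exp(rng.normal(-0.01, 0.02, T))   # candidate SDF (placeholder)
R = 1.0 / m + rng.normal(0, 0.05, T)     # gross returns priced by m, plus noise
z = np.column_stack([np.ones(T), rng.normal(size=T)])  # instruments known at t

u = m * R - 1.0                           # pricing (Euler-equation) errors
g = z[:-1] * u[1:, None]                  # lag the instruments one period
gbar = g.mean(axis=0)                     # sample moment conditions
S = np.cov(g.T)                           # moment covariance matrix
J = (T - 1) * gbar @ np.linalg.solve(S, gbar)  # chi2 test of the restrictions
print("moments:", np.round(gbar, 4), "  J =", round(float(J), 2))
```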
Abstract:
The objective of this paper is to test for the optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule-of-thumb behavior and habit. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions. At the 5% level, we only rejected optimality twice out of 48 times. Empirical test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter γ to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to the study of intertemporal substitution in a representative-agent framework. In this case, we find little evidence of a lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility function model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we only rejected optimality twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
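Both consumption papers above estimate nonlinear Euler equations by GMM. The sketch below does this for the canonical CRRA special case, E[β(C_{t+1}/C_t)^{-γ} R_{t+1} - 1 | z_t] = 0; the papers nest rule-of-thumb and habit behavior as additional parameters of the same moment condition. Data and instruments are simulated assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
T, beta0, gamma0 = 400, 0.97, 2.0
g = np.exp(rng.normal(0.02, 0.02, T))            # consumption growth C_{t+1}/C_t
R = g**gamma0 / beta0 + rng.normal(0, 0.03, T)   # returns satisfying the EE on average
z = np.column_stack([np.ones(T - 1), g[:-1], R[:-1]])  # lagged instruments

def gmm_obj(theta):
    beta, gamma = theta
    u = beta * g[1:] ** (-gamma) * R[1:] - 1.0   # Euler-equation error
    gbar = (z * u[:, None]).mean(axis=0)         # moment conditions
    return gbar @ gbar                           # first-step identity weighting

res = minimize(gmm_obj, x0=[0.95, 1.0], method="Nelder-Mead")
print("estimated (beta, gamma):", np.round(res.x, 3))
```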
Abstract:
This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and DF distributions. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have heavy-tailed distributions or are contaminated by outliers, the proposed test is more powerful than the traditional ADF test. Nominal interest rates (at different maturities) are shown to be stationary according to the robust test but not according to the nonrobust ADF test. This result seems to suggest that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis-testing procedures that do not account for the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by CCAPM, optimal monetary policy and option pricing models.
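A hedged illustration of the idea behind the robust test: estimate the Dickey-Fuller regression Δy_t = a + ρ y_{t-1} + e_t with Student-t weights (one simple robust M-estimator standing in for the paper's partially adaptive estimator) so that heavy-tailed innovations are downweighted. The degrees of freedom and data are assumptions, and the resulting t-ratio must be compared with the paper's mixture critical values, not the standard ADF table.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
y = np.cumsum(rng.standard_t(df=2, size=T))    # unit root, heavy-tailed shocks

dy = np.diff(y)
X = np.column_stack([np.ones(T - 1), y[:-1]])  # DF regression: dy = a + rho*y_{t-1}
b_ols = np.linalg.lstsq(X, dy, rcond=None)[0]  # standard (nonrobust) estimate

b, nu = b_ols.copy(), 3.0                      # assumed t degrees of freedom
for _ in range(100):                           # IRLS with Student-t weights
    r = dy - X @ b
    s2 = np.mean(r**2)
    w = (nu + 1.0) / (nu + r**2 / s2)          # large residuals get small weight
    Xw = X * w[:, None]
    b = np.linalg.solve(X.T @ Xw, Xw.T @ dy)

print("rho (OLS):   ", b_ols[1])               # rho = 0 under the unit-root null
print("rho (robust):", b[1])
```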
Abstract:
This paper takes a first step toward a methodology to quantify the influence of regulation on short-run earnings dynamics. It also provides evidence on the patterns of wage adjustment adopted during the recent high-inflation experience in Brazil. The large variety of official wage indexation rules adopted in Brazil in recent years, combined with the availability of monthly surveys on labor markets, makes the Brazilian case a good laboratory to test how regulation affects earnings dynamics. In particular, the combination of large sample sizes with the possibility of following the same worker through short periods of time allows us to estimate the cross-sectional distribution of longitudinal statistics based on observed earnings (e.g., monthly and annual rates of change). The empirical strategy adopted here is to compare the distributions of longitudinal statistics extracted from actual earnings data with simulations generated from the minimum adjustment requirements imposed by the Brazilian Wage Law. The analysis provides statistics on how binding wage regulation schemes were. The visual analysis of the distribution of wage adjustments proves useful in highlighting stylized facts that may guide future empirical work.
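The empirical strategy can be sketched as follows: simulate the earnings path implied by the minimum legal indexation and compare the distribution of its changes with the observed one. The rule below (full correction for accumulated inflation every four months) is a deliberately simplified stand-in for the actual Brazilian Wage Law schemes, and the inflation path is simulated.

```python
import numpy as np

rng = np.random.default_rng(8)
months = 24
pi = rng.uniform(0.10, 0.30, months)      # monthly inflation path (simulated)

w = np.ones(months)                        # wage under the minimum rule only
acc = 0.0
for t in range(1, months):
    acc = (1 + acc) * (1 + pi[t - 1]) - 1  # inflation accumulated since last raise
    w[t] = w[t - 1]
    if t % 4 == 0:                         # stylized adjustment date
        w[t] *= 1 + acc
        acc = 0.0

sim = np.diff(np.log(w))                   # simulated monthly wage changes
print("months with zero adjustment:", np.mean(sim == 0))
print("mean log adjustment when granted:", sim[sim > 0].mean())
```

These simulated distributions would then be compared with the same longitudinal statistics computed from the survey earnings data.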
Abstract:
Four different hypotheses have been analyzed in the literature to explain deunionization, namely: the decrease in workers' demand for union representation; the impact of globalization on unionization rates; technical change; and changes in the legal and political systems against unions. This paper aims to test all of them. We estimate a logistic regression using a panel-data procedure with 35 industries from 1973 to 1999 and conclude that the four hypotheses cannot be rejected by the data. We also use a variance-decomposition analysis to study the impact of these variables on the drop in unionization rates. In the model with no demographic variables, the results show that these tested economic variables can account for 10% to 12% of the drop in unionization. However, when we include demographic variables, the tested variables can account for 10% to 35% of the total variation in unionization rates. In this case, the four hypotheses tested can explain up to 50% of the total drop in unionization rates explained by the model.
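A minimal sketch of the estimation step: a pooled logit for individual union status on the hypothesized covariates, fitted by Newton-Raphson. The covariates stand in for the four tested channels (demand for representation, globalization, technical change, legal/political climate) and the data are simulated; the paper additionally exploits the industry-by-year panel structure.

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 5000, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + 4 channels
beta_true = np.array([-1.0, 0.5, -0.3, 0.2, -0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))       # union status (0/1)

beta = np.zeros(k + 1)
for _ in range(25):                           # Newton-Raphson for the logit MLE
    mu = 1 / (1 + np.exp(-X @ beta))          # fitted probabilities
    grad = X.T @ (y - mu)                     # score vector
    H = X.T @ (X * (mu * (1 - mu))[:, None])  # information matrix
    beta += np.linalg.solve(H, grad)
print("logit coefficients:", np.round(beta, 2))
```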
Abstract:
In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It allows the computational costs of WAP tests to be reduced by replacing the strongly unbiased condition. The latter restriction requires the test, under the null hypothesis, to be uncorrelated with a given statistic whose dimension equals the number of instruments. The new boundary condition only requires the test to be uncorrelated with a linear combination of that statistic. WAP tests under the two restrictions are found to perform similarly in numerical exercises. We apply the different tests discussed here to an empirical example. Using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
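WAP tests require integrating power against a weighting measure and are beyond a short sketch. As a simpler, standard weak-identification-robust benchmark for the same problem, the code below implements the Anderson-Rubin test of H0: β = β0 in y = xβ + u with instruments Z; its size is correct no matter how weak the instruments are. The data and the weak first stage are simulated assumptions.

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    """AR test: regress y - x*beta0 on Z and F-test that Z is irrelevant."""
    e = y - x * beta0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    n, k = Z.shape
    F = (e @ P @ e / k) / (e @ (np.eye(n) - P) @ e / (n - k))
    return F, stats.f.sf(F, k, n - k)       # p-value under H0

rng = np.random.default_rng(10)
n = 200
Z = rng.normal(size=(n, 3))                 # instruments
x = Z @ np.array([0.05, 0.0, 0.0]) + rng.normal(size=n)  # weak first stage
y = 0.5 * x + rng.normal(size=n)
print(anderson_rubin(y, x, Z, beta0=0.5))   # beta0 is true: should not reject
```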