7 results for requirement-based testing
in Repositório digital da Fundação Getúlio Vargas - FGV
Abstract:
This paper proposes unit root tests based on partially adaptive estimation. The proposed tests provide an intermediate class of inference procedures that are more efficient than traditional OLS-based methods and simpler than unit root tests based on fully adaptive estimation using nonparametric methods. The limiting distribution of the proposed test is a combination of the standard normal and the traditional Dickey-Fuller (DF) distribution, and it includes the traditional ADF test as a special case when a Gaussian density is used. Taking into account the well-documented heavy-tail behavior of economic and financial data, we consider unit root tests coupled with a class of partially adaptive M-estimators based on Student-t distributions, which include the normal distribution as a limiting case. Monte Carlo experiments indicate that, in the presence of heavy-tailed distributions or innovations contaminated by outliers, the proposed test is more powerful than the traditional ADF test. We apply the proposed test to several macroeconomic time series that have heavy-tailed distributions. The unit root hypothesis is rejected for U.S. real GNP, supporting the literature on transitory shocks in output. However, evidence against unit roots is not found for the real exchange rate or the nominal interest rate, even when heavy tails are taken into account.
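The Dickey-Fuller regression underlying the ADF test discussed above can be sketched in a few lines of Python. This shows only the plain OLS baseline (the partially adaptive M-estimation step is not shown), and the simulated series, seed, and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Random walk driven by heavy-tailed Student-t(3) innovations
eps = rng.standard_t(df=3, size=T)
y = np.cumsum(eps)

# Dickey-Fuller regression (no constant): dy_t = rho * y_{t-1} + e_t
dy, ylag = np.diff(y), y[:-1]
rho = (ylag @ dy) / (ylag @ ylag)          # OLS estimate of rho
resid = dy - rho * ylag
sigma2 = resid @ resid / (len(dy) - 1)
t_stat = rho / np.sqrt(sigma2 / (ylag @ ylag))
print(f"rho = {rho:.4f}, DF t-statistic = {t_stat:.3f}")
```

Under the null of a unit root this t-statistic follows the DF distribution rather than the standard normal, which is why DF critical values (about -1.95 at the 5% level for the no-constant case) are used instead of -1.96.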
Abstract:
This paper develops nonparametric tests of independence between two stationary stochastic processes. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, I take advantage of a generalized entropic measure so as to build a class of nonparametric tests of independence. Asymptotic normality and local power are derived using the functional delta method for kernels, whereas finite sample properties are investigated through Monte Carlo simulations.
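A minimal sketch of the testing idea, gauging the closeness between the joint density and the product of the marginal densities with kernel estimates, might look as follows in Python. The Hellinger-type distance used here is a stand-in for the paper's generalized entropic measure, and all data are simulated:

```python
import numpy as np
from scipy.stats import gaussian_kde

def dependence_stat(x, y, grid_size=40):
    """Hellinger-type distance between the joint KDE and the product of
    the marginal KDEs, evaluated on a grid (illustrative statistic)."""
    joint = gaussian_kde(np.vstack([x, y]))
    fx, fy = gaussian_kde(x), gaussian_kde(y)
    gx = np.linspace(x.min(), x.max(), grid_size)
    gy = np.linspace(y.min(), y.max(), grid_size)
    GX, GY = np.meshgrid(gx, gy)
    p_joint = joint(np.vstack([GX.ravel(), GY.ravel()]))
    p_prod = fx(GX.ravel()) * fy(GY.ravel())
    return np.mean((np.sqrt(p_joint) - np.sqrt(p_prod)) ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(400)
stat_ind = dependence_stat(x, rng.standard_normal(400))            # independent pair
stat_dep = dependence_stat(x, x + 0.3 * rng.standard_normal(400))  # dependent pair
print(stat_ind, stat_dep)
```

Under independence the joint density is close to the product of the marginals and the statistic is near zero; strong dependence drives it up, which is the intuition behind the asymptotic and local-power results in the paper.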
Abstract:
Consumption is an important macroeconomic aggregate, accounting for about 70% of GNP. Finding sub-optimal behavior in consumption decisions casts serious doubt on whether optimizing behavior is applicable on an economy-wide scale, which, in turn, challenges whether it is applicable at all. This paper makes several contributions to the literature on consumption optimality. First, we provide a new result on the basic rule-of-thumb regression, showing that it is observationally equivalent to the one obtained in a well-known optimizing real-business-cycle model. Second, for rule-of-thumb tests based on the Asset-Pricing Equation, we show that the omission of the higher-order term in the log-linear approximation yields inconsistent estimates when lagged observables are used as instruments. However, these are exactly the instruments that have traditionally been used in this literature. Third, we show that nonlinear estimation of a system of N Asset-Pricing Equations can be done efficiently even if the number of asset returns (N) is high vis-a-vis the number of time-series observations (T). We argue that efficiency can be restored by aggregating returns into a single measure that fully captures intertemporal substitution. Indeed, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Pricing Equation, since the latter is a linear function of individual returns. This forms the basis of a new test of rule-of-thumb behavior, which can be viewed as testing for the importance of rule-of-thumb consumers when the optimizing agent holds an equally-weighted or a weighted portfolio of traded assets. Using our setup, we find no signs of either rule-of-thumb behavior by U.S. consumers or of habit formation in consumption decisions in econometric tests. Indeed, we show that a simple representative-agent model with CRRA utility is able to explain the time-series data on consumption and aggregate returns.
There, the intertemporal discount factor is significant and ranges from 0.956 to 0.969, while the relative risk-aversion coefficient is precisely estimated, ranging from 0.829 to 1.126. There is no evidence of rejection in over-identifying-restriction tests.
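A toy version of the nonlinear estimation step, fitting the CRRA Asset-Pricing Equation E[beta * g^(-gamma) * R] = 1 by minimizing sample moment conditions, can be sketched in Python. The data-generating process, instruments, and identity weighting matrix are illustrative simplifications, not the paper's actual GMM setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 2000
beta_true, gamma_true = 0.96, 1.0

# Consumption growth and a gross return built so that the Euler
# equation E[beta * g^(-gamma) * R] = 1 holds by construction
g = np.exp(0.02 + 0.15 * rng.standard_normal(T))               # gross consumption growth
eta = np.exp(0.05 * rng.standard_normal(T) - 0.5 * 0.05 ** 2)  # mean-one pricing error
R = (g ** gamma_true / beta_true) * eta                        # gross asset return

def gmm_objective(theta):
    beta, gamma = theta
    u = beta * g ** (-gamma) * R - 1.0         # Euler-equation residual
    m = np.array([u.mean(), (u * g).mean()])   # instruments: constant and g
    return m @ m                               # identity weighting matrix

res = minimize(gmm_objective, x0=[0.9, 2.0], method="Nelder-Mead")
beta_hat, gamma_hat = res.x
print(f"beta_hat = {beta_hat:.3f}, gamma_hat = {gamma_hat:.3f}")
```

With two moment conditions and two parameters the system is exactly identified; the estimates recover values close to the true (beta, gamma) used in the simulation.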
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition for assessing the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model be orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common-feature relationship, as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler, and Vahid (2011), and also a polynomial serial-correlation common-feature relationship, as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte Carlo exercise.
Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common-cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
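For a single series whose first difference follows an AR(1), the Beveridge-Nelson permanent-transitory decomposition mentioned above has a closed-form expression; a sketch on simulated data follows (the AR(1) specification and parameter values are illustrative, not the paper's multivariate common-cycle setup):

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, mu = 1000, 0.5, 0.1

# Simulate y whose growth dy follows an AR(1) around mean mu
eps = rng.standard_normal(T)
dy = np.empty(T)
dy[0] = mu + eps[0]
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + eps[t]
y = np.cumsum(dy)

# Estimate the AR(1) by OLS, then apply the Beveridge-Nelson formula:
# permanent component tau_t = y_t + phi/(1 - phi) * (dy_t - mu)
mu_hat = dy.mean()
x, z = dy[:-1] - mu_hat, dy[1:] - mu_hat
phi_hat = (x @ z) / (x @ x)
trend = y + phi_hat / (1.0 - phi_hat) * (dy - mu_hat)  # permanent (random-walk) part
cycle = y - trend                                      # transitory part
print(f"phi_hat = {phi_hat:.3f}")
```

The permanent component is the long-run forecast of the series net of deterministic drift; the transitory part is stationary by construction, which is the property exploited when looking for bubbles in the long-run component of asset prices.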
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility-function model: rule-of-thumb behavior and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to testing for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
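As a schematic of how habit nests the CRRA case, the Euler-equation residual below uses an external-habit specification in which theta = 0 recovers the standard CRRA residual; the functional form, parameter values, and simulated data are illustrative, not the paper's all-encompassing model:

```python
import numpy as np

def habit_euler_residual(c, R, beta, gamma, theta):
    """Euler-equation residual with external habit theta; theta = 0
    recovers the standard CRRA residual (illustrative formulation)."""
    s = c[1:-1] - theta * c[:-2]       # habit-adjusted consumption at t
    s_next = c[2:] - theta * c[1:-1]   # habit-adjusted consumption at t+1
    return beta * (s_next / s) ** (-gamma) * R[2:] - 1.0

rng = np.random.default_rng(6)
c = np.exp(np.cumsum(0.02 + 0.02 * rng.standard_normal(300)))  # consumption level
R = 1.04 + 0.05 * rng.standard_normal(300)                     # gross return

u_crra = habit_euler_residual(c, R, beta=0.96, gamma=1.0, theta=0.0)
u_habit = habit_euler_residual(c, R, beta=0.96, gamma=1.0, theta=0.7)
print(f"mean residuals: {u_crra.mean():.4f} (CRRA), {u_habit.mean():.4f} (habit)")
```

In a GMM test, sample averages of such residuals interacted with instruments form the moment conditions; significance tests on theta (and on a rule-of-thumb share) are then tests of the deviations from the CRRA benchmark.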
Abstract:
This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and the DF distribution. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have heavy-tailed distributions or are contaminated by outliers, then the proposed test is more powerful than the traditional ADF test. Nominal interest rates (at different maturities) are shown to be stationary according to the robust test but not according to the nonrobust ADF test. This result seems to suggest that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis-testing procedures that do not account for the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by the CCAPM, optimal monetary policy, and option-pricing models.
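The contrast between OLS and a Student-t M-estimate in the Dickey-Fuller regression can be sketched as follows; the t(3) criterion with unit scale is a simplified stand-in for the paper's partially adaptive estimator, and the contamination scheme is illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
T = 400

# Stationary AR(1) (true rho in the DF regression is 0.85 - 1 = -0.15)
# whose innovations are contaminated by occasional large outliers
eps = rng.standard_normal(T)
hit = rng.random(T) < 0.05
eps[hit] += 8.0 * rng.standard_normal(hit.sum())
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = 0.85 * y[t - 1] + eps[t]

dy, ylag = np.diff(y), y[:-1]

# OLS estimate of rho in dy_t = rho * y_{t-1} + e_t
rho_ols = (ylag @ dy) / (ylag @ ylag)

# Student-t(3) M-estimate: minimize the negative t log-likelihood,
# which downweights large residuals relative to least squares
nu = 3.0
def neg_t_loglik(rho):
    r = dy - rho * ylag
    return np.sum(np.log(1.0 + r ** 2 / nu))

rho_t = minimize_scalar(neg_t_loglik, bounds=(-1.0, 1.0), method="bounded").x
print(f"rho_ols = {rho_ols:.3f}, rho_t = {rho_t:.3f}")
```

Because the t criterion gives outlying residuals less leverage, the M-estimate is typically less noisy under contamination, which is the intuition for the power gains the paper reports.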
Abstract:
Exchange rate misalignment assessment has become more relevant in the recent period, particularly after the financial crisis of 2008. There are different methodologies to address real exchange rate misalignment. Real exchange rate misalignment is defined as the difference between the actual real effective exchange rate and some equilibrium norm. Different norms are available in the literature. Our paper aims to contribute to the literature by showing that the Behavioral Equilibrium Exchange Rate (BEER) approach adopted by Clark & MacDonald (1999), Ubide et al. (1999), Faruqee (1994), Aguirre & Calderón (2005), and Kubota (2009), among others, can be improved in the following two ways. The first consists of jointly modeling the real effective exchange rate, the trade balance, and the net foreign asset position. The second has to do with the possibility of explicitly testing over-identifying restrictions implied by economic theory, allowing the analyst to show that these restrictions are not falsified by the empirical evidence. If the economics-based identifying restrictions are not rejected, it is also possible to decompose exchange rate misalignment into two pieces, one related to the long-run fundamentals of the exchange rate and the other related to external-account imbalances. We also discuss some necessary conditions that should be satisfied for discarding trade balance information without compromising exchange rate misalignment assessment. A statistical (but not a theoretical) identifying strategy for calculating exchange rate misalignment is also discussed. We illustrate the advantages of our approach by analyzing the Brazilian case. We show that the traditional approach disregards important information on external-accounts equilibrium for this economy.
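The core ingredient shared by these norms, measuring misalignment as the deviation of the real effective exchange rate from an estimated cointegrating relationship with fundamentals, can be illustrated with a simple Engle-Granger first step on simulated data (the single-fundamental setup and variable names are hypothetical simplifications of the BEER approach):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 600

# A random-walk fundamental (hypothetical, e.g. net foreign assets)
# and a real effective exchange rate cointegrated with it
f = np.cumsum(rng.standard_normal(T))
reer = 0.8 * f + rng.standard_normal(T)   # stationary deviation from the norm

# Engle-Granger first step: OLS of the REER on the fundamental;
# the residual is the estimated misalignment
X = np.column_stack([np.ones(T), f])
coef, *_ = np.linalg.lstsq(X, reer, rcond=None)
misalignment = reer - X @ coef
print(f"cointegrating slope = {coef[1]:.3f}")
```

The paper's contribution goes beyond this single-equation norm by modeling the REER jointly with the trade balance and net foreign assets and by testing the theory-implied over-identifying restrictions, but the residual-as-misalignment logic is the same.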