31 results for time varying parameter model


Relevance: 100.00%

Publisher:

Abstract:

This work empirically evaluates the Taylor rule for the US and Brazil using the Kalman filter and Markov-switching regimes. We show that the parameters of the rule change significantly with variations in both output and output gap proxies, considering hidden variables and states. Such conclusions naturally call for robust optimal monetary rules. We also show that Brazil and the US have very contrasting parameters: first, because Brazil presents a time-varying intercept; second, because of the rigidity of the parameters of the Brazilian Taylor rule, regardless of the output gap proxy, data frequency, or sample period. Finally, we show that the long-run inflation parameter of the US Taylor rule is less than one in many periods, contrasting strongly with Orphanides (forthcoming) and Clarida, Galí and Gertler (2000), and the same happens with Brazilian monthly data.
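
The abstract itself contains no code; the sketch below is only a minimal illustration, under assumed noise variances and with simulated data, of how a Taylor rule with random-walk coefficients can be filtered with a Kalman filter. All variable names (infl_gap, output_gap, policy_rate) are hypothetical.

```python
# Minimal sketch: Taylor rule with random-walk coefficients,
#   i_t = beta_t' x_t + eps_t,   beta_t = beta_{t-1} + eta_t,
# filtered with a Kalman filter. Data and noise variances are simulated
# and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 200
infl_gap = rng.normal(size=T)            # inflation minus target (simulated)
output_gap = rng.normal(size=T)          # output gap proxy (simulated)
X = np.column_stack([np.ones(T), infl_gap, output_gap])

# True coefficients drift as random walks (simulation only)
true_beta = np.cumsum(rng.normal(scale=0.02, size=(T, 3)), axis=0) + [2.0, 1.5, 0.5]
policy_rate = np.sum(X * true_beta, axis=1) + rng.normal(scale=0.1, size=T)

# Kalman filter with assumed observation and state noise variances
sigma_eps2, sigma_eta2 = 0.1**2, 0.02**2
beta = np.zeros(3)                       # filtered coefficient vector
P = np.eye(3) * 10.0                     # loose initial covariance
filtered = np.zeros((T, 3))
for t in range(T):
    x = X[t]
    P = P + np.eye(3) * sigma_eta2       # prediction: random-walk state
    f = x @ P @ x + sigma_eps2           # one-step forecast variance
    k = P @ x / f                        # Kalman gain
    beta = beta + k * (policy_rate[t] - x @ beta)
    P = P - np.outer(k, x @ P)
    filtered[t] = beta

print("final filtered Taylor-rule coefficients:", filtered[-1].round(2))
```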

Relevance: 100.00%

Publisher:

Abstract:

There is strong empirical evidence that risk premia in long-term interest rates are time-varying. These risk premia critically depend on interest rate volatility, yet existing research has not examined the impact of time-varying volatility on excess returns for long-term bonds. To address this issue, we incorporate interest rate option prices, which are very sensitive to interest rate volatility, into a dynamic model for the term structure of interest rates. We estimate three-factor affine term structure models using both swap rates and interest rate cap prices. When we incorporate option prices, the model better captures interest rate volatility and is better able to predict excess returns for long-term swaps over short-term swaps, both in- and out-of-sample. Our results indicate that interest rate options contain valuable information about risk premia and interest rate dynamics that cannot be extracted from interest rates alone.

Relevance: 100.00%

Publisher:

Abstract:

This work analyzes the relationship between monetary policy and inflation persistence in the recent period, after the introduction of the inflation targeting regime in Brazil. Using a simplified New Keynesian model, the degree of persistence of the inflation gap is modeled as a function of the weights of the monetary policy rule. The evolution of the Taylor rule over time is confronted with the estimated curve of inflation gap persistence, showing that changes in the conduct of monetary policy lead to changes in the level of inflation persistence in the economy. An adaptation of the model, with a Taylor rule that incorporates output gap expectations, reaches the same results with greater precision.

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, over the period 1976-1992. We also test a conditional APT model by using the difference between the 3-day rate (CDB) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
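
As a rough illustration of the GMM approach named above, the sketch below sets up conditional moment conditions E[(r_t - a - b·rm_t)·z_t] = 0 with a constant and the lagged market return as instruments, and minimizes the identity-weighted quadratic form. The data and instrument choice are assumptions for the example, not the paper's specification.

```python
# Minimal GMM sketch for a conditional CAPM moment condition
#   E[(r_t - a - b * rm_t) * z_t] = 0,  z_t = (1, rm_{t-1}),
# with simulated data and an identity weighting matrix.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 500
rm = rng.normal(0.01, 0.05, size=T)                  # market excess return (simulated)
r = 0.002 + 1.2 * rm + rng.normal(0, 0.03, size=T)   # asset excess return (simulated)

z = np.column_stack([np.ones(T - 1), rm[:-1]])       # instruments known one period ahead
r_t, rm_t = r[1:], rm[1:]

def gmm_objective(theta):
    a, b = theta
    u = r_t - a - b * rm_t                           # pricing errors
    g = (u[:, None] * z).mean(axis=0)                # sample moment conditions
    return g @ g                                     # identity-weighted criterion

res = minimize(gmm_objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
print("GMM estimates (a, b):", res.x.round(4))
```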

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing tests. Monte Carlo experiments show that the proposed test possesses the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals the existence of instability in the unconditional variance when the entire sample is considered, but stability is found in subsamples.
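
The behaviour described in the first sentence can be illustrated with the standard KPSS implementation in statsmodels; in the small Monte Carlo below the break point, variance ratio, and sample size are arbitrary choices, and the experiment only exercises the KPSS test itself, not the authors' proposed complement.

```python
# Monte Carlo sketch: how often the KPSS test rejects stationarity when an
# otherwise stationary series has a mid-sample break in its unconditional
# variance. Design choices (T, break location, variance ratio) are illustrative.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(42)
T, n_rep, alpha = 500, 200, 0.05
rejections = 0
for _ in range(n_rep):
    e = rng.normal(size=T)
    e[T // 2:] *= 3.0                                # variance jumps mid-sample
    stat, pvalue, *_ = kpss(e, regression="c", nlags="auto")
    rejections += pvalue < alpha
print(f"KPSS rejection rate at the {alpha:.0%} level: {rejections / n_rep:.2f}")
```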

Relevance: 100.00%

Publisher:

Abstract:

I study the asset-pricing implications of an environment with feedback traders and rational arbitrageurs. Feedback traders are defined as possibly naive investors who buy after a rise in prices and sell after a drop in prices. I consider two types of feedback strategies: (1) short-term (SF), motivated by institutional rules such as stop-losses and margin calls, and (2) long-term (LF), motivated by representativeness bias among non-sophisticated investors. Their presence in the market follows a stochastic regime-switching process. A short-lived assumption for the arbitrageurs prevents the correction of the mispricing generated by feedback strategies. The model estimated using US data suggests that the regime switching is able to capture the time-varying autocorrelation of returns. The separation of feedback types helps to identify the long-term component that otherwise would not show up due to the large movements implied by the SF type. The paper also has normative implications for practitioners, since it provides a methodology to identify mispricings driven by feedback traders.
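
As a loose illustration of the regime-switching mechanism described above, the fragment below fits a two-regime Markov-switching AR(1) to simulated returns using statsmodels. The simulated process, the number of regimes, and the AR order are assumptions made for the example, not the paper's estimated model.

```python
# Sketch: two-regime Markov-switching AR(1) for returns, illustrating how
# regime switching can capture time-varying autocorrelation.
# Data are simulated; regimes, AR order, and parameters are illustrative.
import numpy as np
from statsmodels.tsa.regime_switching.markov_autoregression import MarkovAutoregression

rng = np.random.default_rng(2)
T = 600
returns = np.zeros(T)
state = 0
for t in range(1, T):
    if rng.random() < 0.05:                  # occasional regime switch
        state = 1 - state
    phi = 0.6 if state == 1 else 0.0         # feedback regime: positive autocorrelation
    returns[t] = phi * returns[t - 1] + rng.normal(scale=0.01)

model = MarkovAutoregression(returns, k_regimes=2, order=1,
                             switching_ar=True, switching_variance=True)
result = model.fit()
print(result.summary())
# result.smoothed_marginal_probabilities holds per-observation regime probabilities
```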

Relevance: 100.00%

Publisher:

Abstract:

The concept of covered interest parity suggests that, in the absence of barriers to arbitrage between markets, the interest rate differential between two assets that are identical in all relevant respects except for the currency of denomination, and in the absence of exchange rate risk, should be equal to zero. However, once there are non-diversifiable risks, represented by country risk and inherent to emerging economies, investors will demand an interest rate higher than the simple difference between domestic and foreign interest rates. This study aims to assess whether adjusting the covered interest parity condition for risk premia is sufficient to validate the no-arbitrage relationship for the Brazilian market during the period from 2007 to 2010. Country risk contaminates all financial assets issued in a given economy and can be described as the sum of default risk (or sovereign risk) and convertibility risk as perceived by the market. To estimate the no-arbitrage equation, regressions were run using Ordinary Least Squares, time-varying parameters (TVP), and Recursive Least Squares, and the results obtained are not conclusive about the validity of the covered interest parity relationship, even after adjusting for risk premia. Measurement errors in the data, transaction costs, and interventions and restrictive policies in the foreign exchange market may have contributed to this result.
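
Among the estimators mentioned, recursive least squares has a ready-made implementation in statsmodels; the fragment below shows one way such a regression could be set up on simulated covered-interest-parity data. The variable names and the data-generating process are assumptions made for the example.

```python
# Sketch: recursive least squares for a covered interest parity relation,
#   onshore_dollar_rate_t = const + b1 * rate_diff_t + b2 * risk_premium_t + e_t,
# on simulated series; names and magnitudes are illustrative only.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.recursive_ls import RecursiveLS

rng = np.random.default_rng(7)
T = 300
rate_diff = rng.normal(3.0, 0.5, size=T)        # domestic minus foreign rate
risk_premium = rng.normal(1.0, 0.3, size=T)     # country risk proxy
onshore_dollar_rate = 0.2 + 0.9 * rate_diff + risk_premium + rng.normal(0, 0.2, size=T)

X = sm.add_constant(np.column_stack([rate_diff, risk_premium]))
result = RecursiveLS(onshore_dollar_rate, X).fit()
print(result.summary())                         # full-sample estimates and CUSUM statistic
# result.plot_recursive_coefficient() would plot the recursively estimated paths
```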

Relevance: 100.00%

Publisher:

Abstract:

This paper investigates the relationship between consumer demand and corporate performance in several consumer industries in the UK, using two independent datasets. It uses data on consumer expenditures and the retail price index to estimate Almost Ideal Demand Systems on micro-data and compute time-varying price elasticities of demand for disaggregated commodity groups. Then, it matches the product definitions to the Standard Industry Classification and uses the estimated elasticities to investigate the impact of consumer behaviour on firm-level profitability equations. The time-varying household characteristics are ideal instruments for the demand effects in the firms' supply equation. The paper concludes that demand elasticities have a significant and tangible impact on the profitability of UK firms and that this impact can shed some light on the relationship between market structure and economic performance.

Relevance: 100.00%

Publisher:

Abstract:

We present a continuous-time target zone model of speculative attacks. Contrary to most of the literature, which considers the certainty case, i.e., agents know for sure the Central Bank's future behavior, we build uncertainty into the model in two different ways. First, we consider the case in which the level of reserves at which the central bank lets the regime collapse is uncertain. Alternatively, we analyze the case in which, with some probability, the government may change its policy, reducing the initially positive trend in domestic credit. In both cases, contrary to the case of a fixed exchange rate regime, speculators face a cost of launching a tentative attack that may not succeed. Such a cost induces a delay and may even prevent the attack's occurrence. At the time of the tentative attack, the exchange rate moves either discretely up, if the attack succeeds, or down, if it fails. The results are consistent with the fact that, typically, an attack involves substantial profits and losses for the speculators. In particular, if agents believe that the government will control fiscal imbalances in the future, or alternatively, if they believe the trend in domestic credit to be temporary, the attack is postponed even in the presence of a signal of an imminent collapse. Finally, we also show that the timing of a speculative attack increases with the width of the target zone.

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing tests. Monte Carlo experiments show that the proposed test possesses the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of covariance stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals the existence of instability in the unconditional variance when the entire sample is considered, but stability is found in sub-samples.

Relevance: 100.00%

Publisher:

Abstract:

Lucas (2000) estimates that the US welfare costs of inflation are around 1% of GDP. This measurement is consistent with a specific distorting channel in terms of the Bailey triangle under the demand for monetary base schedule (outside money): the displacement of resources from the production of consumption goods to household transaction time à la Baumol. Here, we also consider several new types of distortions in the manufacturing and banking industries. Our new evidence shows that both banks and firms demand special occupational employments to avoid the inflation tax. We define the concept of "float labor": the occupational employments that are affected by the inflation rate. More administrative workers are hired relative to blue-collar workers for producing consumption goods. This new phenomenon makes the manufacturing industry more roundabout. To take this new stylized fact and others into account, we redo at the same time both "Model 5: A Banking Sector - 2" formulated by Lucas (1993) and "The Competitive Banking System" proposed by Yoshino (1993). This modelling allows us to better characterize the new types of misallocations. We find that the maximum value of the resources wasted by the US economy occurred in the years 1980-81, after the second oil shock. For these years, we estimate the excess resources allocated to every specific distorting channel: i) the US commercial banks spent additional resources of around 2% of GDP; ii) between 2.4% and 4.1% of GDP was used for the firm floating time; and iii) between 3.1% and 4.5% of GDP was allocated to household transaction time. The Bailey triangle under the demand for monetary base schedule represented around 1% of GDP, which is consistent with Lucas (2000). We estimate that the US total welfare costs of inflation were around 10% of GDP in terms of the consumption goods foregone. The big difference between our results and Lucas (2000) is mainly due to the Harberger triangle in the market for loans (inside money), which is part of the household transaction time, of the firm float labor, and of the distortion in the banking industry. This triangle arises due to the widening interest rate spread in the presence of a distorting inflation tax and under a fractional reserve system. The Harberger triangle can represent 80% of the total welfare costs of inflation, while the remaining percentage is split almost equally between the Bailey triangle and the resources used for bank services. Finally, we formulate several theorems in terms of the optimal non-neutral monetary policy so as to compare with classical monetary theory.
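
For reference, the Bailey triangle referred to above is commonly computed as the area under the money demand curve up to the nominal interest rate, net of the rectangle i·m(i); the snippet below evaluates it for an assumed log-log money demand m(i) = A·i^(-eta), with A and eta chosen arbitrarily for illustration.

```python
# Bailey-triangle sketch: welfare cost of inflation as the area under a
# money demand curve m(i) = A * i**(-eta) from 0 to the nominal rate i,
# minus the rectangle i * m(i). A and eta are arbitrary illustrative values.
from scipy.integrate import quad

A, eta = 0.05, 0.5                       # assumed scale and interest elasticity

def money_demand(i):
    return A * i ** (-eta)

def bailey_triangle(i):
    area_under_curve, _ = quad(money_demand, 0.0, i)
    return area_under_curve - i * money_demand(i)

# Closed form for this demand curve: A * eta / (1 - eta) * i**(1 - eta)
for nominal_rate in (0.05, 0.10, 0.15):
    numeric = bailey_triangle(nominal_rate)
    closed = A * eta / (1 - eta) * nominal_rate ** (1 - eta)
    print(f"i = {nominal_rate:.2f}: welfare cost = {numeric:.4f} (closed form {closed:.4f})")
```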

Relevance: 100.00%

Publisher:

Abstract:

In da Costa et al. (2006) we have shown how the same pricing kernel can account for the excess returns of the S&P 500 over the US short-term bond and of the uncovered over the covered trading of foreign government bonds. In this paper we estimate and test the overidentifying restrictions of Euler equations associated with six different versions of the Consumption Capital Asset Pricing Model. Our main finding is that the same (however often unreasonable) values for the parameters are estimated for all models in both markets. In most cases, the rejections or otherwise of overidentifying restrictions occur for both markets, suggesting that success and failure stories for the equity premium repeat themselves in foreign exchange markets. Our results corroborate the findings in da Costa et al. (2006) that indicate a strong similarity between the behavior of excess returns in the two markets when modeled as risk premiums, providing empirical grounds to believe that the proposed preference-based solutions to puzzles in domestic financial markets can certainly shed light on the Forward Premium Puzzle.
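
For concreteness, the Euler equation behind these tests takes the form E[β(C_{t+1}/C_t)^(-γ)R_{t+1}] = 1 for each asset; the snippet below evaluates the sample version of this moment for one CRRA pricing kernel on simulated data, with preference parameters picked arbitrarily, as a minimal sketch rather than the paper's estimation procedure.

```python
# Sketch: sample Euler-equation error for a CRRA pricing kernel,
#   E[ beta * (C_{t+1}/C_t)**(-gamma) * R_{t+1} ] - 1 = 0,
# on simulated consumption growth and returns; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(11)
T = 400
cons_growth = np.exp(rng.normal(0.02, 0.02, size=T))   # gross consumption growth
gross_return = np.exp(rng.normal(0.05, 0.15, size=T))  # gross asset return
beta, gamma = 0.98, 2.0                                 # assumed preference parameters

sdf = beta * cons_growth ** (-gamma)                    # stochastic discount factor
euler_error = np.mean(sdf * gross_return) - 1.0
print(f"sample Euler-equation error: {euler_error:.4f}")
```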

Relevance: 100.00%

Publisher:

Abstract:

The paper quantifies the effects on violence and police activity of the Pacifying Police Unit program (UPP) in Rio de Janeiro and the possible geographical spillovers caused by this policy. The program consists of taking selected shantytowns controlled by criminal organizations back under State control. The strategy of the policy is to dislodge the criminals and then establish a permanent community-oriented police station in the slum. The installation of police units in these slums can generate geographical spillover effects to other regions of the State of Rio de Janeiro. We use the interrupted time series approach proposed by Gonzalez-Navarro (2013) to address the effects of a policy when there is contagion of the control group, and we find that criminal outcomes decrease in areas with a UPP and in areas near treated regions. Furthermore, we build a model which allows us to perform counterfactuals of this policy and to estimate causal effects in other areas of the State of Rio de Janeiro outside the city.

Relevance: 100.00%

Publisher:

Abstract:

Previous research suggests it is beneficial to diversify investments into emerging economies. In the paper International Portfolio Diversification: Evidence from Emerging Markets, we investigate whether this still holds true given the assumption of greater world market integration. Our results suggest widespread positive time-varying correlations between emerging and developed markets. However, pair-wise cross-country correlations give evidence that emerging markets have low integration with developed markets. Consequently, we evaluate the out-of-sample performance of a portfolio with emerging equity countries, confirming the initial statement that it has a better risk-adjusted performance than a purely developed-markets portfolio.
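
Time-varying correlations of the kind described above are often proxied in practice by rolling-window correlations (more elaborate treatments use multivariate GARCH models such as DCC); the sketch below computes a rolling correlation between two simulated return series, with the window length and series names chosen arbitrarily.

```python
# Sketch: rolling-window correlation between an emerging-market and a
# developed-market return series, a simple proxy for time-varying correlation.
# Data are simulated; the 60-day window is an arbitrary choice.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
T = 1000
developed = rng.normal(0.0003, 0.01, size=T)
# emerging returns load partly on the developed factor plus idiosyncratic noise
emerging = 0.5 * developed + rng.normal(0.0005, 0.015, size=T)

returns = pd.DataFrame(
    {"developed": developed, "emerging": emerging},
    index=pd.bdate_range("2010-01-01", periods=T),
)
rolling_corr = returns["developed"].rolling(window=60).corr(returns["emerging"])
print(rolling_corr.describe())
```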

Relevance: 100.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
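
The placebo logic referred to in the last sentences can be illustrated roughly as follows: estimate the DID effect for the actually treated group, then re-estimate it assigning the treatment in turn to each control group and compare the actual estimate with the placebo distribution. The data generation, group counts, and effect size below are invented for the example and do not reproduce the authors' inference method.

```python
# Rough sketch of a placebo-style permutation test for DID with one treated
# group and many control groups. Everything here (groups, periods, effect
# size) is simulated for illustration; this is not the paper's actual method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n_groups, n_periods, treated_group, post_start = 20, 10, 0, 5
rows = []
for g in range(n_groups):
    group_effect = rng.normal(scale=1.0)
    for t in range(n_periods):
        y = group_effect + 0.1 * t + rng.normal(scale=0.5)
        if g == treated_group and t >= post_start:
            y += 1.0                                  # true treatment effect
        rows.append({"group": g, "period": t, "y": y})
df = pd.DataFrame(rows)

def did_estimate(data, placebo_group):
    """2x2 DID: post-minus-pre change for the placebo group minus the same
    change averaged over all remaining groups."""
    post = data["period"] >= post_start
    treat = data["group"] == placebo_group
    diff_treated = data.loc[treat & post, "y"].mean() - data.loc[treat & ~post, "y"].mean()
    diff_control = data.loc[~treat & post, "y"].mean() - data.loc[~treat & ~post, "y"].mean()
    return diff_treated - diff_control

actual = did_estimate(df, treated_group)
placebos = [did_estimate(df, g) for g in range(n_groups) if g != treated_group]
p_value = np.mean([abs(p) >= abs(actual) for p in placebos])
print(f"DID estimate: {actual:.2f}, permutation p-value: {p_value:.2f}")
```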