913 results for Relative errors


Relevance: 20.00%

Abstract:

Building Risk-Neutral Densities (RND) from options data can provide market-implied expectations about the future behavior of a financial variable. Market expectations about financial variables may influence macroeconomic policy decisions, and can also be useful for decision making at corporate and financial institutions. This paper uses the Liu et al. (2007) approach to estimate the option-implied risk-neutral densities of the Brazilian Real/US Dollar exchange rate distribution. We then compare the RND with actual exchange rates, on a monthly basis, in order to estimate the relative risk aversion of investors and also obtain a real-world density for the exchange rate. We are the first to calculate relative risk aversion and the option-implied real-world density for an emerging market currency. Our empirical application uses a sample of Brazilian Real/US Dollar options traded at BM&F-Bovespa from 1999 to 2011. The RND is estimated using a mixture of two log-normal distributions, and the real-world density is then obtained by means of the Liu et al. (2007) parametric risk transformations. The relative risk aversion is calculated for the full sample. Our estimated value of the relative risk aversion parameter is around 2.7, which is in line with other articles that have estimated this parameter for the Brazilian economy, such as Araújo (2005) and Issler and Piqueira (2000). Our out-of-sample evaluation results show that the RND has some ability to forecast the Brazilian Real exchange rate; Abe et al. (2007) also found mixed results in their out-of-sample analysis of the forecasting ability of RNDs for exchange rate options. However, when we incorporate risk aversion into the RND in order to obtain a real-world density, the out-of-sample performance improves substantially, with satisfactory results in both the Kolmogorov and Berkowitz tests. We therefore suggest not using the "pure" RND, but rather taking risk aversion into account in order to forecast the Brazilian Real exchange rate.
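As an illustration of the two ingredients described above, the sketch below builds a mixture-of-two-lognormals density and applies a CRRA (power-utility) change of measure, f_P(x) ∝ f_Q(x)·x^γ, which is one standard way to implement this kind of parametric risk transformation; all parameter values are made up for the example and are not estimates from the paper.

```python
import numpy as np
from scipy.stats import lognorm

# Risk-neutral density: mixture of two lognormals (illustrative parameters)
w, mu1, s1, mu2, s2 = 0.6, np.log(2.0), 0.08, np.log(2.2), 0.15

def rnd_pdf(x):
    # scipy's lognorm: shape parameter = sigma, scale = exp(mu)
    return (w * lognorm.pdf(x, s1, scale=np.exp(mu1))
            + (1 - w) * lognorm.pdf(x, s2, scale=np.exp(mu2)))

# Real-world density via a CRRA (power-utility) change of measure with
# relative risk aversion gamma: f_P(x) is proportional to f_Q(x) * x**gamma
gamma = 2.7
x = np.linspace(1.0, 4.0, 4000)     # grid of BRL/USD exchange rates
dx = x[1] - x[0]
q = rnd_pdf(x)
p = q * x**gamma
p /= p.sum() * dx                   # renormalize to integrate to one

print("risk-neutral mean:", (x * q).sum() * dx)
print("real-world mean:  ", (x * p).sum() * dx)
```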

Relevance: 20.00%

Abstract:

Best corporate governance practices published in the primers of the Brazilian Securities and Exchange Commission and the Brazilian Corporate Governance Institute promote board independence as much as possible, as a way to increase the effectiveness of the governance mechanism (Sanzovo, 2010). This paper therefore aims at understanding whether what the managerial literature portrays as self-evident – stricter governance, better performance – can be observed in actual evidence. The question answered is: do companies with a stricter control and monitoring system perform better than others? The method applied in this paper consists of comparing 116 companies with respect to the level of independence between the top management team and the board of directors – measured by four parameters, namely the percentage of independent outsiders on the board, the separation of CEO and chairman, the adoption of contingent compensation, and the percentage of institutional investors in the ownership structure – and their financial return measured in terms of return on assets (ROA) from the latest quarterly earnings release of 2012. From the 534 companies listed on the Stock Exchange of São Paulo (Bovespa), 116 were selected due to their level of corporate governance. The title "Novo Mercado" refers to the superior governance level of companies listed on Bovespa, as they have to follow specific criteria to assure shareholders' protection (BM&F, 2011). Regression analyses were conducted in order to reveal the level of correlation between the selected variables. The results were the following: the correlation between each parameter and ROA was 10.26%; a second regression measured the correlation between the independence level of the top management team vis-à-vis the board of directors – namely, relative CEO power – and ROA, leading to a multiple R of 5.45%. Understanding that the scale is a simplification of reality, the second part of the analysis transforms all four parameters into dummy variables, excluding what could be called an arbitrary scale. The ultimate result of this paper is a multiple R of 28.44%, which implies that the combination of variables is still not enough to translate the complex reality of organizations. Nonetheless, an important finding can be taken from this paper: two variables (the percentage of outside directors and the percentage of institutional investor ownership) are significant in the regression, with p-values lower than 10% and negative coefficients. In other words, contrary to what the literature very often portrays as self-evident – that stricter governance leads to higher performance – this paper provides evidence that strengthening the formal governance structure through outside directors on the board and ownership by institutional investors might actually lead to worse performance. The section on limitations and suggestions for future research presents some reasons why, although supported by a strong theoretical background, this paper faced some challenging methodological assumptions, precluding categorical statements about the relationship between the level of governance – measured by the four selected parameters – and financial return in terms of return on assets.
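A minimal sketch of the kind of cross-sectional regression the paper describes, using statsmodels; the data and the column names (roa, pct_outsiders, ceo_chair_split, contingent_comp, pct_institutional) are hypothetical stand-ins for the four governance parameters and ROA.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-level data: ROA and four governance dummies
df = pd.DataFrame({
    "roa":               [0.05, 0.02, 0.08, -0.01, 0.04, 0.06, 0.03, 0.07],
    "pct_outsiders":     [1, 0, 1, 1, 0, 1, 0, 1],  # majority-independent board
    "ceo_chair_split":   [1, 1, 0, 1, 0, 0, 1, 0],  # CEO is not the chairman
    "contingent_comp":   [0, 1, 1, 0, 1, 0, 0, 1],  # contingent pay adopted
    "pct_institutional": [1, 0, 0, 1, 1, 0, 1, 0],  # high institutional ownership
})

X = sm.add_constant(df.drop(columns="roa"))
model = sm.OLS(df["roa"], X).fit()
print(model.summary())   # inspect multiple R^2, coefficients and p-values
```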

Relevance: 20.00%

Abstract:

To assess the quality of school education, much of educational research is concerned with comparing means or medians of test scores. In this paper, we shift this focus and explore test score data by addressing some often neglected questions. In the case of Brazil, the mean test score in Math for students of the fourth grade declined by approximately 0.2 standard deviations in the late 1990s. But what about changes in the distribution of scores? It is unclear whether the decline was caused by a deterioration in student performance in the upper and/or lower tails of the distribution. To answer this question, we propose the use of the relative distribution method developed by Handcock and Morris (1999). The advantage of this methodology is that it compares two distributions of test score data through a single distribution that synthesizes all the differences between them. Moreover, it is possible to decompose the total difference between two distributions into a level effect (changes in the median) and a shape effect (changes in the shape of the distribution). We find that the decline in average test scores is mainly caused by a worsening in the position of students throughout the entire distribution of scores, and is not specific to any particular quantile.
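A minimal sketch of the relative distribution idea, assuming the standard construction in Handcock and Morris (1999): comparison scores are converted to their grades in the reference distribution, and a median shift of the reference isolates the shape effect; the simulated scores are illustrative only.

```python
import numpy as np

def relative_data(reference, comparison):
    """Grades of the comparison scores in the reference distribution:
    r_i = F_ref(y_i). A flat histogram of r means the distributions match."""
    reference = np.sort(reference)
    return np.searchsorted(reference, comparison, side="right") / len(reference)

rng = np.random.default_rng(0)
ref  = rng.normal(250, 50, 5000)   # e.g. earlier cohort's test scores
comp = rng.normal(240, 50, 5000)   # e.g. later cohort, about 0.2 sd lower

r_total = relative_data(ref, comp)

# Level/shape decomposition: shift the reference by the median difference,
# so any remaining differences are attributable to shape alone.
level_adjusted = ref + (np.median(comp) - np.median(ref))
r_shape = relative_data(level_adjusted, comp)

hist, _ = np.histogram(r_total, bins=10, range=(0, 1), density=True)
print("relative density (total difference):", np.round(hist, 2))
```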

Relevance: 20.00%

Abstract:

This study compares two information sets for forecasting changes in Brazilian GDP: econometric models applied to the univariate GDP series, and the same models augmented with data on the term structure of PRÉ-DI swap interest rates. The goal is to verify, as described in the international literature, whether financial variables can improve the forecasting power for macroeconomic variables, given that these data also embed agents' expectations about how the economic scenario will unfold. Additionally, the same procedure applied to the Brazilian data is applied to United States data, providing the study with a basis of comparison regarding the data, the sample size, and the maturity stage of the respective economies. The study concludes that it was possible to obtain a model in which including the market component yields smaller forecast errors than the purely univariate forecasts; however, the forecasting gains are not large enough to capture the market's anticipation of the economic indicator, as happens in some US cases. The study further shows that, for this exercise and data sample, the univariate forecasts produced similar results even across different econometric forecasting models.
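A minimal sketch of the exercise described above, with simulated series standing in for GDP growth and the PRÉ-DI term spread: an AR(1) model is compared with the same model augmented by the lagged spread, using expanding-window out-of-sample RMSE.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 120
spread = rng.normal(0.0, 1.0, T)      # stand-in for the PRÉ-DI swap term spread
gdp = np.zeros(T)
for t in range(1, T):                 # growth loads on the lagged spread
    gdp[t] = 0.4 * gdp[t - 1] + 0.3 * spread[t - 1] + rng.normal(0.0, 0.5)

def oos_rmse(X, y, split=90):
    """Expanding-window one-step-ahead forecasts and their RMSE."""
    Xc = sm.add_constant(X)
    errs = []
    for t in range(split, len(y)):
        fit = sm.OLS(y[:t], Xc[:t]).fit()
        errs.append(y[t] - fit.predict(Xc[t:t + 1])[0])
    return np.sqrt(np.mean(np.square(errs)))

y = gdp[1:]
X_uni = gdp[:-1].reshape(-1, 1)                    # univariate AR(1)
X_aug = np.column_stack([gdp[:-1], spread[:-1]])   # AR(1) + lagged term spread

print("univariate out-of-sample RMSE:", oos_rmse(X_uni, y))
print("augmented  out-of-sample RMSE:", oos_rmse(X_aug, y))
```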

Relevance: 20.00%

Abstract:

This paper proposes a two-step procedure to back out the conditional alpha of a given stock using high-frequency data. We first estimate the realized factor loadings of the stocks, and then retrieve their conditional alphas by estimating the conditional expectation of their risk-adjusted returns. We start with the underlying continuous-time stochastic process that governs the dynamics of every stock price and then derive the conditions under which we may consistently estimate the daily factor loadings and the resulting conditional alphas. We also contribute empirically to the conditional CAPM literature by examining the main drivers of the conditional alphas of the S&P 100 index constituents from January 2001 to December 2008. In addition, to confirm whether these conditional alphas indeed relate to pricing errors, we assess the performance of both cross-sectional and time-series momentum strategies based on the conditional alpha estimates. The findings are very promising in that these strategies not only seem to perform pretty well both in absolute and relative terms, but also exhibit virtually no systematic exposure to the usual risk factors (namely, market, size, value and momentum portfolios).
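A toy version of the two-step idea, under simplifying assumptions (a single factor, a per-day OLS slope as the realized loading, and a trailing moving average as the conditional expectation of risk-adjusted returns); it is a sketch of the procedure's logic, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_intraday = 250, 78            # e.g. 5-minute bars in a 6.5h session

# Simulated intraday factor (market) and stock returns with time-varying beta
beta_true = 1.0 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, n_days))
mkt = rng.normal(0, 1e-3, (n_days, n_intraday))
stock = beta_true[:, None] * mkt + rng.normal(0, 2e-3, (n_days, n_intraday))

# Step 1: realized factor loading for each day (OLS slope on intraday data)
beta_hat = (mkt * stock).sum(axis=1) / (mkt**2).sum(axis=1)

# Step 2: conditional alpha = conditional expectation of risk-adjusted daily
# returns, here approximated by a trailing 21-day moving average
daily_stock, daily_mkt = stock.sum(axis=1), mkt.sum(axis=1)
risk_adjusted = daily_stock - beta_hat * daily_mkt
window = 21
alpha_hat = np.convolve(risk_adjusted, np.ones(window) / window, mode="valid")
print("latest conditional alpha estimate:", alpha_hat[-1])
```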

Relevance: 20.00%

Abstract:

Outliers are observations that appear inconsistent with the rest of the data. Also called atypical, extreme, or aberrant values, these inconsistencies can be caused by policy changes or economic crises, unexpected cold or heat waves, and measurement or data-entry errors, among other factors. Outliers are not necessarily incorrect values, but when they stem from measurement or data-entry errors they can distort the results of an analysis and lead the researcher to wrong conclusions. The goal of this work is to study and compare different methods for detecting abnormalities in price series of the Consumer Price Index (IPC), computed by the Brazilian Institute of Economics (IBRE) of Fundação Getulio Vargas (FGV). The IPC measures price changes for a fixed basket of goods and services that make up the usual expenses of households with income between 1 and 33 monthly minimum wages, and it is used mainly as a reference index for assessing consumer purchasing power. Besides the method currently used at IBRE by the price analysts, the methods considered in this study are: variations of the IBRE method, the Boxplot method, the SIQR Boxplot method, the Adjusted Boxplot method, the Resistant Fences method, the Quartile and Modified Quartile methods, the Median Absolute Deviation method, and Tukey's algorithm. These methods were applied to data from the municipalities of Rio de Janeiro and São Paulo. To assess the performance of each method, the true extreme values must be known in advance. Therefore, in this work, the analysis assumes that the prices discarded or altered by the analysts during the review process are the true outliers. The IBRE method is highly correlated with the prices altered or discarded by the analysts, so this assumption may influence the results, favoring the IBRE method over the others. Nevertheless, it allows two evaluation measures to be computed. The first is the method's hit rate, the proportion of true outliers detected. The second is the number of false positives produced by the method, that is, how many values had to be flagged for a true outlier to be detected. The higher the hit rate and the lower the number of false positives, the better the method's performance. On this basis, it was possible to rank the methods and identify the best among those analyzed. For the municipality of Rio de Janeiro, some variations of the IBRE method performed as well as or better than the original method, whereas for São Paulo the IBRE method performed best. Future work is expected to test the methods on simulated data or on widely used benchmark datasets, so that the assumption that the prices discarded or altered by the analysts during the review process are the true outliers does not interfere with the results.
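A minimal sketch of three of the detection rules named above (the standard Boxplot, the SIQR Boxplot, and the Median Absolute Deviation method), together with the two evaluation measures used in the study; the price relatives and the "true" outlier labels are invented for the example.

```python
import numpy as np

def boxplot_fences(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def siqr_boxplot(x, k=3.0):
    # Semi-interquartile ranges: asymmetric fences for skewed data
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (x < q1 - k * (q2 - q1)) | (x > q3 + k * (q3 - q2))

def mad_method(x, k=3.0):
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return np.abs(x - med) > k * 1.4826 * mad  # 1.4826: consistency factor

prices = np.array([1.02, 0.99, 1.01, 1.00, 1.45, 0.98, 1.03, 0.55, 1.01])
true_outliers = np.array([0, 0, 0, 0, 1, 0, 0, 1, 0], dtype=bool)

for name, method in [("Boxplot", boxplot_fences),
                     ("SIQR Boxplot", siqr_boxplot), ("MAD", mad_method)]:
    flags = method(prices)
    hit_rate = (flags & true_outliers).sum() / true_outliers.sum()
    false_pos = (flags & ~true_outliers).sum()
    print(f"{name}: hit rate={hit_rate:.0%}, false positives={false_pos}")
```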

Relevance: 20.00%

Abstract:

This paper considers the issue of relative efficiency measurement in the context of the public sector. In particular, we consider the efficiency measurement approach provided by Data Envelopment Analysis (DEA). The application considers the main Brazilian federal universities for the year 1994. Given the large number of inputs and outputs, this paper advances the idea of using factor analysis to explore common dimensions in the data set. This procedure made possible a meaningful application of DEA, which finally provided a set of efficiency scores for the universities considered.
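A minimal sketch of an input-oriented CCR DEA model solved as a linear program with scipy.optimize.linprog; in the paper the input/output columns would be factor scores from the factor analysis step, whereas here they are made-up numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Inputs X (n_dmus x n_inputs) and outputs Y (n_dmus x n_outputs); in the
# paper these columns would be factor scores rather than raw variables.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n = len(X)

def ccr_efficiency(j):
    """Input-oriented CCR envelopment LP for DMU j:
    min theta  s.t.  X'lam <= theta * x_j,  Y'lam >= y_j,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lam]
    A_ub = np.block([
        [-X[j][:, None], X.T],                   # X'lam - theta * x_j <= 0
        [np.zeros((Y.shape[1], 1)), -Y.T],       # -Y'lam <= -y_j
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for j in range(n):
    print(f"DMU {j}: efficiency score = {ccr_efficiency(j):.3f}")
```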

Relevance: 20.00%

Abstract:

This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
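Among the weak-IV-robust procedures the abstract mentions is the Anderson-Rubin test; a minimal homoskedastic version of it is sketched below as a familiar benchmark (this is not the paper's MM1-SU/MM2-SU construction).

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    """Weak-IV-robust AR test of H0: beta = beta0 in y = x*beta + u,
    homoskedastic case; Z is the n x k instrument matrix."""
    n, k = Z.shape
    u0 = y - x * beta0                        # residuals under the null
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u0)
    ar = (u0 @ Pu / k) / ((u0 @ u0 - u0 @ Pu) / (n - k))
    return ar, stats.f.sf(ar, k, n - k)       # statistic and p-value

rng = np.random.default_rng(3)
n, k = 500, 3
Z = rng.normal(size=(n, k))
v = rng.normal(size=n)
x = Z @ np.array([0.2, 0.1, 0.05]) + v        # fairly weak first stage
u = 0.8 * v + rng.normal(size=n)              # endogeneity via corr(u, v)
y = 1.0 * x + u

stat, pval = anderson_rubin(y, x, Z, beta0=1.0)
print(f"AR statistic = {stat:.2f}, p-value = {pval:.3f}")
```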

Relevance: 20.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification on the test statistic that provided a better heteroskedasticity correction in our simulations.
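A toy simulation of the mechanism described above, assuming each group × time cell equals a common group-time shock plus the mean of n_g individual errors, so that Var(cell) = sigma_nu^2 + sigma_eps^2 / n_g; placebo DID estimates then come out much noisier when the "treated" group is small, which is what breaks inference methods that treat all groups symmetrically.

```python
import numpy as np

rng = np.random.default_rng(4)
n_groups, n_periods, reps = 50, 2, 2000
group_sizes = rng.integers(20, 2000, n_groups)     # very unequal group sizes

# Group x time cells: common shock plus the mean of n_g individual errors,
# so Var(cell) = sigma_nu**2 + sigma_eps**2 / n_g (small groups are noisier).
sigma_nu, sigma_eps = 0.05, 1.0
cells = (sigma_nu * rng.normal(size=(reps, n_groups, n_periods))
         + sigma_eps * rng.normal(size=(reps, n_groups, n_periods))
         / np.sqrt(group_sizes)[None, :, None])

# Placebo DID: pretend one group is "treated" in period 2, all effects are zero
smallest, largest = np.argmin(group_sizes), np.argmax(group_sizes)
for label, g in [("smallest group", smallest), ("largest group", largest)]:
    controls = np.delete(cells, g, axis=1)
    did = (cells[:, g, 1] - cells[:, g, 0]) - (
        controls[:, :, 1].mean(axis=1) - controls[:, :, 0].mean(axis=1))
    print(f"sd of placebo DID estimate, treated = {label}: {did.std():.3f}")
```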

Relevance: 20.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).

Relevance: 20.00%

Abstract:

SOUZA, Anderson A. S.; SANTANA, André M.; BRITTO, Ricardo S.; GONÇALVES, Luiz Marcos G.; MEDEIROS, Adelardo A. D. Representation of Odometry Errors on Occupancy Grids. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.