893 results for Company actual risk premium
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
Abstract:
The valuation of a company as a dynamic system is quite complex; the various valuation models or methods are theoretical approximations and therefore simplifications of reality. These models rely on statistical assumptions or premises that allow such simplification; examples include investor behavior and market efficiency. In the context of an emerging market, this process poses challenges for any valuation method, since the market does not obey the traditional paradigms. Valuation thus becomes even more complex, as investors face greater risks and obstacles. Likewise, as economies globalize and capital becomes more mobile, valuation will take on even more importance in this context. This degree project aims to compile and analyze the different valuation methods, and to identify and apply those recognized as "good practices". The process was carried out for one of Colombia's most important companies, fundamentally considering the emerging-market context, and specifically the oil sector, as criteria for applying the traditional DCF and the practical R&V.
Abstract:
We offer a new explanation of partial risk sharing based on coalition formation and segmentation of society in a risky environment, without assuming limited commitment and imperfect information. Heterogeneous individuals in a society freely choose with whom they will share risk. A partition belonging to the core of the membership game obtains. Perfect risk sharing does not necessarily arise. Focusing on the mutual insurance rule and assuming that individuals only differ with respect to risk, we show that the core partition is homophily-based. The distribution of risk affects the number and size of these coalitions. Individuals may pay a lower risk premium in riskier societies. A higher heterogeneity in risk leads to a lower degree of risk sharing. We discuss how the endogenous partition of society into risk-sharing coalitions may shed light on empirical evidence on partial risk sharing. The case of heterogeneous risk aversion leads to similar results.
Abstract:
Objective: To assess the effectiveness of absolute risk, relative risk, and number needed to harm formats for medicine side effects, with and without the provision of baseline risk information. Methods: A two-factor, risk increase format (relative, absolute, and NNH) x baseline (present/absent) between-participants design was used. A sample of 268 women was given a scenario about the increase in side effect risk with third generation oral contraceptives, and the women were required to answer written questions to assess their understanding, satisfaction, and likelihood of continuing to take the drug. Results: Provision of baseline information significantly improved risk estimates and increased satisfaction, although the estimates were still considerably higher than the actual risk. No differences between presentation formats were observed when baseline information was presented. Without baseline information, absolute risk led to the most accurate performance. Conclusion: The findings support the importance of informing people about the baseline level of risk when describing risk increases. In contrast, they offer no support for using number needed to harm. Practice implications: Health professionals should provide baseline risk information when presenting information about risk increases or decreases. More research is needed before numbers needed to harm (or treat) should be given to members of the general population.
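The relationship between the three formats can be made concrete with a small numerical sketch (the figures below are illustrative, not the actual contraceptive risk data from the study):

```python
# Illustrative (hypothetical) side-effect frequencies, not the study's data.
baseline_risk = 1 / 1000   # risk on the older drug
new_risk = 2 / 1000        # risk on the newer drug

relative_risk = new_risk / baseline_risk          # "the risk doubles"
absolute_increase = new_risk - baseline_risk      # 1 extra case per 1000 women
nnh = 1 / absolute_increase                       # number needed to harm

print(f"relative risk: {relative_risk:.1f}x")
print(f"absolute increase: {absolute_increase:.4f}")
print(f"NNH: {nnh:.0f}")
```

Without the baseline (here 1 per 1000), "the risk doubles" is uninterpretable, which is exactly the point the study makes about providing baseline information.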
Abstract:
In this paper, we examine the temporal stability of the evidence for two commodity futures pricing theories. We investigate whether the forecast power of commodity futures can be attributed to the extent to which they exhibit seasonality, and we also consider whether there are time-varying parameters or structural breaks in these pricing relationships. Compared to previous studies, we find stronger evidence of seasonality in the basis, which supports the theory of storage. The power of the basis to forecast subsequent price changes is also strengthened, while results on the presence of a risk premium are inconclusive. In addition, we show that the forecasting power of commodity futures cannot be attributed to the extent to which they exhibit seasonality. We find that in most cases where structural breaks occur, only changes in the intercepts and not the slopes are detected, illustrating that the forecast power of the basis is stable over different economic environments.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking; that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe in all treatments an increase in risk taking in response to risk premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories like PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful to describe behavior under uncertainty and to explain behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while remaining compatible with present and even future, more complex descriptions of human attitudes towards risk.
Abstract:
We analyze the risk premia embedded in the S&P 500 spot index and option markets. We use a long time-series of spot prices and a large panel of option prices to jointly estimate the diffusive stock risk premium, the price jump risk premium, the diffusive variance risk premium and the variance jump risk premium. The risk premia are statistically and economically significant and move over time. Investigating the economic drivers of the risk premia, we are able to explain up to 63% of these variations.
Abstract:
In this paper we revisit the relationship between the equity and the forward premium puzzles. We construct return-based stochastic discount factors under very mild assumptions and check whether they correctly price the equity and the foreign currency risk premia. We avoid log-linearizations by using moment restrictions associated with Euler equations to test the capacity of our return-based stochastic discount factors to price returns on the relevant assets. Our main finding is that a pricing kernel constructed only using information on American domestic assets accounts for both domestic and international stylized facts that escape consumption-based models. In particular, we fail to reject the null hypothesis that the foreign currency risk premium has zero price when the instrument is the current value of the forward premium itself.
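The moment restriction behind this kind of test, E[m·R] = 1 for gross returns R and a pricing kernel m, can be checked for a candidate kernel in a few lines. This is a minimal sketch with made-up data; the paper's actual SDF construction and moment-based test are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gross returns on one asset over T periods.
T = 1000
R = 1.05 + 0.1 * rng.standard_normal(T)

# A candidate pricing kernel; here a constant chosen so the
# sample moment E[m * R] = 1 holds by construction.
m = np.full(T, 1.0 / R.mean())

# Sample pricing error for the Euler equation E[m * R] - 1 = 0.
pricing_error = np.mean(m * R) - 1.0
print(f"pricing error: {pricing_error:.2e}")
```

In a GMM test one stacks such pricing errors across assets and instruments and forms a quadratic form in them; rejecting the zero-error null means the candidate kernel does not price the assets.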
Abstract:
Most studies that attempt to verify the existence of regulatory risk look mainly at developed countries. Examining regulatory risk in the regulated sectors of emerging markets is no less important for improving and increasing investment in those markets. This thesis comprises three papers on regulatory risk issues. In the first paper, I check whether CAPM betas capture information on regulatory risk by using a two-step procedure: first I run Kalman filter estimates, and then I use these estimated betas as inputs in a random-effects panel data model. I find evidence of regulatory risk in electricity, telecommunications, and all regulated sectors in Brazil. I find further evidence that regulatory changes in the country either do not reduce or even increase the betas of the regulated sectors, going in the opposite direction to the buffering hypothesis proposed by Peltzman (1976). In the second paper, I check whether CAPM alphas say something about regulatory risk. I investigate a methodology similar to those used by some regulatory agencies around the world, such as the Brazilian Electricity Regulatory Agency (ANEEL), that incorporate a specific component of regulatory risk when setting tariffs for regulated sectors. Using SUR estimates, I find negative and significant alphas for all regulated sectors, especially the electricity and telecommunications sectors. This flies in the face of theory, which predicts alphas that are not statistically different from zero. I suspect that the significant alphas are related to misspecifications in the traditional CAPM, which fails to capture true regulatory risk factors. One of the reasons is that the CAPM does not consider factors that are proven to have significant effects on asset pricing, such as the Fama and French size (ME) and price-to-book (ME/BE) factors. In the third paper, I use two additional factors as controls in the estimation of alphas, and the results are similar.
Nevertheless, I find evidence that the negative alphas may result from the regulated sectors' premiums associated with the three Fama and French factors, particularly the market risk premium. When taken together, the ME and ME/BE factors for the regulated sectors diminish the statistical significance of the market factor premiums, especially for the electricity sector. This shows how important the inclusion of these factors is, an inclusion that is unfortunately scarce in emerging markets like Brazil.
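The first paper's filtering step, a time-varying CAPM beta under a random-walk law of motion, reduces to a scalar Kalman filter. A minimal sketch on simulated data (the variances Q and R and all numbers below are illustrative choices, not the thesis's calibration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate: excess stock return r_t = beta_t * m_t + noise,
# with the true beta held fixed at 1.2 for this illustration.
T = 500
true_beta = 1.2
m = 0.02 * rng.standard_normal(T)            # market excess returns
r = true_beta * m + 0.005 * rng.standard_normal(T)

# Random-walk state: beta_t = beta_{t-1} + w_t, w_t ~ N(0, Q).
Q, R = 1e-6, 0.005 ** 2
beta, P = 1.0, 1.0                           # diffuse-ish initial state
betas = np.empty(T)
for t in range(T):
    # Predict step: random walk, so the mean carries over.
    P_pred = P + Q
    # Update with observation r_t = beta_t * m_t + v_t, v_t ~ N(0, R).
    S = m[t] ** 2 * P_pred + R               # innovation variance
    K = P_pred * m[t] / S                    # Kalman gain
    beta = beta + K * (r[t] - beta * m[t])
    P = (1.0 - K * m[t]) * P_pred
    betas[t] = beta

print(f"filtered beta at T: {betas[-1]:.3f}")  # should be near 1.2
```

The thesis then feeds such filtered betas into a random-effects panel model; the sketch covers only the filtering step.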
Abstract:
The concept of covered interest parity suggests that, in the absence of barriers to arbitrage between markets, the interest differential between two assets that are identical in every relevant respect except their currency of denomination, in the absence of exchange-rate risk, should be zero. However, once non-diversifiable risks exist, represented by the country risk inherent to emerging economies, investors will demand an interest rate higher than the simple difference between the domestic and foreign interest rates. This study aims to assess whether adjusting the covered interest parity condition for risk premia is sufficient to validate the no-arbitrage relationship for the Brazilian market over the period from 2007 to 2010. Country risk contaminates all financial assets issued in a given economy and can be described as the sum of the default risk (or sovereign risk) and the convertibility risk perceived by the market. The no-arbitrage equation was estimated using Ordinary Least Squares regressions, time-varying parameters (TVP), and Recursive Least Squares, and the results obtained are not conclusive on the validity of the covered interest parity relationship, even after adjusting for risk premia. Data measurement errors, transaction costs, and interventions and restrictive policies in the foreign exchange market may have contributed to this result.
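The no-arbitrage condition described above can be written as (1 + i_dom) = (F/S)(1 + i_for) + ρ, where ρ is the country risk premium. A toy computation of the deviation (all quotes and rates below are hypothetical):

```python
# Hypothetical quotes: spot and 1-year forward rates (domestic per foreign),
# and 1-year domestic and foreign interest rates.
S = 2.00        # spot exchange rate
F = 2.16        # forward exchange rate
i_for = 0.02    # foreign interest rate
i_dom = 0.12    # domestic interest rate

# Covered-parity-implied domestic rate: invest abroad, fully hedged.
i_implied = (F / S) * (1 + i_for) - 1

# Deviation that, under the study's hypothesis, should be explained
# by the country risk premium (default risk + convertibility risk).
deviation = i_dom - i_implied
print(f"implied rate: {i_implied:.4f}, deviation: {deviation:.4f}")
```

The study's empirical question is whether measured risk premia are large enough to absorb this deviation once transaction costs and capital controls are accounted for.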
Abstract:
The Forward Premium Puzzle (FPP) is the name given to the empirical observation of a negative relation between future changes in spot rates and the forward premium. Modeling this forward bias as a risk premium, and under weak assumptions on the behavior of the pricing kernel, we characterize the potential bias present in the regressions where the FPP is observed, and we identify the necessary and sufficient conditions that the pricing kernel has to satisfy to account for the predictability of exchange rate movements. Next, we estimate the pricing kernel applying two methods: i) one, due to Araújo et al. (2005), that exploits the fact that the pricing kernel is a serial correlation common feature of asset prices, and ii) a traditional principal component analysis used as a procedure to generate a statistical factor model. Then, using in-sample and out-of-sample exercises, we are able to show that the same kernel that explains the Equity Premium Puzzle (EPP) accounts for the FPP in all our data sets. This suggests that the quest for an economic model that generates a pricing kernel which solves the EPP may double its prize by simultaneously accounting for the FPP.
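The puzzle is usually stated via the Fama regression Δs_{t+1} = α + β(f_t − s_t) + ε_{t+1}, where uncovered parity without a risk premium predicts β = 1, while the data deliver β < 0. A sketch on simulated data with the anomalous sign built in by construction (β = −1; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated forward premium and subsequent spot-rate changes,
# with the anomalous negative slope imposed by construction.
T = 2000
fp = 0.01 * rng.standard_normal(T)              # forward premium f_t - s_t
ds = -1.0 * fp + 0.02 * rng.standard_normal(T)  # Delta s_{t+1}

# OLS slope of the Fama regression (highest-degree coefficient first).
beta_hat = np.polyfit(fp, ds, 1)[0]
print(f"estimated slope: {beta_hat:.2f}")       # well below the UIP value of 1
```

A risk-premium explanation has to produce a pricing kernel whose covariance with the exchange rate generates exactly this negative slope, which is the condition the paper characterizes.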
Abstract:
This paper proposes a novel method to calculate tail risks incorporating risk-neutral information without depending on options data. Proceeding via a non-parametric approach, we derive a stochastic discount factor that correctly prices a chosen panel of stock returns. Under the assumption that state probabilities are homogeneous, we back out the risk-neutral distribution and calculate five primitive tail risk measures, all extracted from this risk-neutral probability. The final measure is then set as the first principal component of the preliminary measures. Using six Fama-French size and book-to-market portfolios to calculate our tail risk, we find that it has significant predictive power when forecasting market returns one month ahead, aggregate U.S. consumption and GDP one quarter ahead, and macroeconomic activity indexes. Conditional Fama-MacBeth two-pass cross-sectional regressions reveal that our factor carries a positive risk premium when controlling for traditional factors.
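Collapsing the five primitive tail measures into one factor via the first principal component can be sketched as follows (the measures here are random stand-ins, not the paper's risk-neutral quantities):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in panel: T months x 5 primitive tail risk measures that
# share a common component, mimicking correlated tail measures.
T = 240
common = rng.standard_normal(T)
measures = common[:, None] + 0.3 * rng.standard_normal((T, 5))

# Standardize each measure, then take scores on the first principal
# component via the SVD of the standardized panel.
Z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
tail_risk = Z @ Vt[0]                 # PC1 scores: the final tail risk series

share = s[0] ** 2 / np.sum(s ** 2)    # variance explained by PC1
print(f"PC1 explains {share:.0%} of the variance")
```

Sign conventions differ across PCA implementations, so in practice one fixes the sign of the factor, for instance so that it rises in bad times.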
Abstract:
The goal of this dissertation is to quantify the tail risk premium embedded in hedge funds' returns. Tail risk is the probability of extremely large losses. Although these are rare events, asset pricing theory suggests that investors demand compensation for holding assets sensitive to extreme market downturns. By definition, such events have a small likelihood of being represented in the sample, which poses a challenge to estimating the effects of tail risk by means of traditional approaches such as VaR. The results show that it is not sufficient to account for the tail risk stemming from equity markets. The active portfolio management employed by hedge funds demands a specific measure to estimate and control tail risk. Our proposed factor fills that void inasmuch as it has explanatory power both over the time series and the cross-section of funds' returns.
Abstract:
Mathrix is an e-learning math website that will be launched in March 2016. This master thesis offered a unique chance to interact with experienced supervisors in venture capital and project investment. It could serve as a guideline for entrepreneurs who intend to raise funds. Starting with the company's business plan, the thesis focuses on estimating the company's value and its return on investment using three scenarios and taking into consideration the risks involved.
Abstract:
This paper develops a reduced form three-factor model which includes a liquidity proxy of market conditions which is then used to provide implicit prices. The model prices are then compared with observed market prices of credit default swaps to determine if swap rates adequately reflect market risks. The findings of the analysis illustrate the importance of liquidity in the valuation process. Moreover, market liquidity, a measure of investors' willingness to commit resources in the credit default swap (CDS) market, was also found to improve the valuation of investors' autonomous credit risk. Thus a failure to include a liquidity proxy could underestimate the implied autonomous credit risk. Autonomous credit risk is defined as the fractional credit risk which does not vary with changes in market risk and liquidity conditions.