918 results for Capital Asset Pricing Model
Abstract:
In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, these models are intended to help form efficient portfolios that maximize the expected rate of return for a given level of risk (or, equivalently, minimize risk for a given rate of return). One of the most widely used models for forming such efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values for the parameters of that distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return, its square root defining financial volatility. Furthermore, it is often assumed that the data-generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation we investigate these homogeneity assumptions. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as revealed by differences in the empirical distributions at different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data-generating processes, including the volatility of the rates of return, are quite heterogeneous.
In other words, we provide empirical evidence against the traditional homogeneity assumptions using non-parametric wavelet analysis of trading data. The results show heterogeneity of financial volatility at different time scales, and the time scale turns out to be one of the most important dimensions along which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm rather than the exception.
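The scale heterogeneity at issue can be illustrated without the wavelet machinery. The sketch below is a simplified stand-in for the dissertation's non-parametric wavelet analysis: it uses a hypothetical AR(1) return series (all figures are invented for illustration) to show that once returns are serially dependent, volatility measured directly at a coarser time scale departs from the square-root-of-time rule implied by the i.i.d. assumption.

```python
import numpy as np

def scale_volatility(returns, scale):
    """Std. dev. of returns summed over non-overlapping windows of `scale` periods."""
    n = len(returns) // scale
    agg = returns[: n * scale].reshape(n, scale).sum(axis=1)
    return agg.std(ddof=1)

rng = np.random.default_rng(42)

# Toy AR(1) return series with positive serial dependence (phi = 0.3),
# deliberately violating the i.i.d. assumption.
phi, n = 0.3, 50_000
eps = rng.normal(0.0, 0.01, n)
r = np.empty(n)
r[0] = eps[0]
for t in range(1, n):
    r[t] = phi * r[t - 1] + eps[t]

for h in (1, 5, 20):
    sqrt_rule = scale_volatility(r, 1) * np.sqrt(h)  # what i.i.d. scaling predicts
    measured = scale_volatility(r, h)                # what the data actually show
    print(f"scale {h:2d}: sqrt-rule {sqrt_rule:.4f}  measured {measured:.4f}")
```

Under i.i.d. returns the two columns would agree up to sampling noise; positive serial dependence makes the measured coarse-scale volatility systematically larger, so volatility estimated at different time scales is genuinely heterogeneous.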
Abstract:
This thesis presents a dividend version of the Capital Asset Pricing Model (CAPM). According to the model developed here, there is an equilibrium relationship between dividend yield and systematic risk. This relationship is linear and negative and can be derived in a world with or without taxes. One application of the model is the estimation of the theoretical value of a common share using the net discount rate. Overall, the empirical test indicates an observable agreement between the model's main implications and the facts.
THE COSTS OF RAISING THE EQUITY RATIO FOR BANKS: Evidence from publicly listed banks operating in Finland
Abstract:
The solvency of banks differs from that of other corporations: a bank's equity ratio is lower than in firms in other fields of business. Yet a functioning banking industry has a huge impact on society as a whole. Banks' equity ratios need to be higher, because that makes the banking industry more stable by decreasing the probability of bank failures. If a bank fails, the government compensates the deposits, since it has granted the bank's depositors deposit insurance; in the last resort the payment comes from taxpayers. The economic debate has long concentrated on the costs of raising the equity ratio. It has been a common belief that raising the equity ratio increases banks' funding costs at the same pace, and that these costs are passed on to bank customers as higher service charges. Despite this common belief, the actual reaction of funding costs to a higher equity ratio has been studied only a little in Europe, and no study has been conducted in Finland. Before one can assess whether the greater stability of the banking industry brought about by higher equity levels compensates for the extra funding costs, the actual increase in funding costs must be calculated. Currently the banking industry is governed by complex and heavy regulation, and maintaining such a complex system is itself costly. This research builds on the Modigliani-Miller theorem, which shows that a firm's financing structure is irrelevant to its funding costs. In addition, it follows the calculations of Miles, Yang and Marcheggiano (2012) and Vale (2011), who calculate funding costs after doubling specific banks' equity ratios.
The Finnish banks studied in this research are Nordea and Danske Bank, because they are the two largest banks operating in Finland and both have a company form that enables the calculations. To calculate the costs of halving their leverage, this study used the Capital Asset Pricing Model. Halving Danske Bank's leverage raised its funding costs by 16–257 basis points, depending on the method of assessment; for Nordea the increase was 11–186 basis points when its leverage was halved. On the basis of these results, doubling the equity ratio does not increase a bank's funding costs one-for-one; in fact the increase is quite modest. More solvent banks would greatly increase the stability of the banking industry while the increase in funding costs remains low. If the costs of bank regulation exceed the increase in funding costs from a higher equity ratio, a higher equity requirement can be seen as a better way of stabilizing the banking industry than heavy regulation.
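A rough sketch of the mechanism behind these numbers, with hypothetical inputs rather than the thesis' estimates: under the frictionless Modigliani-Miller benchmark (no taxes, zero debt beta, debt priced at the risk-free rate), halving leverage de-levers the equity beta, and the CAPM-required return on equity falls exactly enough to leave the overall funding cost unchanged.

```python
def capm(rf, beta, mrp):
    """Required return on equity: risk-free rate plus beta times the market risk premium."""
    return rf + beta * mrp

def relever_beta(beta_e, d_over_e_old, d_over_e_new):
    """Re-lever an equity beta to a new debt/equity ratio (Modigliani-Miller, no taxes, zero debt beta)."""
    beta_u = beta_e / (1 + d_over_e_old)  # unlever at the old capital structure
    return beta_u * (1 + d_over_e_new)    # re-lever at the new one

def funding_cost(rf, beta_e, mrp, d_over_e, rd):
    """Weighted average funding cost at a given leverage, with cost of debt rd."""
    e = 1 / (1 + d_over_e)  # equity share of the balance sheet
    return e * capm(rf, beta_e, mrp) + (1 - e) * rd

# Hypothetical inputs, not the thesis' estimates: 2% risk-free rate, 5% market
# premium, debt priced at the risk-free rate (the frictionless benchmark).
rf, mrp, rd = 0.02, 0.05, 0.02
beta_e, d_over_e = 1.2, 19.0   # a bank levered 19:1, i.e. a 5% equity ratio
new_d_over_e = 9.0             # doubling the equity ratio to 10% halves leverage

beta_new = relever_beta(beta_e, d_over_e, new_d_over_e)
before = funding_cost(rf, beta_e, mrp, d_over_e, rd)
after = funding_cost(rf, beta_new, mrp, new_d_over_e, rd)
print(f"equity beta {beta_e} -> {beta_new}, funding cost {before:.4%} -> {after:.4%}")
```

In this benchmark the funding cost is unchanged, because the required equity return falls exactly as the equity cushion grows; the 11–257 basis point increases reported above can be read as measures of how far real bank funding (deposit insurance, taxes, sticky debt pricing) deviates from these assumptions.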
Abstract:
Company valuation models attempt to estimate the value of a company in two stages: (1) a period of explicit analysis, and (2) an unlimited period of cash flows valued through the mathematical device of a perpetuity, the terminal value. In general these models, whether they belong to the Dividend Discount Model (DDM), Discounted Cash Flow (DCF), or Residual Income Model (RIM) group, discount one attribute (dividends, free cash flow, or residual income) at a given discount rate. This discount rate, obtained in most cases from the CAPM (Capital Asset Pricing Model) or APT (Arbitrage Pricing Theory), brings into the analysis the cost of invested capital, based on the riskiness of the attribute. However, one cannot ignore that the second stage typically accounts for 53-80% of company value (Berkman et al., 1998) and is loaded with uncertainty. In this context, particular attention is needed when estimating this portion of the company's value, lest the assessment produce a high level of error. Mindful of this concern, this study collected the perceptions of European and North American financial analysts on the company characteristics they believe contribute most to its value. To this end, a survey with closed-ended questions was used. From a factor analysis of 123 valid responses, the authors conclude that great importance is attached to (1) the life expectancy of the company, (2) liquidity and operating performance, (3) innovation and the ability to allocate resources to R&D, and (4) management capacity and capital structure in determining the long-term value of a company or business. These results support our belief that a valuation model can be formulated whose results are as close as possible to those observed in the stock market.
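The two-stage mechanics described above fit in a few lines. A minimal sketch with hypothetical figures (the cash flows, 9% discount rate, and 2% perpetual growth are invented for illustration):

```python
def terminal_value(fcf_next, wacc, g):
    """Gordon-growth perpetuity: value at the end of the explicit period."""
    assert wacc > g, "the perpetuity requires the discount rate to exceed growth"
    return fcf_next / (wacc - g)

def two_stage_value(fcfs, wacc, g):
    """Stage 1: discount the explicit cash flows. Stage 2: discount the terminal value."""
    explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcfs, start=1))
    tv = terminal_value(fcfs[-1] * (1 + g), wacc, g)
    terminal = tv / (1 + wacc) ** len(fcfs)
    return explicit, terminal

# Hypothetical firm: five years of explicit free cash flow forecasts, then 2%
# perpetual growth, discounted at a CAPM-style 9% cost of capital.
explicit, terminal = two_stage_value([100, 105, 110, 116, 122], wacc=0.09, g=0.02)
total = explicit + terminal
print(f"explicit {explicit:.1f}, terminal {terminal:.1f}, terminal share {terminal / total:.0%}")
```

With these ordinary inputs the terminal value accounts for roughly three quarters of total value, inside the 53-80% range cited from Berkman et al. (1998), which is why errors in the second stage dominate the overall valuation error.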
Abstract:
The paper analyzes a two-period general equilibrium model with individual risk and moral hazard. Each household faces two individual states of nature in the second period. These states differ solely in the household's vector of initial endowments, which is strictly larger in the first state (good state) than in the second state (bad state). In the first period households choose a non-observable action. Higher levels of action give a higher probability of the good state of nature occurring, but lower levels of utility. Households have access to an insurance market that allows transfer of income across states of nature. I consider two models of financial markets: the price-taking behavior model and the nonlinear pricing model. In the price-taking behavior model, suppliers of insurance have a belief about each household's action and take asset prices as given. A variation of standard arguments shows the existence of a rational expectations equilibrium. For a generic set of economies every equilibrium is constrained sub-optimal: there are commodity prices and a reallocation of financial assets satisfying the first-period budget constraint such that, at each household's optimal choice given those prices and asset reallocation, markets clear and every household's welfare improves. In the nonlinear pricing model, suppliers of insurance behave strategically, offering nonlinear pricing contracts to the households. I provide sufficient conditions for the existence of equilibrium and investigate the optimality properties of the model. If there is a single commodity, then every equilibrium is constrained optimal. If there is more than one commodity, then for a generic set of economies every equilibrium is constrained sub-optimal.
Abstract:
Executive Summary. The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation supporting these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measure lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds for the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
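The second-order check described here can be sketched with numpy alone. The two return samples below are hypothetical stand-ins for the realized portfolio returns, and the absolute Lorenz curve is approximated by expected shortfalls (the mean of the worst q-fraction of returns) over a grid of quantiles; curve-wise dominance is then a pointwise comparison.

```python
import numpy as np

def absolute_lorenz(returns, quantiles):
    """Expected shortfall at each quantile q: mean of the worst q-fraction of returns."""
    x = np.sort(returns)
    n = len(x)
    return np.array([x[: max(1, int(np.ceil(q * n)))].mean() for q in quantiles])

def second_order_dominates(a, b, quantiles):
    """True if a's absolute Lorenz curve lies pointwise at or above b's."""
    return bool(np.all(absolute_lorenz(a, quantiles) >= absolute_lorenz(b, quantiles)))

rng = np.random.default_rng(1)
qs = np.linspace(0.01, 1.0, 100)
# Hypothetical samples: the "aggregated" strategy has both a higher mean and a
# lower dispersion than the "single-measure" strategy, so its shortfall curve
# sits above at every quantile.
agg = rng.normal(0.008, 0.02, 2000)
single = rng.normal(0.005, 0.03, 2000)
print(second_order_dominates(agg, single, qs))  # aggregated should dominate here
```

A pointwise comparison like this is exactly the "plot lies above" criterion in the text, made mechanical over a quantile grid.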
Abstract:
The objective of this Master's thesis is to create a calculation model for working capital management in value chains. The study was executed using a literature review and constructive research methods, mainly modeling. The theory in this thesis is founded on research articles and management literature. The model is developed for students and researchers, who can use it for working capital management and for comparing firms to each other; it can also be used for cash management. The model shows who benefits and who suffers most in the value chain, and makes the cash flows of companies and value chains visible. With the model one can see whether the set targets are actually achieved, observe the amount of operational working capital, and simulate that amount under different assumptions. The created model is based on the cash conversion cycle, return on investment, and cash flow forecasting. The model was tested with carefully considered figures that nevertheless seem realistic. The modeled value chain is literally a chain. Implementing this model requires that the user have some understanding of working capital management and some figures from the balance sheet and income statement. By using this model, users can improve their knowledge of working capital management in value chains.
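The cash conversion cycle at the core of such a model can be sketched as follows; the three-firm chain and all figures are invented for illustration, not taken from the thesis.

```python
def cash_conversion_cycle(inventory, receivables, payables, cogs, sales, days=365):
    """CCC = DIO + DSO - DPO, the operational working-capital metric the model builds on."""
    dio = inventory / cogs * days      # days inventory outstanding
    dso = receivables / sales * days   # days sales outstanding
    dpo = payables / cogs * days       # days payables outstanding
    return dio + dso - dpo

# Hypothetical three-firm value chain (balance-sheet and income-statement figures
# are illustrative). Comparing CCCs shows who ties up cash longest in the chain.
chain = {
    "supplier":     dict(inventory=40, receivables=60, payables=30, cogs=300, sales=400),
    "manufacturer": dict(inventory=90, receivables=80, payables=60, cogs=500, sales=700),
    "retailer":     dict(inventory=70, receivables=20, payables=50, cogs=600, sales=800),
}
for firm, f in chain.items():
    print(f"{firm:12s} CCC = {cash_conversion_cycle(**f):6.1f} days")
```

Simulating working capital then amounts to varying the inventory, receivables, and payables inputs and re-reading the cycle, which is the comparison the thesis' model makes across firms in a chain.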
The Colombian multi-fund pension system under the new theories of behavioral finance
Abstract:
In Colombia, almost two decades after the creation of the private-account regime, a reform was implemented that moved from a single-fund system to a multi-fund one. Reforms of this kind have been implemented in several European and Latin American countries. In the light of classical theories, such a reform improves individuals' welfare; however, the literature on the new behavioral theories suggests that individuals do not always make decisions consistent with the assumptions of classical theories. This work studies the Colombian reform under some of the theories of behavioral finance. It finds that even when the affiliate stays in the default option, or acts with loss aversion, he or she will accumulate larger balances in the private accounts than under a single-fund system.
Abstract:
Competitive advantage, development, growth, and durability, among other aims, are what organizations seek through the strategies they define. However, it is not enough to design the goals and objectives to be achieved; these purposes must be translated into action plans that involve every member of the organization, which is accomplished through strategy implementation. In this sense, the implementation stage sets in motion the path established in the strategy-formulation stage and is therefore directly related to its success or failure. This process does not depend on a few members of the organization, whether executives or staff, but on the good synchronization and harmony of all who belong to it. Through a theoretical review and empirical evidence, this research seeks to show the influence on strategy implementation of two key aspects of the organization: on the one hand, leaders, through their interpersonal competencies, and on the other, human capital, through its values. The results show that both the leader's competencies and the values of human capital are decisive for the proper implementation of organizational strategy.
Abstract:
After Modigliani and Miller (1958) presented their capital structure irrelevance proposition, analyses of corporate financing choices involving debt and equity instruments have generally followed two trends in the literature, with models either incorporating informational asymmetries or introducing tax benefits in order to explain optimal capital structure determination (Myers, 2002). Neither of these features is present in this paper, which develops an asset pricing model with the purpose of providing a positive theory of corporate capital structure by replicating the main aspects of standard contractual practice observed in real markets. Instead, the imperfect market structure of the economy is tailored to match what is most common in corporate reality. For instance, default on corporate debt is allowed, with an associated penalty of seizure of the firm's future cash flows by creditors. In this context, a qualitative assessment of financial managers' decisions is carried out through numerical procedures.
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization, owing to demand patterns, engineering design, and the load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity when renewable technologies (such as solar and wind) and electric vehicles are added to the utility demand/supply mix. Using actual data from PJM [PJM 2009], this work shows that consumer load management, real-time price signals, sensors, and intelligent demand/supply control offer a compelling path forward to increase efficient utilization and reduce the carbon footprint of the world's grids. Utilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. With intelligent deployment of smart sensors, smart electric vehicles, and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the utilization of existing utility distribution, substation, transmission, and generation equipment.
Strengthening these technology, society, and consumer relationships requires rapid dissemination of knowledge (real-time prices, costs and benefit sharing, demand-response requirements) in order to incentivize behaviors that increase the effective use of technological equipment representing one of the largest capital assets modern society has created.
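The utilization figures quoted above reduce to simple ratios of average and peak load to installed capacity. A minimal sketch with a hypothetical hourly feeder load profile (all numbers invented for illustration):

```python
import numpy as np

def utilization(load_profile, capacity):
    """Average and peak utilization of a feeder over a load profile (same units as capacity)."""
    load = np.asarray(load_profile, dtype=float)
    return load.mean() / capacity, load.max() / capacity

# Hypothetical daily profile (hourly MW) on a 10 MW feeder: a short evening peak
# drives the equipment sizing while the average load sits far below capacity.
profile = [2, 2, 2, 2, 2, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 9, 8, 6, 4, 3, 2, 2]
avg_u, peak_u = utilization(profile, capacity=10.0)
print(f"average utilization {avg_u:.0%}, peak utilization {peak_u:.0%}")
```

Flattening the evening peak through price signals and load management raises the average-to-peak ratio, which is exactly the under-utilization gap the text describes.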