984 results for Pricing model
Abstract:
In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behaviour on the part of the investor, models are developed that are supposed to help in forming efficient portfolios: portfolios that maximize the expected rate of return for a given level of risk, or equivalently minimize risk for a given expected rate of return. One of the most widely used models for forming such portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In developing this model, it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of that distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return (the square root of the variance is typically used to define financial volatility). Furthermore, it is often assumed that the data generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation we investigate these homogeneity assumptions. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as revealed by differences in the empirical distributions at different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data generating processes, including the volatility of the rates of return, are quite heterogeneous.
In other words, we provide empirical evidence against the traditional homogeneity assumptions using non-parametric wavelet analysis of trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important dimensions along which trading behaviour differs. We conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
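The scale-by-scale view of volatility described above can be sketched in miniature. The Haar transform below is only a toy stand-in for the dissertation's non-parametric wavelet analysis: the function names, the three-level depth, and the sample returns are all illustrative assumptions, not the study's data or method.

```python
# Toy multiscale volatility via a Haar wavelet transform (pure Python).
# Each level's detail coefficients capture fluctuations at one time scale;
# their variance is a scale-specific volatility estimate.

def haar_step(x):
    """One Haar level: averaged (approximation) and differenced (detail) halves."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def scale_volatilities(returns, levels=3):
    """Variance of the detail coefficients at each successive time scale."""
    vols = []
    a = list(returns)
    for _ in range(levels):
        a, d = haar_step(a)
        mean = sum(d) / len(d)
        vols.append(sum((c - mean) ** 2 for c in d) / len(d))
    return vols

# Hypothetical daily returns; a heterogeneous market would show volatilities
# that differ markedly across the entries of `vols`.
returns = [0.01, -0.02, 0.005, 0.0, 0.03, -0.01, 0.002, -0.004,
           0.01, 0.0, -0.03, 0.02, 0.001, -0.001, 0.015, -0.005]
vols = scale_volatilities(returns, levels=3)
```

If the volatilities were homogeneous across scales, the entries of `vols` would be (up to sampling noise) equal; heterogeneity shows up as systematic differences between them.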
Abstract:
This dissertation extends the empirical industrial organization literature with two essays on strategic decisions of firms in imperfectly competitive markets and one essay on how inertia in consumer choice can result in significant welfare losses. Using data from the airline industry I study a well-known puzzle in the literature whereby incumbent firms decrease fares when Southwest Airlines emerges as a potential entrant, but is not (yet) competing directly. In the first essay I describe this so-called Southwest Effect and use reduced-form analysis to offer possible explanations for why firms may choose to forgo profits today rather than wait until Southwest operates the route. The analysis suggests that incumbent firms are attempting to signal to Southwest that entry is unprofitable so as to deter its entry. The second essay develops this theme by extending a classic model from the IO literature, limit pricing, to a dynamic setting. Calibrations indicate the price cuts observed in the data can be captured by a dynamic limit pricing model. The third essay looks at another concentrated industry, mobile telecoms, and studies how inertia in choice (be it inattention or switching costs) can lead to consumers being on poorly matched cellphone plans and how a simple policy proposal can have a considerable effect on welfare.
Abstract:
This thesis presents a dividend version of the Capital Asset Pricing Model (CAPM). According to the model developed here, an equilibrium relationship exists between dividend yield and systematic risk. This relationship is linear and negative, and it can be derived in a world with or without taxes. One application of the model is estimating the theoretical value of a common share using the net discount rate. Overall, the empirical test indicates an observable agreement between the model's main implications and the facts.
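The linear, negative equilibrium relation between dividend yield and systematic risk described above can be written as a one-line function. The intercept and slope below are hypothetical placeholders, not the thesis's estimated coefficients.

```python
# Illustrative dividend-CAPM relation: equilibrium dividend yield falls
# linearly in beta. Coefficients a and b are hypothetical.

def dividend_yield(beta, a=0.06, b=0.02):
    """Equilibrium dividend yield as a decreasing linear function of beta."""
    return a - b * beta

y = dividend_yield(1.0)  # yield for a market-beta stock under these toy coefficients
```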
THE COSTS OF RAISING EQUITY RATIO FOR BANKS: Evidence from publicly listed banks operating in Finland
Abstract:
The solvency of banks differs from that of other corporations: a bank's equity ratio is lower than in firms in other fields of business. Yet a functioning banking industry has a huge impact on society as a whole. A bank's equity ratio needs to be higher because that makes the banking industry more stable, as the probability of bank failures decreases. If a bank fails, the government compensates the deposits, since it has granted the bank's depositors deposit insurance; ultimately the payment comes from taxpayers. The economic debate has long concentrated on the costs of raising the equity ratio. It has been a common belief that raising the equity ratio increases banks' funding costs in the same proportion, and that these costs are passed on to bank customers as higher service charges. Despite this belief, the actual reaction of funding costs to a higher equity ratio has been studied only a little in Europe, and no such study has been conducted in Finland. Before one can assess whether the greater stability of the banking industry brought about by higher equity levels compensates for the extra funding costs, the actual increase in funding costs must first be calculated. The banking industry is currently governed by complex and heavy regulation, and maintaining such a system is itself costly. This research builds on the Modigliani-Miller theorem, which shows that a firm's financing structure is irrelevant to its funding costs. In addition, it follows the calculations of Miller, Yang and Marcheggiano (2012) and Vale (2011), who compute funding costs after doubling specific banks' equity ratios.
The Finnish banks studied in this research are Nordea and Danske Bank, because they are the two largest banks operating in Finland and both have a company form that enables the calculations. To calculate the costs of halving their leverage, this study used the Capital Asset Pricing Model. Halving the leverage of Danske Bank raised its funding costs by 16 to 257 basis points, depending on the method of assessment; for Nordea the increase was 11 to 186 basis points. Based on these results, doubling the equity ratio does not increase a bank's funding costs one for one; in fact the increase is quite modest. More solvent banks would increase the stability of the banking industry considerably, while the increase in funding costs is low. If the costs of bank regulation exceed the increase in funding costs from a higher equity ratio, raising equity may be a better way of stabilising the banking industry than heavy regulation.
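The Modigliani-Miller logic invoked above can be sketched numerically: under MM, halving leverage roughly halves the equity beta, so the CAPM cost of equity falls just as the funding mix shifts toward (nominally more expensive) equity, leaving overall funding costs almost unchanged. All rates, betas, and ratios below are hypothetical, not the study's estimates for Nordea or Danske Bank.

```python
# Stylised MM + CAPM funding-cost calculation (hypothetical inputs).

def wacc(equity_ratio, beta_levered, rf=0.02, mrp=0.05, cost_debt=0.02):
    """Weighted average funding cost with CAPM-priced equity."""
    cost_equity = rf + beta_levered * mrp           # CAPM cost of equity
    return equity_ratio * cost_equity + (1 - equity_ratio) * cost_debt

# Doubling the equity ratio from 5% to 10% halves assets/equity leverage;
# under strict MM the equity beta halves in proportion.
before = wacc(equity_ratio=0.05, beta_levered=1.6)
after = wacc(equity_ratio=0.10, beta_levered=0.8)
```

In this frictionless benchmark the funding cost is unchanged; the modest basis-point increases the study measures reflect real-world deviations from strict MM irrelevance.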
Abstract:
This thesis studies asset price bubbles. It comprises three independent chapters, each of which directly or indirectly analyses the existence or implications of asset price bubbles. The type of bubble assumed in each chapter is consistent with rational expectations; such bubbles are known in the literature as rational bubbles. The three chapters are as follows. Chapter 1: This chapter attempts to explain the recent US housing price bubble by developing a heterogeneous-agent endowment-economy asset pricing model with risky housing, endogenous collateral, and defaults. Investment in housing is subject to idiosyncratic risk, and some mortgages are defaulted on in equilibrium. We analytically derive the leverage, or endogenous loan-to-value ratio, which arises from a limited participation constraint in a one-period mortgage contract with monitoring costs. Our results show that low values of housing investment risk produce a credit-easing effect, encouraging excess leverage and generating credit-driven rational price bubbles in the housing good. Conversely, high values of housing investment risk produce a credit crunch characterized by tight borrowing constraints, low leverage, and low house prices. Furthermore, the leverage ratio is found to be procyclical and the rate of defaults countercyclical, consistent with empirical evidence. Chapter 2: It is widely believed that financial asset prices exhibit considerable persistence and are susceptible to bubbles. However, identifying this persistence and these potential bubbles is not straightforward. This chapter tests for price bubbles in the United States housing market, accounting for long memory and structural breaks. The intuition is that the presence of long memory negates price bubbles, while the presence of breaks could artificially induce bubble behaviour.
Hence, we use semi-parametric (Whittle) and parametric (ARFIMA) procedures, which are consistent under a variety of residual biases, to estimate the long memory parameter, d, of the log rent-price ratio. We find that the semi-parametric procedures, which are robust to non-normality and heteroskedastic errors, identify far more bubble regions than the parametric ones. A structural break was identified in the mean and trend of all the series, and accounting for it removed bubble behaviour in a number of regions. Importantly, the United States housing market shows evidence of rational bubbles at both the aggregate and regional levels. In the third and final chapter, we attempt to answer the following question: to what extent should individuals participate in the stock market and hold risky assets over their life cycle? We answer this question with a life-cycle consumption-portfolio choice model with housing, labour income, and time-varying predictable returns, in which agents face borrowing constraints. We first analytically characterize and then numerically solve for the optimal allocation to the risky asset, comparing the return-predictability case with that of IID returns. We successfully resolve the puzzles and find equity holdings and participation rates close to the data. We also find that return predictability substantially alters both the level of risky portfolio allocation and the rate of stock market participation. A high realization of the factor (the dividend-price ratio) and high persistence of the factor process, indicative of stock market bubbles, raise the amount of wealth invested in risky assets and the level of stock market participation, respectively. Conversely, rare disasters bring down these rates, with the change being most severe for investors in the later years of the life cycle.
Furthermore, investors facing time-varying returns (return predictability) hedge background risks significantly better than those facing IID returns.
Abstract:
Investors value the special attributes of monetary assets (e.g., exchangeability, liquidity, and safety) and pay a premium for holding them in the form of a lower rate of return. The user cost of holding monetary assets can be measured approximately by the difference between the returns on illiquid risky assets and those on safer liquid assets. A more appropriate measure should adjust this difference for the differential risk of the assets in question. We investigate the impact that time non-separable preferences have on the estimation of the risk-adjusted user cost of money. Using U.K. data from 1965Q1 to 2011Q1, we estimate a habit-based asset pricing model with money in the utility function and find that the risk adjustment for risky monetary assets is negligible. Thus, researchers can dispense with risk-adjusting the user cost of money when constructing monetary aggregate indexes.
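The unadjusted user cost described above is simply a return spread; the abstract's finding is that the risk adjustment to this spread is negligible in practice. A minimal sketch with hypothetical rates:

```python
# Unadjusted user cost of a monetary asset: benchmark return on an illiquid
# asset minus the own rate on the liquid monetary asset. Rates are hypothetical.

def user_cost(benchmark_rate, own_rate):
    """Approximate (risk-unadjusted) user cost of holding the monetary asset."""
    return benchmark_rate - own_rate

u = user_cost(benchmark_rate=0.05, own_rate=0.01)  # a 4-percentage-point premium forgone
```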
Abstract:
Master's dissertation in Economics and Business Sciences, Unidade de Ciências Económicas e Empresariais, Univ. do Algarve, 1996
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area to improve health, wellbeing, and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes: the extended scheme schedules the operation of household appliances to minimize a customer's monetary expense under a time-varying pricing model.
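The smart-home extension described above can be illustrated with a deliberately simple greedy rule: place each appliance's run in the cheapest still-available hour of a time-varying tariff. The prices, appliance loads, and the greedy heuristic are all hypothetical simplifications, not the dissertation's actual scheduling scheme.

```python
# Toy appliance scheduling under time-varying prices (hypothetical inputs).

def schedule(appliances, prices):
    """Greedily assign each appliance (kWh load) to the cheapest free hour,
    heaviest load first; returns the hour assignment and total cost."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    plan = {}
    for name, load in sorted(appliances.items(), key=lambda kv: -kv[1]):
        plan[name] = hours.pop(0)         # cheapest hour still free
    cost = sum(appliances[n] * prices[h] for n, h in plan.items())
    return plan, cost

plan, cost = schedule({"washer": 1.0, "dryer": 2.0},
                      prices=[0.30, 0.10, 0.20, 0.15])
```

The heaviest load lands in the cheapest hour; a real scheme would also respect deadlines, appliance run-lengths, and stochastic constraints.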
Abstract:
Company valuation models estimate the value of a company in two stages: (1) a period of explicit analysis, and (2) an unlimited period of cash flows valued through the mathematical device of a perpetuity, which gives the terminal value. In general these models, whether they belong to the Dividend Discount Model (DDM), Discounted Cash Flow (DCF), or Residual Income Model (RIM) group, discount one attribute (dividends, free cash flow, or earnings) at a given discount rate. This discount rate, obtained in most cases via the CAPM (capital asset pricing model) or the APT (arbitrage pricing theory), allows the analysis to incorporate the cost of invested capital based on the riskiness of the attributes. However, one cannot ignore that the second stage of valuation typically represents 53-80% of the company's value (Berkman et al., 1998) and is loaded with uncertainty. In this context, particular attention must be paid to estimating the value of this portion of the company, lest the assessment produce a high level of error. Mindful of this concern, this study collected the perceptions of European and North American financial analysts on the key features of a company that they believe contribute most to its value, using a survey with closed answers. From a factor analysis of 123 valid responses, the authors conclude that great importance is attached (1) to the life expectancy of the company, (2) to liquidity and operating performance, (3) to innovation and the ability to allocate resources to R&D, and (4) to management capacity and capital structure in determining the long-term value of a company or business. These results support our belief that a valuation model can be formulated in which the results obtained are as close as possible to those found in the stock market.
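The two-stage structure described above can be sketched directly: discount an explicit forecast of cash flows, then add a Gordon-growth terminal value. All cash flows and rates below are hypothetical.

```python
# Two-stage valuation sketch: explicit-period PV plus a discounted
# Gordon-growth perpetuity (terminal value). Inputs are hypothetical.

def company_value(cash_flows, r, g):
    """PV of explicit cash flows plus discounted terminal value (r > g)."""
    n = len(cash_flows)
    pv_explicit = sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))
    terminal = cash_flows[-1] * (1 + g) / (r - g)   # value at end of year n
    return pv_explicit + terminal / (1 + r) ** n

v = company_value([100, 110, 120], r=0.10, g=0.02)
```

In this example the discounted terminal value accounts for roughly 80% of the total, consistent with the 53-80% range cited from Berkman et al. (1998).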
Abstract:
This article analyses the capital structure decisions of the company Merck Sharp & Dome S.A.S from the perspective of behavioural finance, comparing the methods currently used by the selected company with traditional finance theory in order to evaluate theoretical versus actual performance. Incorporating behavioural elements into the study allows a deeper examination of corporate decisions in a context closer to current research in behavioural finance, so the analysis focuses on identifying and understanding the overconfidence and status quo biases, and above all their implications for financing decisions. According to traditional theory, the capital structuring process is guided by costs, but this case study shows that in practice the cost-decision relationship comes second, after the risk-decision relationship, in the capital structuring process.
Abstract:
The growth experienced in recent years in both the variety and volume of structured products implies that banks and other financial institutions have become increasingly exposed to model risk. In this article we focus on the model risk associated with the local volatility (LV) model and the Variance Gamma (VG) model. The results show that the LV model performs better than the VG model in its ability to match the market prices of European options. Nevertheless, both models are subject to significant pricing errors when compared with the stochastic volatility framework.
Abstract:
In 2009, the Sheffield Alcohol Research Group (SARG) at Sheffield University developed the Sheffield Alcohol Policy Model version 2.0 (SAPM) to appraise the potential impact of alcohol policies, including different levels of minimum unit pricing (MUP), for the population of England. In 2013, SARG was commissioned by the DHSSPS and the Department for Social Development to adapt the Sheffield Model to Northern Ireland (NI) in order to appraise the potential impact of a range of alcohol pricing policies; the present report presents the results of this work. Estimates from the NI adaptation of the Sheffield Alcohol Policy Model, version 3 (SAPM3), suggest: 1. MUP policies would be effective in reducing alcohol consumption, alcohol-related harms (including alcohol-related deaths, hospitalisations, crimes and workplace absences) and the costs associated with those harms. 2. A ban on below-cost selling (implemented as a ban on selling alcohol for below the cost of duty plus the VAT payable on that duty) would have a negligible impact on alcohol consumption or related harms. 3. A ban on price-based promotions in the off-trade, either alone or in tandem with an MUP policy, would be effective in reducing alcohol consumption, related harms and associated costs. 4. MUP and promotion-ban policies would have only a small impact on moderate drinkers at all levels of income; somewhat larger impacts would be experienced by increasing-risk drinkers, with the most substantial effects experienced by high-risk drinkers. 5. MUP and promotion-ban policies would have larger impacts on those in poverty, particularly high-risk drinkers, than on those not in poverty. However, those in poverty would also experience larger relative gains in health and are estimated to marginally reduce their spending, due to their reduced drinking, under the majority of policies.
Abstract:
Proposed Model for Reference Pricing and Generic Substitution
Abstract:
By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
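The leading term of the decomposition described above is the plain Black-Scholes formula evaluated at a single volatility parameter. A minimal sketch of that term (the correlation and vol-of-vol correction terms are not reproduced; the inputs are illustrative):

```python
import math

# Black-Scholes European call price, the leading term of the decomposition
# when sigma is set to the root-mean-square future average volatility.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100.0, K=100.0, T=1.0, r=0.01, sigma=0.2)
```

In the Heston framework the paper's approximation adds correction terms to this value rather than recomputing it under stochastic volatility.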