946 results for asset pricing tests
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
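For intuition, here is a minimal, hypothetical sketch of the greedy selection loop described above, written for the noiseless EC2-style score with a simple error-tolerant Bayes update. The names (predictions, classes, error_rate) and the scoring details are illustrative assumptions, not the thesis's BROAD implementation, whose EC2 bookkeeping and accelerated greedy machinery are more involved.

```python
import numpy as np

# Hypothetical setup: predictions[h, t] is the outcome (0/1) hypothesis h
# predicts for candidate test t; classes[h] is the model class (equivalence
# class) hypothesis h belongs to; posterior[h] is its current probability.

def _cross_class_weight(w, classes):
    """Total weight of 'edges' linking hypotheses in different classes,
    where each edge's weight is the product of the two hypotheses' masses."""
    total = np.outer(w, w).sum() - (w ** 2).sum()
    same = sum(np.outer(w[classes == c], w[classes == c]).sum()
               - (w[classes == c] ** 2).sum()
               for c in np.unique(classes))
    return 0.5 * (total - same)

def ec2_gain(test, posterior, predictions, classes):
    """Expected weight of edges cut by running `test` (noiseless EC2-style score).
    An observed outcome 'cuts' every edge with an endpoint inconsistent with it."""
    gain = 0.0
    before = _cross_class_weight(posterior, classes)
    for outcome in (0, 1):
        consistent = predictions[:, test] == outcome
        p_outcome = posterior[consistent].sum()
        if p_outcome == 0:
            continue
        after = _cross_class_weight(np.where(consistent, posterior, 0.0), classes)
        gain += p_outcome * (before - after)
    return gain

def choose_next_test(posterior, predictions, classes, remaining_tests):
    """Greedy step: pick the test with the largest expected cut weight."""
    return max(remaining_tests,
               key=lambda t: ec2_gain(t, posterior, predictions, classes))

def update_posterior(posterior, predictions, test, response, error_rate=0.05):
    """Bayes update allowing a small probability that the subject errs."""
    like = np.where(predictions[:, test] == response, 1 - error_rate, error_rate)
    post = posterior * like
    return post / post.sum()
```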
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out, both because it is infeasible in practice and because we find no signatures of it in our data.
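As a concrete illustration of how the competing theories score the same lottery differently, here is a hypothetical sketch using standard textbook functional forms (a CRRA utility over final wealth, and the Tversky-Kahneman value and weighting functions with their 1992 parameter estimates). The parameters and the separable weighting used here are illustrative assumptions, not the specifications estimated in the thesis.

```python
import numpy as np

def expected_value(outcomes, probs):
    """Expected value: probability-weighted average of monetary outcomes."""
    return float(np.dot(probs, outcomes))

def crra_value(outcomes, probs, wealth=100.0, rho=0.5):
    """CRRA expected utility over final wealth (endowment + outcome);
    wealth and rho are illustrative and must keep final wealth positive."""
    w = wealth + np.asarray(outcomes, dtype=float)
    u = np.log(w) if rho == 1 else (w ** (1 - rho) - 1) / (1 - rho)
    return float(np.dot(probs, u))

def prospect_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory style valuation: S-shaped value function with loss
    aversion (lam) and an inverse-S probability weighting function.
    Separable weighting is used for brevity, not the full cumulative form."""
    x = np.asarray(outcomes, dtype=float)
    p = np.asarray(probs, dtype=float)
    v = np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return float(np.dot(w, v))

# Example: a mixed gamble paying +10 or -5 with equal probability.  Each theory
# assigns it a different value, so a well-chosen pair of lotteries can
# discriminate between them.
lottery = ([10.0, -5.0], [0.5, 0.5])
print(expected_value(*lottery), crra_value(*lottery), prospect_value(*lottery))
```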
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
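For reference, the discount functions being compared can be sketched as follows. Parameter values are illustrative, and the quasi-hyperbolic model is written in the familiar beta-delta form, which may differ from the (α, β) parameterization referred to above.

```python
import numpy as np

def exponential(t, delta=0.95):
    """Exponential discounting: constant per-period discount factor."""
    return delta ** np.asarray(t)

def hyperbolic(t, k=0.1):
    """Simple hyperbolic discounting."""
    return 1.0 / (1.0 + k * np.asarray(t))

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """Present bias: full weight on immediate payoffs, an extra penalty
    beta < 1 on anything delayed."""
    t = np.asarray(t)
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, a=1.0, b=2.0):
    """Loewenstein-Prelec generalized hyperbolic discounting."""
    return (1.0 + a * np.asarray(t)) ** (-b / a)
```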
In these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
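The thesis's argument concerns positively dependent subjective time; as a related, standard illustration of how randomness in the effective rate (or in perceived time) produces hyperbolic-shaped discounting, averaging exponential discount functions over a Gamma-distributed rate yields exactly the generalized hyperbolic form.

```latex
% Averaging exponential discounting over a Gamma-distributed rate r
% (shape k = beta/alpha, scale alpha) gives generalized hyperbolic discounting:
\[
  D(t) \;=\; \mathbb{E}\bigl[e^{-rt}\bigr]
       \;=\; \int_0^\infty e^{-rt}\,
             \frac{r^{k-1}e^{-r/\alpha}}{\Gamma(k)\,\alpha^{k}}\,dr
       \;=\; (1+\alpha t)^{-\beta/\alpha},
  \qquad k=\tfrac{\beta}{\alpha}.
\]
```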
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
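A minimal sketch of how loss aversion around a reference price can be embedded in a logit discrete choice model follows. The variable names, reference-price construction and parameter values are hypothetical, not the specification estimated on the retailer's data.

```python
import numpy as np

def reference_dependent_utility(price, reference_price, beta_price=-1.0, lam=2.0):
    """Utility from buying at `price` given a reference price: paying above the
    reference is a loss, weighted lam (> 1) times more heavily than an equally
    sized gain (a discount)."""
    gap = reference_price - price                 # positive gap feels like a gain
    gain_loss = np.where(gap >= 0, gap, lam * gap)
    return beta_price * price + gain_loss

def logit_choice_probabilities(utilities):
    """Multinomial logit probabilities over the items in a consideration set."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())                       # subtract max for numerical stability
    return e / e.sum()

# Example: once a discount ends, the discounted price has become the reference,
# so the now-undiscounted item registers as a loss and close substitutes gain share.
u = reference_dependent_utility(np.array([10.0, 9.5]), reference_price=9.0)
print(logit_choice_probabilities(u))
```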
In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The paper describes an implicit finite difference approach to the pricing of American options on assets with stochastic volatility. A multigrid procedure is described for the fast iterative solution of the discrete linear complementarity problems that result. The accuracy and performance of this approach are improved considerably by a strike-price-related analytic transformation of asset prices and by adaptive time-stepping.
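For reference, the discrete problem solved at each implicit time step has the standard linear complementarity form. This is a generic statement: A and b come from the implicit finite-difference discretization of the pricing operator, and g is the early-exercise payoff vector.

```latex
\[
  A V \ge b, \qquad V \ge g, \qquad (A V - b)^{\top}(V - g) = 0 .
\]
```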
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between asset prices and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many different financial activities apart from the buying and selling of shares. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option. Both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined. Numerical tests are used to demonstrate the effectiveness of the algorithm for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. The chapter also examines various computational issues of the Laplace transform method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
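As a sketch of why a Laplace-transform approach distributes naturally (written for the constant-coefficient linear Black-Scholes case, not the chapter's full algorithm): transforming in time-to-maturity turns the PDE into a family of independent ODEs in the asset price, one per transform parameter, which can be solved in parallel before numerical inversion recovers the option value.

```latex
% Laplace transform in time-to-maturity tau of the Black--Scholes PDE,
% using L{V_tau} = lambda*Vhat - V(S,0):
\[
  \tfrac{1}{2}\sigma^{2}S^{2}\,\hat V''(S;\lambda)
  + rS\,\hat V'(S;\lambda) - (r+\lambda)\,\hat V(S;\lambda)
  = -\,V(S,0),
\]
% one independent boundary-value problem for each inversion abscissa lambda_k.
```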
Outperformance in exchange-traded fund pricing deviations: Generalized control of data snooping bias
Abstract:
An investigation into exchange-traded fund (ETF) outperformance during the period 2008-2012 is undertaken using a data set of 288 U.S.-traded securities. ETFs are tested for net asset value (NAV) premium, underlying index and market benchmark outperformance, with Sharpe, Treynor, and Sortino ratios employed as risk-adjusted performance measures. A key contribution is the application of an innovative generalized stepdown procedure for controlling data snooping bias. We find that a large proportion of optimized-replication and debt asset class ETFs display risk-adjusted premiums, with energy and precious metals focused funds outperforming the S&P 500 market benchmark.
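A minimal sketch of the three risk-adjusted performance measures on per-period return series (annualization and the estimation of beta are ignored here; names and defaults are illustrative):

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """Mean excess return over the risk-free rate, divided by its standard deviation."""
    ex = np.asarray(returns, dtype=float) - rf
    return ex.mean() / ex.std(ddof=1)

def treynor_ratio(returns, beta, rf=0.0):
    """Mean excess return per unit of systematic risk (beta estimated separately)."""
    return (np.mean(returns) - rf) / beta

def sortino_ratio(returns, target=0.0):
    """Mean return above a target, divided by downside deviation below that target."""
    r = np.asarray(returns, dtype=float)
    downside = np.minimum(r - target, 0.0)
    return (r.mean() - target) / np.sqrt(np.mean(downside ** 2))
```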
Abstract:
Natural aquatic systems are frequently subject to the input of toxicants, whether through leaching from agricultural fields or discharges from industrial units. Assessing the potential impact of these contaminants on aquatic systems is very important, because they can have serious consequences for the ecological balance of ecosystems. The effects of sub-lethal levels of these toxicants on aquatic populations are, in many cases, detected only after several generations, depending on the species and the contaminant. Animal behaviour is considered the first line of defence against environmental stimuli and can reflect physiological changes in the organism, making it an excellent indicator of environmental change. The development of early-warning systems that integrate behavioural parameters may help predict possible changes at the level of natural populations more rapidly than standard ecotoxicological tests used for the same purpose. Knowledge about the possible implications of behavioural changes in benthic organisms and in field populations exposed to toxicants is still scarce. With this in mind, this study set out to investigate how the behaviour of Chironomus riparius (measured with a real-time biomonitor) and other endpoints such as growth, adult emergence, bioaccumulation and biomarkers are affected by exposure to imidacloprid and mercury, which were selected as contaminants. The results showed that exposure to sub-lethal concentrations of imidacloprid affects chironomid growth and behaviour, and that these organisms can recover from a short exposure to the insecticide. The ventilation behaviour of C. riparius proved to be a more sensitive endpoint than locomotion and than the biochemical responses when larvae were exposed to imidacloprid. C. riparius larvae exposed to sub-lethal concentrations of mercury showed a tendency towards decreased behavioural activity in tests with increasing concentrations of the toxicant; larval growth was also impaired, and adult emergence rates and development time were delayed. These organisms can rapidly bioaccumulate mercury under non-feeding conditions and show slow depuration of this metal. These effects may ultimately lead to repercussions at the population and community levels. Reductions in behavioural activity, even at low concentrations, can decrease the amount of time spent foraging, producing effects at the morpho-physiological level and thus severely affecting the performance of chironomids in the environment. The use of these behavioural endpoints as a relevant sub-lethal ecotoxicological parameter will increase the versatility of testing, allowing a measurable and quantitative behavioural response at the organism level through a non-destructive assessment, and thereby supporting the use of this approach in future ecotoxicological tests.
Abstract:
This thesis consists of an introductory chapter (essay I) and five further empirical essays on electricity markets and CO2 spot price behaviour, derivatives pricing analysis and hedging. Essay I presents the structure of the thesis, the functioning and characteristics of electricity markets, and the types of products traded, which are analysed in the following essays. In the second essay we conduct an empirical study of co-movements in electricity markets using wavelet analysis, discussing long-term dynamics and market integration. Essay three addresses hedging performance and multiscale relationships in the German electricity spot and futures markets, also using wavelet analysis. We concentrate the investigation on the relationship between coherence evolution and hedge ratio analysis, in a time-frequency-scale approach, between spot and futures prices, which conditions the effectiveness of the hedging strategy. Essays four, five and six are interrelated with one another and with the two previous essays, given the nature of the commodity analysed, CO2 emission allowances, traded in electricity markets. Relationships between electricity prices, primary energy fuel prices and carbon dioxide permits are analysed in essay four. The efficiency of the European market for allowances is examined taking into account market heterogeneity. Essay five analyses stylized statistical properties of the recently traded asset, CO2 emission allowances, for spot and futures returns, and also examines the relation linking convenience yield and risk premium, for the German European Energy Exchange (EEX) between October 2005 and October 2009. The study was conducted through empirical estimation of the CO2 allowances risk premium, the convenience yield, and their relation. Futures prices are examined from an ex-post perspective and show evidence of a significantly negative risk premium, or equivalently a positive forward premium. Finally, essay six analyses the hedging effectiveness of emission allowances futures, providing evidence that utility gains increase with investors' preference towards risk. Deregulation of electricity markets has led to higher uncertainty in electricity prices, and with these essays we try to shed new light on structuring, pricing and hedging in this type of market.
Abstract:
Thesis (Ph.D.)--University of Washington, 2013
Abstract:
The aim of this thesis is to price options on equity index futures with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset. It also captures the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula. They are accurate to four digits. The American option premiums also have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
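For context, the European benchmark referred to is Black's (1976) formula for options on futures, which in standard notation (futures price F, strike K, volatility σ, time to expiry T, risk-free rate r) reads:

```latex
\[
  c = e^{-rT}\bigl[F\,N(d_1) - K\,N(d_2)\bigr], \qquad
  p = e^{-rT}\bigl[K\,N(-d_2) - F\,N(-d_1)\bigr],
\]
\[
  d_1 = \frac{\ln(F/K) + \tfrac{1}{2}\sigma^{2}T}{\sigma\sqrt{T}}, \qquad
  d_2 = d_1 - \sigma\sqrt{T}.
\]
```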
Abstract:
Efficient markets should guarantee the existence of zero spreads for total return swaps. However, real estate markets have recorded values that are significantly different from zero in both directions. Possible explanations might suggest non-rational behaviour by inexperienced market players or unusual features of the underlying asset market. We find that institutional characteristics in the underlying market lead to market inefficiencies and, hence, to the creation of a rational trading window with upper and lower bounds within which transactions do not offer arbitrage opportunities. Given the existence of this rational trading window, we also argue that the observed spreads can substantially be explained by trading imbalances due to the limited liquidity of a newly formed market and/or to the effect of market sentiment, complementing explanations based on the lag between underlying market returns and index returns.
Abstract:
We develop a general model to price VIX futures contracts. The model is adapted to test both the constant elasticity of variance (CEV) and the Cox-Ingersoll-Ross formulations, with and without jumps. Empirical tests on VIX futures prices provide out-of-sample estimates within 2% of the actual futures price for almost all futures maturities. We show that although jumps are present in the data, the models with jumps do not typically outperform the others; in particular, we demonstrate the important benefits of the CEV feature in pricing futures contracts. We conclude by examining errors in the model relative to the characteristics of the VIX.
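Schematically, the nested diffusions being compared can be written as follows; this is an illustrative form for the underlying volatility state, and the paper's exact parameterization may differ.

```latex
\[
  dV_t = \kappa\,(\theta - V_t)\,dt + \sigma\,V_t^{\gamma}\,dW_t \;+\; J_t\,dN_t ,
\]
% gamma = 1/2 recovers the square-root (Cox-Ingersoll-Ross) case, a free gamma
% gives the CEV case, and dropping the compound-Poisson term J_t dN_t gives the
% corresponding no-jump specifications.
```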
Abstract:
This paper investigates the relationship between capital flows, turnover and returns for the UK private real estate market. We examine a number of possible implications of capital flows and turnover for capital returns, testing for evidence of a price pressure effect, 'return chasing' behaviour and information revelation. The main tool of analysis is a panel vector autoregressive (VAR) regression model in which institutional capital flows, turnover and returns are specified as endogenous variables in a two-equation system in which we also control for macro-economic variables. Data on flows, turnover and returns are obtained for the 10 market segments covering the main UK commercial real estate sectors. Our results do not support the widely held belief among practitioners that capital flows have a 'price pressure' effect. Although there is some evidence of return-chasing behaviour, the short timescales involved suggest this finding may be due to delayed recording of flows relative to returns, given the difficulties of market entry. We find a significant positive relationship between lagged turnover and contemporaneous capital returns, suggesting that asset turnover provides pricing information.
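Schematically, such a panel VAR stacks the endogenous variables for each market segment and adds segment effects and macro controls; this is a generic form, and the paper's exact lag length and specification may differ.

```latex
\[
  y_{i,t} \;=\; \mu_i \;+\; \sum_{k=1}^{p} A_k\, y_{i,t-k} \;+\; B\, x_t \;+\; \varepsilon_{i,t},
\]
% y_{i,t}: capital flows, turnover and returns for segment i;
% x_t: macro-economic controls; mu_i: segment fixed effect.
```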
Abstract:
Practical applications of portfolio optimisation tend to proceed on a “top down” basis where funds are allocated first at asset class level (between, say, bonds, cash, equities and real estate) and then, progressively, at sub-class level (within property to sectors, office, retail, industrial for example). While there are organisational benefits from such an approach, it can potentially lead to sub-optimal allocations when compared to a “global” or “side-by-side” optimisation. This will occur where there are correlations between sub-classes across the asset divide that are masked in aggregation – between, for instance, City offices and the performance of financial services stocks. This paper explores such sub-class linkages using UK monthly stock and property data. Exploratory analysis using clustering procedures and factor analysis suggests that property performance and equity performance are distinctive: there is little persuasive evidence of contemporaneous or lagged sub-class linkages. Formal tests of the equivalence of optimised portfolios using top-down and global approaches failed to demonstrate significant differences, whether or not allocations were constrained. While the results may be a function of measurement of market returns, it is those returns that are used to assess fund performance. Accordingly, the treatment of real estate as a distinct asset class with diversification potential seems justified.
Abstract:
This paper considers the effect of short- and long-term interest rates, and interest rate spreads upon real estate index returns in the UK. Using Johansen's vector autoregressive framework, it is found that the real estate index cointegrates with the term spread, but not with the short or long rates themselves. Granger causality tests indicate that movements in short term interest rates and the spread cause movements in the returns series. However, decomposition of the forecast error variances from VAR models indicate that changes in these variables can only explain a small proportion of the overall variability of the returns, and that the effect has fully worked through after two months. The results suggest that these financial variables could potentially be used as leading indicators for real estate markets, with corresponding implications for return predictability.
Abstract:
This work aims to present the theoretical foundations of, and carry out a practical application of, one of the most important findings in finance: the standard capital asset pricing model, the Capital Asset Pricing Model (CAPM). In the practical application, the performance of the returns required by the model was compared with the returns actually obtained. Five stocks were analysed, those with the largest relative weight in the Ibovespa theoretical portfolio and with returns published from June 1998 to May 2001. The data were obtained from Economática at UFRGS and tested using a paired two-sample t-test for means in MS Excel. The results were tabulated and analysed, leading to the conclusion that, statistically, at a 95% confidence level, there was no difference in performance between the expected returns and the returns actually obtained for the assets studied in this dissertation over the period considered.
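For reference, the CAPM relation whose required returns were compared with realized returns is:

```latex
\[
  E[R_i] \;=\; R_f + \beta_i\bigl(E[R_m] - R_f\bigr), \qquad
  \beta_i \;=\; \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)} .
\]
```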
Abstract:
This work proposes alternative ways of consistently estimating an abstract quantity that is crucial to the study of intertemporal decisions, which are central to much of macroeconomics and finance: the stochastic discount factor (SDF). Using the Pricing Equation, we construct a novel consistent estimator of the SDF that exploits the fact that its logarithm is common to all assets in an economy. The resulting estimator is very simple to compute, does not rely on strong economic assumptions, is suitable for testing different preference specifications and for investigating intertemporal-substitution puzzles, and can be used as a basis for constructing an estimator of the risk-free rate. Alternative identification strategies are applied, and a parallel is drawn between them and the strategies of other methodologies. Adding structure to the initial setting, two situations are presented in which the asymptotic distribution can be derived. Finally, the proposed methodologies are applied to US and Brazilian data sets. Preference specifications commonly used in the literature, as well as a class of state-dependent preferences, are tested. The results are particularly interesting for the US economy. Formal tests do not reject preference specifications common in the literature, and estimates of the coefficient of relative risk aversion lie between 1 and 2 and are statistically indistinguishable from 1. In addition, for the class of state-dependent preferences, highly dynamic paths are estimated for this coefficient; the paths are confined to the interval [1.15, 2.05], and the hypothesis of a constant path is rejected.
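The Pricing Equation exploited for identification is the standard asset-pricing moment condition, written here with gross returns; the key point used above is that the stochastic discount factor M_{t+1}, and hence its logarithm, is common to every asset.

```latex
\[
  E_t\bigl[M_{t+1}\,R_{i,t+1}\bigr] \;=\; 1
  \qquad \text{for every asset } i .
\]
```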