919 results for Stock model
Abstract:
The work used a methodology based on Generalized Linear Models (GLM). CPUE was expressed in tonnes per trip duration. The explanatory variables were year, month, hold capacity, latitude, spatial inertia, and distance to the coast. The model had a coefficient of determination of 0.485, explaining almost half of the observed variability in CPUE. The most influential variable in the model was hold capacity (49% of the explained variance), possibly because the anchoveta fleet has a high catch capacity and because pelagic resources tend to hyper-aggregate even when heavily exploited. The correlation between the standardized CPUE and the biomass estimated by a catch-at-age model (r = 0.74) indicates that the GLM-based method is advisable for CPUE standardization. This standardized CPUE is proposed as an alternative for monitoring anchoveta biomass.
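As a rough illustration of the GLM standardization described above, the sketch below fits a log-linear model to trip-level CPUE data with Python's statsmodels; the file name and column names (trips.csv, cpue, hold_capacity, spatial_inertia, coast_distance) are hypothetical stand-ins, and the fitted year effects would serve as the standardized index.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trip-level data: CPUE in tonnes per trip-day plus the
# covariates named in the abstract (column names are illustrative only).
df = pd.read_csv("trips.csv")

# Log-linear Gaussian GLM with categorical year and month effects.
fit = smf.glm(
    "np.log(cpue) ~ C(year) + C(month) + hold_capacity"
    " + latitude + spatial_inertia + coast_distance",
    data=df,
).fit()

# The standardized CPUE index is read off the estimated year effects.
print(fit.params.filter(like="C(year)"))
```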
Abstract:
Sustainable fisheries management remains an open problem, and viability theory offers an alternative for determining management policies that guarantee sustainability once the constraints defining the system's sustainable states have been specified. The population dynamics of the Peruvian anchoveta were modelled with a Thomson-Bell age-structured model with discrete catches, coupled with Ricker recruitment, in semestral steps over 1963-1984. A desirable set of sustainable states was also defined, associated with the stock and catch levels that satisfy previously specified ecological, economic, and social constraints. From this, two objects were computed: the set of stock states for which there exists a sequence of catches keeping the stock in a sustainable state (the viability kernel), and a family of sets of viable catches, i.e., all catch levels that can be applied to each stock state such that the stock remains within the viability kernel, that is, stays in a sustainable state. A sufficient condition was found for a non-empty viability kernel: the social quota (the minimum catch needed to keep the fishery operating) must be below a landing of 915,800 t per semester. Comparing the historical catch series with the catches obtained from viability theory for 1963-1984 shows overfishing from late 1968 onwards, which led to the collapse of the fishery during the 1972-1973 El Niño. From the viability results, five management strategies (E1-E5) were defined for the Peruvian anchoveta; the mean viable precautionary strategy (E5) could have prevented the collapse of the anchoveta fishery while maintaining acceptable catch levels, whereas the ICES precautionary strategy (E2) did not ensure the sustainability of the stock during El Niño periods. In addition, a one-year closure after the collapse would have been necessary for the stock to return to the viability kernel, enabling sustainable management thereafter. Viability theory, through the viability kernel and the associated viable catches, proved to be a useful tool for designing management strategies that ensure the sustainability of fishery resources.
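A minimal sketch, under made-up parameter values, of the semestral age-structured update described above (Ricker recruitment coupled with natural survival and a discrete catch removal); the age range, mortality, and stock-recruitment constants below are illustrative, not the study's estimates.

```python
import numpy as np

A = 6                     # age classes (semesters), illustrative
M = 0.4                   # natural mortality per semester, illustrative
alpha, beta = 8.0, 1e-4   # Ricker parameters, illustrative

def step(n, catch_frac):
    """One semestral update: Ricker recruitment, natural survival,
    and a discrete fractional catch removal."""
    ssb = n[2:].sum()                             # spawning stock, ages 2+
    recruits = alpha * ssb * np.exp(-beta * ssb)  # Ricker stock-recruitment
    survivors = n * np.exp(-M) * (1.0 - catch_frac)
    n_next = np.empty_like(n)
    n_next[0] = recruits
    n_next[1:] = survivors[:-1]
    n_next[-1] += survivors[-1]                   # plus group
    return n_next

n = np.full(A, 1e3)       # abundance-at-age, arbitrary units
for _ in range(10):
    n = step(n, catch_frac=0.2)
print(n)
```

A viability analysis would then search over such trajectories for the stock states from which some catch sequence keeps both stock and catch within the stated constraints.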
Abstract:
This thesis investigates the pricing of liquidity risks on the London Stock Exchange. The Liquidity-Adjusted Capital Asset Pricing Model (LCAPM) developed by Acharya and Pedersen (2005) is applied to test the influence of various liquidity risks on stock returns on the London Stock Exchange; the model provides a unified framework for testing liquidity risks. All common stocks listed and delisted over the period 2000 to 2014 are included in the data sample. The study incorporates three different measures of liquidity, the Percent Quoted Spread, Amihud (2002), and Turnover, reflecting the multi-dimensional nature of liquidity. A firm fixed-effects panel regression is applied to estimate the LCAPM, and the results are robust to Fama-MacBeth regressions. The results indicate that liquidity risks in the form of (i) the level of liquidity, (ii) commonality in liquidity, (iii) flight to liquidity, and (iv) the depressed wealth effect, as well as market return and aggregate liquidity risk, are priced on the London Stock Exchange. However, the results are sensitive to the choice of liquidity measure.
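Of the three liquidity proxies mentioned, the Amihud (2002) ratio is the easiest to reproduce; a minimal sketch, assuming daily price and share-volume series:

```python
import pandas as pd

def amihud_illiquidity(prices: pd.Series, volume: pd.Series) -> float:
    """Amihud (2002) illiquidity: average of |daily return| divided by
    currency trading volume; higher values indicate a less liquid stock."""
    abs_ret = prices.pct_change().abs()
    currency_vol = prices * volume
    return (abs_ret / currency_vol).dropna().mean()
```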
Abstract:
The aim of this thesis is to price options on equity index futures, with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset and captures both the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula and are accurate to four digits. The American option premiums show a similar level of accuracy relative to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for a deterministic, seasonally varying dividend yield. In pricing futures options, we find that what matters is the sum of the dividend yields over the life of the futures contract, not their distribution.
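The thesis benchmarks its stochastic-dynamic-programming prices against binomial trees; as a point of reference, here is a standard backward-induction sketch for an American put on a CRR lattice with a continuous dividend yield (this is the benchmark method, not the thesis's own algorithm):

```python
import numpy as np

def american_put_crr(S0, K, r, q, sigma, T, n):
    """American put by backward induction on a CRR binomial lattice,
    with continuous dividend yield q."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal payoffs.
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)
    # Step backwards, checking early exercise at every node.
    for i in range(n - 1, -1, -1):
        S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]), K - S)
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.05, q=0.02, sigma=0.2, T=1.0, n=500))
```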
Abstract:
This paper assesses the empirical performance of an intertemporal option pricing model with latent variables which generalizes the Hull-White stochastic volatility formula. Using this generalized formula in an ad-hoc fashion to extract two implicit parameters and forecast next-day S&P 500 option prices, we obtain pricing errors similar to those produced by implied volatility alone, as in the Hull-White case. When we specialize this model to an equilibrium recursive-utility model, we show through simulations that option prices are more informative than stock prices about the structural parameters of the model. We also show that a simple method of moments applied to a panel of option prices provides good estimates of the model's parameters. This lays the groundwork for an empirical assessment of this equilibrium model with S&P 500 option prices in terms of pricing errors.
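For intuition, the original Hull-White (1987) result prices an option, when volatility risk is unpriced and uncorrelated with the stock, as the Black-Scholes price averaged over the distribution of integrated variance; a minimal Monte Carlo sketch of that idea (with a crude Euler variance path and illustrative parameters, not the paper's generalized formula):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, T, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def hull_white_call(S, K, r, T, v0, kappa, theta, xi,
                    n_paths=20000, n_steps=100, seed=0):
    """Average Black-Scholes prices over simulated integrated-variance
    paths (zero price-volatility correlation; crude Euler scheme)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = np.full(n_paths, v0)
    integrated = np.zeros(n_paths)
    for _ in range(n_steps):
        integrated += v * dt
        v = np.abs(v + kappa * (theta - v) * dt
                   + xi * np.sqrt(v * dt) * rng.standard_normal(n_paths))
    return bs_call(S, K, r, T, np.sqrt(integrated / T)).mean()

print(hull_white_call(S=100, K=100, r=0.03, T=0.5,
                      v0=0.04, kappa=2.0, theta=0.04, xi=0.3))
```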
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, over the 1976-1992 period. We also test a conditional APT model by using the difference between the 30-day rate (CDB) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from a total of 25 securities traded on the Brazilian markets. The inclusion of this second factor proves to be crucial for the appropriate pricing of the portfolios.
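A compact sketch of the estimation idea, under simplifying assumptions: an unconditional linear-SDF version of the two-factor model estimated by GMM with an identity weighting matrix (the paper's conditional version additionally scales the moments by instruments; here R would hold the size-portfolio excess returns and F the market and rate-spread factors):

```python
import numpy as np
from scipy.optimize import minimize

def gmm_two_factor(R, F):
    """GMM for a linear SDF m_t = 1 - b'f_t: choose b so that the
    pricing errors E[m_t * R_t] are as close to zero as possible.

    R -- (T, N) excess returns of the size portfolios
    F -- (T, 2) factors: market return and the 30-day/overnight spread
    """
    def g(b):
        m = 1.0 - F @ b                       # SDF realizations
        return (R * m[:, None]).mean(axis=0)  # sample pricing errors
    res = minimize(lambda b: g(b) @ g(b), x0=np.zeros(F.shape[1]))
    return res.x
```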
Abstract:
Financial assets are often modelled by stochastic differential equations (SDEs). These equations can describe the behaviour of the asset and, sometimes, of certain model parameters as well. For example, the Heston (1993) model, which belongs to the family of stochastic volatility models, describes the behaviour of the asset and of its variance. The Heston model is very attractive because it admits semi-analytical formulas for certain derivatives, along with a degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. They aim to accelerate the well-known algorithm of Broadie and Kaya (2006); to do so, we use, among other tools, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method to speed it up: instead of using the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm improves on the first: instead of the true density of the integrated variance, we use the approximation of Smith (2007), which reduces the dimension of the characteristic equation and speeds up the algorithm. Our last algorithm is not based on an MCMC method, but it still targets the second step of the Broadie and Kaya (2006) method. To this end, we use a gamma random variable whose moments are matched to those of the true time-integrated variance. Following Stewart et al. (2007), a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) can be approximated by a single gamma random variable.
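A minimal sketch of the moment-matching idea behind the third algorithm: draw the integrated variance from a gamma distribution whose first two moments match the true conditional ones (in the actual algorithm the conditional mean and variance would come from the Broadie-Kaya characterization; the numbers below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_matched(mean, var, size=1):
    """Gamma draw whose first two moments match the target mean/variance:
    shape * scale = mean and shape * scale**2 = var."""
    shape = mean**2 / var
    scale = var / mean
    return rng.gamma(shape, scale, size)

# Placeholder conditional moments of the time-integrated variance.
print(gamma_matched(mean=0.02, var=5e-5, size=5))
```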
Abstract:
This study focuses on identifying return-generating factors and the extent of their influence on share prices. The outcome will be a tool for investment analysis in the hands of investors, portfolio managers, and mutual funds, who are most concerned with changing share prices. Since the study takes into account the influence of macroeconomic variables on variations in share returns, the government can use the outcome to frame suitable long-term policies, helping to nurture a healthy economy and, in turn, the stock market. As every company's management tries to maximize shareholder wealth, a clear idea of the return-generating variables and their influence will help management frame policies to maximize the wealth of the shareholders.
Abstract:
Stock markets employ specialized traders, market-makers, to provide liquidity and volume to the market by standing ready both to buy and to sell. In this paper, we demonstrate a novel method for modeling the market as a dynamic system, together with a reinforcement learning algorithm that learns profitable market-making strategies when run on this model. We model the order flow, the sequence of buys and sells for a particular stock, as an Input-Output Hidden Markov Model fit to historical data. Combined with the dynamics of the order book, this creates a highly non-linear and difficult dynamic system. Our reinforcement learning algorithm, based on likelihood ratios, is run on this partially observable environment. We demonstrate learning results for two separate real stocks.
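The abstract does not detail the algorithm, but "based on likelihood ratios" suggests a REINFORCE-style policy-gradient update; a generic sketch of that family, for a softmax policy over discrete quoting actions (all names and shapes here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reinforce_update(theta, features, actions, rewards, lr=0.01):
    """One likelihood-ratio (REINFORCE) gradient step for a softmax policy.

    theta    -- (n_actions, n_features) policy parameters
    features -- per-step observation feature vectors for one episode
    actions  -- actions taken (integer indices)
    rewards  -- per-step rewards (e.g. mark-to-market P&L)
    """
    returns = np.cumsum(np.asarray(rewards)[::-1])[::-1]  # reward-to-go
    grad = np.zeros_like(theta)
    for phi, a, g in zip(features, actions, returns):
        probs = softmax(theta @ phi)
        grad[a] += g * phi                    # d log pi / d theta_a
        grad -= g * np.outer(probs, phi)      # softmax normalization term
    return theta + lr * grad
```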
Abstract:
Financial integration has been pursued aggressively across the globe in the last fifty years; however, there is no conclusive evidence on the diversification gains (or losses) of such efforts. These gains (or losses) are related to the degree of comovement and synchronization among increasingly integrated global markets. We quantify the degree of comovement within the integrated Latin American market (MILA), using dynamic correlation models to measure comovements across securities as well as a direct integration measure. Our results show an increase in comovement at the level of country indexes; however, the upward trend in correlation predates the institutional efforts to establish an integrated market in the region. On the other hand, when we look at sector indexes and the integration measure, we find a decrease in comovement among a representative sample of securities from the integrated market.
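As a simple stand-in for the dynamic correlation models mentioned (the paper likely uses a DCC-type specification, which this is not), here is an exponentially weighted moving correlation between two demeaned return series:

```python
import numpy as np

def ewma_correlation(x, y, lam=0.94):
    """RiskMetrics-style dynamic correlation between two demeaned
    return series; lam is the decay factor."""
    vx, vy = np.var(x), np.var(y)
    cxy = np.cov(x, y)[0, 1]
    rho = np.empty(len(x))
    for t in range(len(x)):
        vx = lam * vx + (1 - lam) * x[t] ** 2
        vy = lam * vy + (1 - lam) * y[t] ** 2
        cxy = lam * cxy + (1 - lam) * x[t] * y[t]
        rho[t] = cxy / np.sqrt(vx * vy)
    return rho
```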
Abstract:
Using UK equity index data, this paper considers the impact of news on time-varying measures of beta, the usual measure of undiversifiable risk. The empirical model implies that beta depends on news about the market and news about the sector. The asymmetric response of beta to news about the market is consistent across all sectors considered. Recent research is divided as to whether abnormalities in equity returns arise from changes in expected returns in an efficient market or from over-reactions to new information. The evidence suggests that such abnormalities may be due to changes in expected returns caused by time-variation and asymmetry in beta.
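One simple way to exhibit an asymmetric beta response to market news, offered here only as a crude stand-in for the (likely multivariate GARCH-based) measures such studies use, is a regression with a negative-news interaction:

```python
import numpy as np
import statsmodels.api as sm

def asymmetric_beta(sector, market):
    """Fit r_sector = a + b*r_m + c*(r_m * 1[r_m < 0]) + e: beta is b
    after good market news and b + c after bad market news."""
    neg = market * (market < 0)
    X = sm.add_constant(np.column_stack([market, neg]))
    res = sm.OLS(sector, X).fit()
    a, b, c = res.params
    return b, b + c
```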
Abstract:
If stock and stock index futures markets are functioning properly, price movements in these markets should be best described by a first-order vector error correction model whose error correction term is the price differential between the two markets (the basis). Recent evidence suggests, however, that more dynamics are present than there should be in effectively functioning markets. Using self-exciting threshold autoregressive (SETAR) models, this study analyses whether such dynamics can be related to different regimes within which the basis can fluctuate in a predictable manner without triggering arbitrage. The findings reveal that the basis shows strong evidence of autoregressive behaviour when its value lies between the two thresholds, but the extra dynamics disappear once the basis moves above the upper threshold, and their persistence is reduced, although not eradicated, once the basis moves below the lower threshold. This suggests that once the nonlinearity associated with transactions costs is accounted for, stock and stock index futures markets function more effectively than linear models of the pricing relationship suggest.
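A minimal sketch of fitting the two-threshold SETAR model described above, by grid search over candidate thresholds with an AR(1) per regime (the threshold grids and the minimum regime-occupancy fraction are illustrative choices):

```python
import numpy as np
from itertools import product

def setar_fit(basis, lo_grid, hi_grid, min_frac=0.1):
    """Two-threshold SETAR(1): pick the (lower, upper) pair that
    minimizes the pooled sum of squared residuals of regime-wise AR(1)s."""
    y, x = basis[1:], basis[:-1]
    best_ssr, best_thresholds = np.inf, None
    for lo, hi in product(lo_grid, hi_grid):
        if lo >= hi:
            continue
        ssr, ok = 0.0, True
        for mask in (x < lo, (x >= lo) & (x <= hi), x > hi):
            if mask.mean() < min_frac:   # require enough data per regime
                ok = False
                break
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            ssr += ((y[mask] - X @ coef) ** 2).sum()
        if ok and ssr < best_ssr:
            best_ssr, best_thresholds = ssr, (lo, hi)
    return best_thresholds
```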
Abstract:
The UK has a target of an 80% reduction in CO2 emissions by 2050 from a 1990 base, and domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. Bottom-up models can project technology diffusion thanks to their higher resolution, but existing bottom-up models are weak at capturing individual green-technology buying behaviour. Consequently, Markov chains, neural networks, and agent-based modelling are proposed as possible methods for incorporating buying behaviour within a domestic energy forecast model. Of the three, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, demonstrating the feasibility of the approach and showing that it is a promising means of predicting the effectiveness of various policy measures.
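A toy version of the prototype idea, offered only as a sketch of how an agent-based buying-behaviour model might look (a Granovetter-style threshold model; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_adoption(n_agents=1000, n_years=20, subsidy=0.0):
    """Each year an agent adopts the green technology when the adoption
    pressure (baseline + subsidy + social influence from the current
    share of adopters) exceeds its private willingness threshold."""
    willingness = rng.uniform(0, 1, n_agents)   # heterogeneous thresholds
    adopted = np.zeros(n_agents, dtype=bool)
    shares = []
    for _ in range(n_years):
        pressure = 0.05 + subsidy + 0.3 * adopted.mean()
        adopted |= (~adopted) & (pressure > willingness)
        shares.append(adopted.mean())
    return shares

print(simulate_adoption(subsidy=0.1)[-1])   # final adoption share
```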
Abstract:
In the absence of market frictions, the cost-of-carry model of stock index futures pricing predicts that returns on the underlying stock index and the associated stock index futures contract will be perfectly contemporaneously correlated. Evidence suggests, however, that this prediction is violated, with clear signs that the stock index futures market leads the stock market. It is argued that traditional tests, which assume that the underlying data-generating process is constant, may be prone to overstating the lead-lag relationship. Using a new test for lead-lag relationships based on cross correlations and cross bicorrelations, we find that, contrary to results from the traditional methodology, periods where the futures market leads the cash market are few and far between, and when any lead-lag relationship is detected, it does not last long. Overall, the results are consistent with the prediction of the standard cost-of-carry model and with market efficiency.
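The two statistics underlying the test are easy to state; a minimal sketch on standardized (zero-mean, unit-variance) return series, in the spirit of cross-bicorrelation tests such as Brooks and Hinich's:

```python
import numpy as np

def cross_corr(x, y, lag):
    """Sample cross-correlation corr(x_t, y_{t+lag})."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

def cross_bicorr(x, y, r, s):
    """Sample cross-bicorrelation (1/n) * sum_t x_t * x_{t+r} * y_{t+s},
    for standardized series and non-negative lags r, s."""
    n = len(x) - max(r, s)
    t = np.arange(n)
    return np.mean(x[t] * x[t + r] * y[t + s])
```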