777 results for Volatility of volatility
Abstract:
This work analyzes electricity prices in the Spanish day-ahead market. It presents a study of the Spanish electricity market and its components, together with an analysis of the most significant regulatory changes during the sample period. The sample is then characterized through the main descriptive statistics. Next, a general model is presented and examined in light of the main empirical results obtained from its estimation. Finally, the model is adjusted to obtain a simplified version better suited to the goal of the work, namely analyzing the evolution of electricity prices. The fitting results show that hourly prices depend mostly on the prices of the preceding hours, and that the model captures very well the monthly and hourly seasonality present in the sample. On the other hand, features of the price series such as volatility and jumps are not well captured by the model, which motivates the search for alternative models.
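The abstract does not state the exact specification of the simplified model, so the sketch below is only an illustration of the kind of autoregressive regression with hourly and monthly seasonal dummies it describes; the lag orders, variable names, and use of pandas/statsmodels are assumptions, not the thesis's implementation.

# Illustrative sketch only: an autoregressive model of hourly prices with
# hourly and monthly seasonal dummies, in the spirit of the specification
# the abstract describes. Lag orders and variable names are assumptions.
import pandas as pd
import statsmodels.api as sm

def fit_hourly_price_model(prices):
    """prices: hourly electricity prices as a pandas Series with a DatetimeIndex."""
    df = pd.DataFrame({"price": prices})
    # Lagged prices (previous hours), which the abstract says dominate.
    for lag in (1, 2, 24):
        df[f"lag_{lag}"] = df["price"].shift(lag)
    # Hourly and monthly seasonality via dummy variables.
    hour_dummies = pd.get_dummies(df.index.hour, prefix="h", drop_first=True, dtype=float)
    hour_dummies.index = df.index
    month_dummies = pd.get_dummies(df.index.month, prefix="m", drop_first=True, dtype=float)
    month_dummies.index = df.index
    lags = df[[c for c in df.columns if c.startswith("lag_")]]
    X = pd.concat([lags, hour_dummies, month_dummies], axis=1).dropna()
    y = df.loc[X.index, "price"]
    return sm.OLS(y, sm.add_constant(X)).fit()

In such a regression the fitted coefficients on the lag terms summarize the dependence on preceding hours, while the dummy coefficients summarize the hourly and monthly seasonality discussed above.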
Abstract:
Trace volatile organic compounds emitted into the atmosphere by biogenic and anthropogenic sources can undergo extensive photooxidation to form species with lower volatility. By equilibrium partitioning or reactive uptake, these compounds can nucleate into new aerosol particles or deposit onto already-existing particles to form secondary organic aerosol (SOA). SOA and other atmospheric particulate matter have measurable effects on global climate and public health, making an understanding of SOA formation a necessary field of scientific inquiry. SOA formation can be studied in a laboratory setting using an environmental chamber; under these controlled conditions it is possible to generate SOA from a single parent compound and study the chemical composition of the gas and particle phases. By studying the SOA composition, it is possible to gain insight into the chemical reactions that occur in the gas and particle phases and to identify potential heterogeneous processes that occur at the surface of SOA particles. In this thesis, mass spectrometric methods are used to identify qualitatively and quantitatively the chemical components of SOA derived from the photooxidation of important anthropogenic volatile organic compounds associated with gasoline and diesel fuels and industrial activity (C12 alkanes, toluene, and o-, m-, and p-cresols). The conditions under which SOA was generated in each system were varied to explore the effect of NOx and inorganic seed composition on SOA chemical composition. The structure of the parent alkane was varied to investigate the effect on the functionalization and fragmentation of the resulting oxidation products. Relative humidity was also varied in the alkane system to measure the effect of increased particle-phase water on condensed-phase reactions. In all systems, oligomeric species, potentially resulting from particle-phase and heterogeneous processes, were identified. Imines produced by reactions between the (NH4)2SO4 seed and carbonyl compounds were identified in all systems. Multigenerational photochemistry producing low- and extremely low-volatility organic compounds (LVOC and ELVOC) was also strongly reflected in the particle-phase composition.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often display aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
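A worked formula may make the second special case concrete. The chapter's notation is not given in the abstract, so the symbols below (prior \(\mu\), observed event \(E\), past action \(a\) with state-contingent utility \(u(a,s)\), dissonance weight \(\gamma\)) are assumptions used purely for illustration:

\[
  \mu_{a}(\cdot \mid E) \;=\; (1-\gamma)\,\mu(\cdot \mid E) \;+\; \gamma\,\nu^{*},
  \qquad
  \nu^{*} \in \arg\max_{\nu \in \Delta(E)} \mathbb{E}_{\nu}\!\bigl[u(a,s)\bigr],
\]

where \(\mu(\cdot \mid E)\) is the Bayesian posterior, \(\Delta(E)\) is the set of beliefs concentrated on \(E\), and a larger \(\gamma\) corresponds to greater sensitivity to dissonance.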
The second chapter characterizes a decision maker with sticky beliefs, that is, a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
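A one-line sketch of the representation described above, in notation that is mine rather than the chapter's (prior \(\mu\), observed event \(E\), stickiness parameter \(\lambda\)):

\[
  \mu_{\text{updated}}(\cdot) \;=\; \lambda\,\mu(\cdot) \;+\; (1-\lambda)\,\mu(\cdot \mid E),
  \qquad \lambda \in [0,1],
\]

where \(\lambda = 0\) recovers standard Bayesian updating and a larger \(\lambda\) corresponds to stickier (more conservative) beliefs.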
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
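A stylized version of this threshold rule, with notation that is assumed rather than taken from the chapter (set of priors \(C\), observed event \(E\), threshold \(\alpha \in [0,1]\)), may help fix ideas:

\[
  C_{\alpha}(E) \;=\; \Bigl\{\, p \in C \;:\; p(E) \,\ge\, \alpha \max_{q \in C} q(E) \,\Bigr\},
  \qquad
  C \mid E \;=\; \bigl\{\, p(\cdot \mid E) \;:\; p \in C_{\alpha}(E) \,\bigr\}.
\]

Under this reading, letting \(\alpha\) approach 0 retains every prior that assigns positive probability to \(E\) (generalized Bayesian updating), while \(\alpha = 1\) retains only the priors that maximize the likelihood of \(E\) (maximum likelihood updating).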
Abstract:
As the corporate world has grown, so have the difficulties of faithfully representing the many facets of an entity. This includes its obligations, which are not restricted to payments predetermined in contracts or governed by laws or other instruments. Business dynamics thus create various liabilities for the entity, which often lacks appropriate instruments for their recognition, measurement, and disclosure. In this context, several bodies seek to create standards that make disclosed information uniform and guarantee minimum standards for it. The Comissão de Valores Mobiliários (CVM) did so by publishing Deliberação CVM n 489 of 2005, dealing with provisions, liabilities, contingent liabilities, and contingent assets. In doing so, beyond standardizing the topic domestically, the CVM sought alignment with the International Accounting Standards, since it based its deliberation on the standard issued by the IASB in 1998, IAS 37. The high tax burden and the volatility of our tax system contribute to an environment of uncertainty and litigation between the tax authorities and taxpayers, leading to disputes over significant amounts that can compromise the financial health of the entity. For this reason, the standard published by the CVM is a useful and indispensable tool for uniform and transparent disclosure by companies. This study therefore seeks to contribute to solving the problem of contingency disclosure by reviewing the relevant literature, analyzing the financial statements of four significant companies, and suggesting points of improvement in those financial statements.
Abstract:
This dissertation applies maximum-entropy regularization to the inverse problem of option pricing, as suggested by the work of Neri and Schneider in 2012. They observed that the probability density that solves this problem, in the case of data coming from call options and digital options, can be described as exponentials on the different intervals of the positive half-line, with the intervals bounded by the strike prices. The maximum-entropy criterion is a powerful tool for regularizing this ill-posed problem. The exponential family of the solution set is computed using the Newton-Raphson algorithm, with specific bounds for the digital options; these bounds follow from the no-arbitrage principle. The methodology was applied to data on the São Paulo stock exchange index (Bolsa de Valores de São Paulo) and its call option prices at different strikes. A parametric analysis of the entropy as a function of the prices of synthetic digital options (constructed from bounds respecting the absence of arbitrage) revealed values at which the digitals maximized the entropy. The example using IBOVESPA data extracted on January 24, 2013, showed a deviation from the no-arbitrage principle for in-the-money call options. This principle is a necessary condition for applying maximum-entropy regularization in order to obtain the density and the prices. Our results showed that, once the no-arbitrage convexity condition is satisfied, it is possible to obtain a smile shape in the volatility curve, with prices computed from the model's exponential density, making the model consistent with market data. From a computational standpoint, this dissertation implemented a pricing model based on the maximum-entropy principle. Three classical algorithms were used: first, standard bisection, and then a combination of the bisection method with Newton-Raphson to find the implied volatility from market data; next, the one-dimensional Newton-Raphson method to compute the coefficients of the exponential densities, which is the objective of the study; finally, Simpson's rule was used to compute the integrals of the cumulative distributions as well as the model prices obtained via the mathematical expectation.
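Of the three classical routines mentioned above, the first (bisection combined with Newton-Raphson for implied volatility) is simple enough to sketch. The sketch below assumes Black-Scholes call pricing and arbitrary tolerances; it illustrates the general technique, not the dissertation's actual code.

# Minimal sketch: implied volatility from a call price via bisection
# refined by Newton-Raphson. The Black-Scholes formula and the tolerances
# below are assumptions, not the dissertation's implementation.
from math import log, sqrt, exp, erf, pi

def _norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * _norm_cdf(d1) - K * exp(-r * T) * _norm_cdf(d2)

def bs_vega(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection to bracket the root (call price is increasing in sigma)...
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
    sigma = 0.5 * (lo + hi)
    # ...then Newton-Raphson steps using the vega to polish the root.
    for _ in range(20):
        diff = bs_call(S, K, T, r, sigma) - price
        v = bs_vega(S, K, T, r, sigma)
        if abs(diff) < tol or v < 1e-12:
            break
        sigma -= diff / v
    return sigma

For example, implied_vol(price=3.2, S=100.0, K=100.0, T=0.25, r=0.05) first brackets the volatility by bisection on [lo, hi] and then polishes the root with a few Newton-Raphson steps.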
Abstract:
We define a copula process which describes the dependencies between arbitrarily many random variables independently of their marginal distributions. As an example, we develop a stochastic volatility model, Gaussian Copula Process Volatility (GCPV), to predict the latent standard deviations of a sequence of random variables. To make predictions we use Bayesian inference, with the Laplace approximation, and with Markov chain Monte Carlo as an alternative. We find both methods comparable. We also find our model can outperform GARCH on simulated and financial data. And unlike GARCH, GCPV can easily handle missing data, incorporate covariates other than time, and model a rich class of covariance structures.
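The abstract does not spell out the model's equations; a stylized form of the latent-volatility construction it describes, in notation that is mine rather than the paper's, is:

\[
  f \sim \mathcal{GP}(0, k), \qquad
  \sigma(t) = g\bigl(f(t)\bigr), \qquad
  y_t \mid \sigma(t) \sim \mathcal{N}\bigl(0, \sigma(t)^{2}\bigr),
\]

where \(k\) is a covariance (kernel) function and \(g\) is a positive warping function mapping the latent Gaussian process to standard deviations; inference over \(f\) given the observations \(y_t\) is what the Laplace approximation and Markov chain Monte Carlo schemes mentioned above are used for.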
Abstract:
As many industrial organizations have learned to apply roadmapping successfully, they have also learned that it is "roadmapping" rather than "the roadmap" that generates value. This two-part special report has focused primarily on product and technology roadmapping in industry. The first part (RTM, March-April 2003, pp. 26-59) examined the workings of the process at Lucent Technologies, Rockwell Automation, the pharmaceutical/biotechnology industry, and United Kingdom-based Domino Printing Sciences. This second part examines roadmapping in the UK, Motorola, General Motors, the services sector, and in cases that demand major investment decisions under conditions of volatility.
Abstract:
We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the non-parametric flexibility of Gaussian processes. This model accommodates input dependent signal and noise correlations between multiple response variables, input dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both efficient Markov chain Monte Carlo and variational Bayes inference procedures for this model. We apply GPRN as a multiple output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple output (multi-task) Gaussian process models and three multivariate volatility models on benchmark datasets, including a 1000 dimensional gene expression dataset.
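As a rough illustration of how a GPRN couples Gaussian processes through a network structure (the notation below is assumed for exposition and simplifies the published formulation):

\[
  y(x) \;=\; W(x)\,\bigl[f(x) + \sigma_f\,\epsilon\bigr] \;+\; \sigma_y\,z,
  \qquad \epsilon, z \sim \mathcal{N}(0, I),
\]

where each entry of the weight matrix \(W(x)\) and of the latent vector \(f(x)\) is drawn from a Gaussian process. Because the mixing weights \(W(x)\) vary with the input \(x\), both the signal and noise correlations between the outputs become input dependent, which is what makes the framework applicable to multivariate volatility modelling.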
Abstract:
The presence of liquid fuel inside the engine cylinder is believed to be a strong contributor to the high levels of hydrocarbon emissions from spark ignition (SI) engines during the warm-up period. Quantifying and determining the fate of the liquid fuel that enters the cylinder is the first step in understanding the process of emissions formation. This work uses planar laser induced fluorescence (PLIF) to visualize the liquid fuel present in the cylinder. The fluorescing compounds in indolene, and mixtures of iso-octane with dopants of different boiling points (acetone and 3-pentanone), were used to trace the behavior of components of different volatility. Images were taken of three different planes through the engine intersecting the intake valve region. A closed-valve fuel injection strategy was used, as this is the strategy most commonly used in practice. Background subtraction and masking were both performed to reduce the effect of any spurious fluorescence. The images were analyzed on both a time and crank angle (CA) basis, showing the time of maximum liquid fuel present in the cylinder and the effect of engine events on the inflow of liquid fuel. The results show details of the liquid fuel distribution as it enters the engine as a function of crank angle degree, volatility, and location in the cylinder. A semi-quantitative analysis based on the integration of the image intensities provides additional information on the temporal distribution of the liquid fuel flow. © 1998 Society of Automotive Engineers, Inc.
Abstract:
An important first step in spray combustion simulation is an accurate determination of the fuel properties, which affect the modelling of spray formation and reaction. In a practical combustion simulation, the implementation of a multicomponent model is important in capturing the relative volatility of different fuel components. A Discrete Multicomponent (DM) model is deemed to be an appropriate candidate to model a composite fuel like biodiesel, which consists of four components of fatty acid methyl esters (FAME). In this paper, the DM model is compared with the traditional Continuous Thermodynamics (CTM) model for both diesel and biodiesel. The CTM model is formulated based on mixing rules that incorporate the physical and thermophysical properties of pure components into a single continuous surrogate for the composite fuel. The models are implemented within the open-source CFD code OpenFOAM, and a semi-quantitative comparison is made between the predicted spray-combustion characteristics and optical measurements of a swirl-stabilised flame of diesel and biodiesel. The DM model performs better than the CTM model in predicting a higher magnitude of heat release rate in the top flame brush region of the biodiesel flame compared to that of the diesel flame. Using both the DM and CTM models, the simulation successfully reproduces the droplet size, volume flux, and droplet density profiles of diesel and biodiesel. The DM model predicts a longer spray penetration length for biodiesel compared to that of diesel, as seen in the experimental data. Also, the DM model reproduces a segregated biodiesel fuel vapour field and spray in which the most abundant FAME component has the longest vapour penetration. In the biodiesel flame, the relative abundance of each fuel component is found to dominate over the relative volatility in terms of the vapour species distribution, and vice versa in the liquid species distribution. © 2014 Elsevier Ltd. All rights reserved.