942 results for equilibrium asset pricing models with latent variables


Relevance:

100.00%

Publisher:

Abstract:

The flow dynamics of crystal-rich high-viscosity magma is likely to be strongly influenced by viscous and latent heat release. Viscous heating is observed to play an important role in the dynamics of fluids with temperature-dependent viscosities. The growth of microlite crystals and the accompanying release of latent heat should play a similar role in raising fluid temperatures. Earlier models of viscous heating in magmas have shown the potential for unstable (thermal runaway) flow as described by a Gruntfest number, using an Arrhenius temperature dependence for the viscosity, but have not considered crystal growth or latent heating. We present a theoretical model for magma flow in an axisymmetric conduit and consider both heating effects using Finite Element Method techniques. We consider a constant mass flux in a 1-D infinitesimal conduit segment with isothermal and adiabatic boundary conditions and Newtonian and non-Newtonian magma flow properties. We find that the growth of crystals acts to stabilize the flow field and make the magma less likely to experience a thermal runaway. The additional heating influences crystal growth and can counteract supercooling from degassing-induced crystallization and drive the residual melt composition back towards the liquidus temperature. We illustrate the models with results generated using parameters appropriate for the andesite lava dome-forming eruption at Soufriere Hills Volcano, Montserrat. These results emphasize the radial variability of the magma. Both viscous and latent heating effects are shown to be capable of playing a significant role in the eruption dynamics of Soufriere Hills Volcano. Latent heating is a factor in the top two kilometres of the conduit and may be responsible for relatively short-term (days) transients. Viscous heating is less restricted spatially, but because thermal runaway requires periods of hundreds of days to be achieved, the process is likely to be interrupted. Our models show that thermal evolution of the conduit walls could lead to an increase in the effective diameter of flow and an increase in flux at constant magma pressure.
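The two ingredients named in the abstract, an Arrhenius temperature dependence for viscosity and a Gruntfest number controlling thermal runaway, can be sketched numerically. This is a minimal illustration, not the paper's model: the parameter values are hypothetical, and the particular form of the Gruntfest number below (viscous heat generation over conductive loss for pipe flow) is an assumption about the convention used.

```python
import numpy as np

# Hypothetical parameter values, loosely in the range quoted for silicic
# magmas; none of these numbers come from the paper itself.
E = 2.0e5        # viscosity activation energy, J/mol (assumed)
R = 8.314        # gas constant, J/(mol K)
T0 = 1100.0      # reference temperature, K
mu0 = 1.0e6      # viscosity at T0, Pa s (assumed)

def arrhenius_viscosity(T):
    """Arrhenius temperature dependence: mu = mu0 * exp(E/R * (1/T - 1/T0))."""
    return mu0 * np.exp(E / R * (1.0 / T - 1.0 / T0))

def gruntfest_number(gamma_dot, a, k=2.0, T=T0):
    """One common form of the Gruntfest number for conduit (pipe) flow:
    the ratio of viscous heat generation to conductive heat loss,
    Gr = (E / (R T^2)) * mu(T) * gamma_dot^2 * a^2 / k,
    with gamma_dot the strain rate (1/s), a the conduit radius (m),
    and k the thermal conductivity (W/m/K)."""
    return (E / (R * T**2)) * arrhenius_viscosity(T) * gamma_dot**2 * a**2 / k

# Viscosity falls as temperature rises: the feedback behind thermal runaway
print(arrhenius_viscosity(1100.0), arrhenius_viscosity(1150.0))
```

Large Gruntfest numbers mark flows where viscous self-heating outpaces conduction, the runaway regime the abstract says crystal growth acts to stabilize.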

Relevance:

100.00%

Publisher:

Abstract:

A flux-difference splitting method is presented for the inviscid terms of the compressible flow equations for gases in chemical non-equilibrium.
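The abstract concerns flux-difference splitting for the full chemically reacting compressible equations; as a minimal scalar analogue, the same idea can be shown on the inviscid Burgers equation, where the Roe-type interface flux is a central average minus an upwind dissipation scaled by the Roe wave speed. This is an illustrative sketch, not the paper's scheme.

```python
import numpy as np

def roe_flux(uL, uR):
    """Roe-type flux-difference splitting for Burgers' flux f(u) = u^2/2.
    The interface flux is the central average minus an upwind dissipation
    term scaled by the Roe-averaged wave speed a = (f_R - f_L)/(u_R - u_L)."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    # For Burgers, the Roe average reduces to the arithmetic mean of the states
    a = 0.5 * (uL + uR)
    return 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)

def step(u, dx, dt):
    """One explicit finite-volume update on a periodic grid."""
    F = roe_flux(u, np.roll(u, -1))        # flux at the right face of each cell
    return u - dt / dx * (F - np.roll(F, 1))

# Shock-forming initial data on a periodic grid
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(100):
    u = step(u, dx=1.0 / 200, dt=0.002)
print(u.sum())   # conserved up to round-off on a periodic domain
```

The conservative update guarantees that the cell average sums are preserved exactly, which is the property that makes flux-difference splitting attractive for capturing shocks.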

Relevance:

100.00%

Publisher:

Abstract:

A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
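The evaluation statistics the abstract relies on (root-mean-square error and model bias against observed fluxes) are straightforward to compute; a minimal sketch with made-up flux arrays, purely for illustration:

```python
import numpy as np

def rmse(model, obs):
    """Root-mean-square error between modelled and observed fluxes."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sqrt(np.mean((model - obs) ** 2))

def mean_bias(model, obs):
    """Mean bias error: positive means the model overestimates the flux."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean(model - obs)

# Made-up hourly latent heat fluxes (W m^-2), not the Vancouver observations
obs_qe   = [20.0, 45.0, 80.0, 60.0, 30.0]
model_qe = [25.0, 40.0, 95.0, 70.0, 25.0]
print(rmse(model_qe, obs_qe), mean_bias(model_qe, obs_qe))
```

RMSE penalizes large individual errors, while the mean bias reveals systematic over- or underestimation; the comparison uses both views because a model can have low bias yet high scatter.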

Relevance:

100.00%

Publisher:

Abstract:

Multi-factor approaches to the analysis of real estate returns have, since the pioneering work of Chan, Hendershott and Sanders (1990), emphasised a macro-variables approach in preference to the latent factor approach that formed the original basis of the arbitrage pricing theory. With the increasing use of high-frequency data and trading strategies, and with a growing emphasis on the risks of extreme events, the macro-variable procedure has some deficiencies. This paper explores a third way: an alternative to the standard principal components approach, independent components analysis (ICA). ICA seeks higher-moment independence and maximises in relation to a chosen risk parameter. We apply an ICA based on kurtosis maximisation to weekly US REIT data. The results show that ICA is successful in capturing the kurtosis characteristics of REIT returns, offering possibilities for the development of risk management strategies that are sensitive to extreme events and tail distributions.
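A kurtosis-maximising ICA can be sketched with the classic one-unit fixed-point iteration using a cubic nonlinearity, whose contrast function is (up to constants) the kurtosis of the projection. This is a generic FastICA-style sketch on synthetic data, not the paper's algorithm or its REIT dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Synthetic "returns": one heavy-tailed and one light-tailed independent source
s = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])
A = np.array([[0.8, 0.6], [0.4, 0.9]])   # arbitrary mixing matrix (assumed)
x = A @ s

# Whiten the mixtures: zero mean, identity covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

def excess_kurtosis(y):
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

# One-unit fixed-point iteration with the cubic nonlinearity: each step moves
# w towards a direction of extremal kurtosis on the whitened data
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    w = (z * (w @ z) ** 3).mean(axis=1) - 3.0 * w
    w /= np.linalg.norm(w)

y = w @ z
print(excess_kurtosis(y))   # a strongly non-Gaussian projection is recovered
```

The iteration converges to an independent component rather than a principal component: it is the departure from Gaussianity (here, kurtosis), not variance, that is being maximised, which is exactly why ICA is sensitive to tail behaviour.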

Relevance:

100.00%

Publisher:

Abstract:

We develop a general model to price VIX futures contracts. The model is adapted to test both the constant elasticity of variance (CEV) and the Cox–Ingersoll–Ross formulations, with and without jumps. Empirical tests on VIX futures prices provide out-of-sample estimates within 2% of the actual futures price for almost all futures maturities. We show that although jumps are present in the data, the models with jumps do not typically outperform the others; in particular, we demonstrate the important benefits of the CEV feature in pricing futures contracts. We conclude by examining errors in the model relative to the VIX characteristics.
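A simple special case illustrates the square-root (CIR) branch of such models: if the index itself follows dV = kappa(theta - V)dt + sigma*sqrt(V)dW under the pricing measure, the futures price is the risk-neutral conditional mean of V, which has a closed form. This is a textbook sketch under that assumption, not the paper's general model, and the parameter values are hypothetical.

```python
import math

def cir_futures_price(v0, kappa, theta, tau):
    """Futures price when the underlying index follows a CIR process:
    F(t, T) = E[V_T | V_t = v0] = theta + (v0 - theta) * exp(-kappa * tau),
    with tau = T - t. The volatility-of-volatility parameter does not enter
    the conditional mean, only the higher moments."""
    return theta + (v0 - theta) * math.exp(-kappa * tau)

# Hypothetical parameters for illustration (not estimates from the paper)
v0, kappa, theta = 18.0, 3.0, 22.0
for tau in (0.0, 0.25, 1.0):
    print(tau, cir_futures_price(v0, kappa, theta, tau))
```

The term structure relaxes exponentially from the spot level toward the long-run mean theta at rate kappa, which is why short-maturity futures track the spot while long-maturity futures are anchored near theta.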

Relevance:

100.00%

Publisher:

Abstract:

Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the largescale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the necessary space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
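The random selection of a convective response can be sketched as a compound draw: a minimal sketch assuming, as in the published Plant-Craig formulation, that the number of plumes is Poisson-distributed and individual plume mass fluxes are exponentially distributed, with the ensemble mean constrained to the large-scale value. The numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_mass_flux(mean_total_flux, mean_plume_flux, rng):
    """Draw one realization of the sub-grid convective mass flux:
    plume count ~ Poisson with mean <M>/<m>, each plume flux ~ Exp(<m>).
    Single draws vary around the large-scale value <M>, and the relative
    variability shrinks as the grid box (hence <M>) grows."""
    n = rng.poisson(mean_total_flux / mean_plume_flux)
    return rng.exponential(mean_plume_flux, size=n).sum()

draws = np.array([stochastic_mass_flux(0.1, 0.002, rng) for _ in range(5000)])
print(draws.mean(), draws.std())   # ensemble mean ~ imposed large-scale flux
```

Because only the ensemble mean is constrained, individual gridpoints at individual timesteps see genuinely different convective responses, which is the property that distinguishes the scheme from deterministic closures.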

Relevance:

100.00%

Publisher:

Abstract:

Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate the ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while they underestimate ice cloud occurrence at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version with a diagnostic representation of precipitating snow and mixed-phase ice cloud fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.
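The core of such an evaluation is binning IWC by temperature and comparing the statistics per bin. A minimal sketch with entirely synthetic data, constructed so that mean IWC rises with temperature as the abstract reports; nothing here comes from CloudSat, CALIPSO, or the models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic grid-box mean IWC (g m^-3) and temperatures (deg C), illustration only
temperature = rng.uniform(-70.0, 0.0, size=10000)
# Impose an IWC that increases with temperature on average, with lognormal scatter
iwc = np.exp(0.05 * temperature) * rng.lognormal(mean=-2.0, sigma=1.0, size=10000)

bins = np.arange(-70.0, 1.0, 10.0)            # 10-degree temperature bins
idx = np.digitize(temperature, bins) - 1
mean_iwc = np.array([iwc[idx == k].mean() for k in range(len(bins) - 1)])
print(mean_iwc)   # binned mean IWC rises towards warmer temperatures
```

Applying the same binning to model output and to the satellite retrievals, and comparing the per-bin distributions rather than just the means, is what reveals regime-dependent biases like the ones described above.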

Relevance:

100.00%

Publisher:

Abstract:

This paper compares the performance of artificial neural networks (ANNs) with that of the modified Black model in both pricing and hedging Short Sterling options. Using high frequency data, standard and hybrid ANNs are trained to generate option prices. The hybrid ANN is significantly superior to both the modified Black model and the standard ANN in pricing call and put options. Hedge ratios for hedging Short Sterling options positions using Short Sterling futures are produced using the standard and hybrid ANN pricing models, the modified Black model, and also standard and hybrid ANNs trained directly on the hedge ratios. The performance of hedge ratios from ANNs directly trained on actual hedge ratios is significantly superior to those based on a pricing model, and to the modified Black model.
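The benchmark for options on futures such as Short Sterling is usually the Black (1976) formula; the exact modification used in the paper may differ, so treat this as a generic sketch of the baseline pricer and its futures hedge ratio, with illustrative inputs that are not from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black76(F, K, T, r, sigma, call=True):
    """Black (1976) price of a European option on a futures price F."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    df = math.exp(-r * T)
    if call:
        return df * (F * norm_cdf(d1) - K * norm_cdf(d2))
    return df * (K * norm_cdf(-d2) - F * norm_cdf(-d1))

def delta_hedge_ratio(F, K, T, r, sigma, call=True):
    """Hedge ratio (delta) with respect to the futures price."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    return math.exp(-r * T) * (norm_cdf(d1) if call else norm_cdf(d1) - 1.0)

# Illustrative Short-Sterling-like inputs (assumed, not from the paper)
F, K, T, r, sigma = 94.0, 94.0, 0.25, 0.05, 0.02
c = black76(F, K, T, r, sigma, call=True)
p = black76(F, K, T, r, sigma, call=False)
print(c, p)   # at the money, put-call parity gives c - p = exp(-rT)*(F - K) = 0
```

The ANN approaches in the paper either learn the price surface that this formula approximates, or learn the hedge ratio directly; the direct route avoids compounding pricing errors through differentiation.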

Relevance:

100.00%

Publisher:

Abstract:

We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
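The factor side of this exercise is the classic diffusion-index setup: extract principal components from a standardized macro panel and regress the target on them. A minimal sketch on synthetic data; the Autometrics selection and impulse-indicator saturation steps are not reproduced, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
T, N, k = 200, 40, 2                 # periods, panel variables, factors kept

# Synthetic macro panel driven by two common factors plus idiosyncratic noise
f = rng.normal(size=(T, k))
loadings = rng.normal(size=(k, N))
X = f @ loadings + 0.5 * rng.normal(size=(T, N))
y = f @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=T)   # target series

# Standardize the panel and extract principal components via the SVD
Z = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :k] * S[:k]           # estimated factors (up to rotation and sign)

# Diffusion-index regression: target on a constant and the estimated factors
design = np.column_stack([np.ones(T), factors])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
r2 = 1.0 - ((y - design @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2)   # the extracted factors explain most of the target's variance
```

This contemporaneous fit is the nowcasting case where the abstract finds factors most useful; at longer horizons the factors' predictive content decays, and location shifts make robustifications such as intercept corrections necessary.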

Relevance:

100.00%

Publisher:

Abstract:

We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium-correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also to autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier more quiescent period.
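The central mechanism is easy to demonstrate: after a location shift, a forecast anchored on the historical equilibrium stays biased, while a robust device that conditions on the most recent observation adapts within one period. A minimal simulated sketch, not the paper's models or its GDP data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Series with a location shift: mean 0 before t = 150, mean 5 afterwards
y = rng.normal(scale=0.5, size=200)
y[150:] += 5.0

errors_mean, errors_rw = [], []
for t in range(160, 200):
    forecast_mean = y[:t].mean()     # equilibrium-style forecast: ignores the shift
    forecast_rw = y[t - 1]           # robust device: random-walk forecast
    errors_mean.append(y[t] - forecast_mean)
    errors_rw.append(y[t] - forecast_rw)

bias_mean = abs(np.mean(errors_mean))
bias_rw = abs(np.mean(errors_rw))
print(bias_mean, bias_rw)   # the robust device is far less biased after the shift
```

The trade-off the paper derives analytically shows up even here: the random-walk device removes the post-shift bias at the cost of a larger error variance in quiescent periods, which is why the relative performance differs between the Great Recession and calmer samples.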

Relevance:

100.00%

Publisher:

Abstract:

The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors but these have been shown to have little effect on climate simulations, where spatial and temporal scales of interest are large enough for effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random-overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, with the provision that the associated noise is sufficiently small.
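At the heart of McICA is a stochastic subcolumn generator: each layer of each subcolumn is flagged cloudy or clear consistently with the layer cloud fraction, and each spectral point sees only one subcolumn, which is where the conditional random noise comes from. A minimal sketch assuming simple random overlap between layers (operational schemes typically use maximum-random overlap); the cloud fractions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def generate_subcolumns(cloud_fraction, n_sub, rng):
    """Generate stochastic cloudy/clear subcolumns for a McICA-style scheme,
    assuming random overlap between layers for simplicity.
    Returns a boolean array of shape (n_sub, n_layers)."""
    u = rng.random((n_sub, cloud_fraction.size))
    return u < cloud_fraction          # a layer is cloudy where u falls below cf

cf = np.array([0.1, 0.4, 0.7, 0.2])    # illustrative layer cloud fractions
sub = generate_subcolumns(cf, 10000, rng)
print(sub.mean(axis=0))                 # converges to cf as the sample grows
# McICA pairs ONE subcolumn with each spectral point, so a single column's
# radiative fluxes carry random noise that only averages out over space/time.
```

The noise-reduction methods the article tests amount to controlling how this sampling error projects onto the fluxes, since at NWP scales there is too little averaging to wash it out.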

Relevance:

100.00%

Publisher:

Abstract:

Satellite based top-of-atmosphere (TOA) and surface radiation budget observations are combined with mass corrected vertically integrated atmospheric energy divergence and tendency from reanalysis to infer the regional distribution of the TOA, atmospheric and surface energy budget terms over the globe. Hemispheric contrasts in the energy budget terms are used to determine the radiative and combined sensible and latent heat contributions to the cross-equatorial heat transports in the atmosphere (AHT_EQ) and ocean (OHT_EQ). The contrast in net atmospheric radiation implies an AHT_EQ from the northern hemisphere (NH) to the southern hemisphere (SH) (0.75 PW), while the hemispheric difference in sensible and latent heat implies an AHT_EQ in the opposite direction (0.51 PW), resulting in a net NH to SH AHT_EQ (0.24 PW). At the surface, the hemispheric contrast in the radiative component (0.95 PW) dominates, implying a 0.44 PW SH to NH OHT_EQ. Coupled Model Intercomparison Project Phase 5 (CMIP5) models with excessive net downward surface radiation and surface-to-atmosphere sensible and latent heat transport in the SH relative to the NH exhibit anomalous northward AHT_EQ and overestimate SH tropical precipitation. The hemispheric bias in net surface radiative flux is due to too much longwave surface radiative cooling in the NH tropics in both clear and all-sky conditions and excessive shortwave surface radiation in the SH subtropics and extratropics due to an underestimation in reflection by clouds.
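Taking the abstract's numbers at face value, the net transports follow by simple differencing of the opposing contributions; this is a reading of the quoted values, not a reproduction of the paper's full budget calculation.

```python
# Hemispheric-contrast contributions quoted in the abstract, in petawatts (PW)
radiative_atm = 0.75   # net atmospheric radiation: favours NH -> SH transport
turbulent = 0.51       # sensible + latent heat: favours the opposite direction
radiative_sfc = 0.95   # net surface radiation contrast

aht_eq = radiative_atm - turbulent   # net atmospheric transport, NH -> SH
oht_eq = radiative_sfc - turbulent   # implied oceanic transport, SH -> NH
print(aht_eq, oht_eq)   # ~0.24 PW (NH -> SH) and ~0.44 PW (SH -> NH)
```

The compensation between a southward atmospheric and a northward oceanic transport is the diagnostic the paper uses to flag CMIP5 models with hemispherically biased surface fluxes.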

Relevance:

100.00%

Publisher:

Abstract:

The deterpenation of bergamot essential oil can be performed by liquid-liquid extraction using hydrous ethanol as the solvent. A ternary mixture composed of 1-methyl-4-prop-1-en-2-yl-cyclohexene (limonene), 3,7-dimethylocta-1,6-dien-3-yl-acetate (linalyl acetate), and 3,7-dimethylocta-1,6-dien-3-ol (linalool), three major compounds commonly found in bergamot oil, was used to simulate this essential oil. Liquid-liquid equilibrium data were experimentally determined for systems containing essential oil compounds, ethanol, and water at 298.2 K and are reported in this paper. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were lower than 0.0062 in all systems, indicating the good descriptive quality of the molecular models. To verify the effect of the water mass fraction in the solvent and the linalool mass fraction in the terpene phase on the distribution coefficients of the essential oil compounds, nonlinear regression analyses were performed, obtaining mathematical models with correlation coefficient values higher than 0.99. The results show that as the water content in the solvent phase increased, the kappa value decreased, regardless of the type of compound studied. Conversely, as the linalool content increased, the distribution coefficients of hydrocarbon terpene and ester also increased. However, the linalool distribution coefficient values were negatively affected when the terpene alcohol content increased in the terpene phase.
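The distribution coefficient kappa discussed above is simply the ratio of a component's mass fraction in the solvent (extract) phase to that in the terpene (raffinate) phase. A sketch with hypothetical tie-line compositions, chosen only to mirror the reported trend that kappa falls as the solvent's water content rises; the numbers are not from the paper.

```python
def distribution_coefficient(w_solvent_phase, w_terpene_phase):
    """Distribution coefficient kappa_i = w_i(solvent/extract phase) /
    w_i(terpene/raffinate phase), computed from equilibrium mass fractions."""
    return w_solvent_phase / w_terpene_phase

# Hypothetical limonene mass fractions on two tie lines, illustration only
kappa_low_water = distribution_coefficient(0.020, 0.60)
kappa_high_water = distribution_coefficient(0.008, 0.70)
print(kappa_low_water, kappa_high_water)  # kappa drops as solvent water rises
```

Low kappa for the hydrocarbon terpene relative to the oxygenated compounds is precisely what makes hydrous ethanol selective for deterpenation.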

Relevance:

100.00%

Publisher:

Abstract:

Liquid-liquid equilibrium experimental data for refined sunflower seed oil, artificially acidified with commercial oleic acid or commercial linoleic acid and a solvent (ethanol + water), were determined at 298.2 K. This set of experimental data and the experimental data from Cuevas et al. (1), which were obtained from (283.2 to 333.2) K for degummed sunflower seed oil-containing systems, were correlated using NRTL and UNIQUAC models with temperature-dependent binary parameters. The deviation between experimental and calculated compositions presented average values of (1.13 and 1.41) % for the NRTL and UNIQUAC equations, respectively, indicating that the models were able to correctly describe the behavior of the compounds under different temperatures and solvent hydration levels.
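Papers of this type commonly summarize fit quality with a global percent deviation between experimental and calculated mass fractions over both phases and all tie lines; one widely used form is sketched below. Whether this is the exact definition used here is an assumption, and the compositions in the example are made up.

```python
import numpy as np

def global_deviation(w_exp, w_calc):
    """Global percent deviation between experimental and calculated mass
    fractions over both phases: 100 * sqrt(sum((exp - calc)^2) / (2*N*C)),
    with N tie lines and C components; arrays are shaped (2, N, C)."""
    w_exp, w_calc = np.asarray(w_exp, float), np.asarray(w_calc, float)
    n_phases, N, C = w_exp.shape
    return 100.0 * np.sqrt(((w_exp - w_calc) ** 2).sum() / (n_phases * N * C))

# Toy example: 2 phases, 1 tie line, 3 components (made-up compositions)
w_exp = [[[0.10, 0.60, 0.30]], [[0.80, 0.15, 0.05]]]
w_calc = [[[0.11, 0.59, 0.30]], [[0.79, 0.16, 0.05]]]
print(global_deviation(w_exp, w_calc))
```

Values around 1% like those reported above indicate that the correlated activity-coefficient models reproduce the measured tie lines closely.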

Relevance:

100.00%

Publisher:

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in the Item Response Models (IRM) and they are usually called latent traits. A usual assumption for parameter estimation of the IRM, considering one group of examinees, is to assume that the latent traits are random variables which follow a standard normal distribution. However, many works suggest that this assumption does not apply in many cases. Furthermore, when this assumption does not hold, the parameter estimates tend to be biased and misleading inference can be obtained. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent traits modeling based on the so-called skew-normal distribution; see Genton (2004). We used the centred parameterization, which was proposed by Azzalini (1985). This approach ensures the model identifiability as pointed out by Azevedo et al. (2009b). Also, a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation by using an augmented data approach. A simulation study was performed in order to assess the parameter recovery in the proposed model and the estimation method, and the effect of the asymmetry level of the latent traits distribution on the parameter estimation. Also, a comparison of our approach with other estimation methods (which consider the assumption of symmetric normality for the latent traits distribution) was considered. The results indicated that our proposed algorithm recovers properly all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with other approaches, mainly in the presence of small sample sizes (number of examinees). Furthermore, we analyzed a real data set which presents indication of asymmetry concerning the latent traits distribution. The results obtained by using our approach confirmed the presence of strong negative asymmetry of the latent traits distribution.
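The centred parameterization referred to above replaces the skew-normal's direct parameters (location, scale, shape) with its mean, standard deviation, and skewness, which is what makes the latent-trait model identifiable. A sketch of the direct-to-centred mapping and of sampling via the standard stochastic representation; this illustrates the distribution, not the paper's MHWGS estimation algorithm.

```python
import numpy as np

def direct_to_centred(xi, omega, alpha):
    """Map skew-normal direct parameters (location xi, scale omega, shape
    alpha) to centred parameters (mean, standard deviation, skewness)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    b = np.sqrt(2.0 / np.pi)
    mu = xi + omega * b * delta
    sigma = omega * np.sqrt(1.0 - (b * delta) ** 2)
    gamma1 = (4.0 - np.pi) / 2.0 * (b * delta) ** 3 / (1.0 - (b * delta) ** 2) ** 1.5
    return mu, sigma, gamma1

# Sampling via the stochastic representation: Z = delta*|U0| + sqrt(1-delta^2)*U1
rng = np.random.default_rng(11)
xi, omega, alpha = 0.0, 1.0, 5.0
delta = alpha / np.sqrt(1.0 + alpha**2)
u0, u1 = rng.normal(size=(2, 100000))
x = xi + omega * (delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1)

mu, sigma, gamma1 = direct_to_centred(xi, omega, alpha)
print(mu, x.mean())   # the sample mean matches the centred location parameter
```

When alpha = 0 the mapping returns the usual normal mean and standard deviation with zero skewness, so the standard-normal latent trait assumption is recovered as a special case.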