Abstract:
We study the empirical performance of the classical minimum-variance hedging strategy, comparing several econometric models for estimating hedge ratios of crude oil, gasoline and heating oil crack spreads. Given the great variability and large jumps in both spot and futures prices, considerable care is required when processing the relevant data and accounting for the costs of maintaining and re-balancing the hedge position. We find that the variance reduction produced by all models is statistically and economically indistinguishable from the one-for-one “naïve” hedge. However, minimum-variance hedging models, especially those based on GARCH, generate much greater margin and transaction costs than the naïve hedge. Therefore we encourage hedgers to use a naïve hedging strategy on the crack spread bundles now offered by the exchange; this strategy is the cheapest and easiest to implement. Our conclusion contradicts the majority of the existing literature, which favours the implementation of GARCH-based hedging strategies.
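For reference, the minimum-variance hedge ratio that all of the compared econometric models estimate is the textbook ratio of the conditional spot–futures covariance to the conditional futures variance (a standard statement, not this paper's notation):

```latex
h^{*}_{t} \;=\; \frac{\operatorname{Cov}_{t}(\Delta S_{t+1}, \Delta F_{t+1})}{\operatorname{Var}_{t}(\Delta F_{t+1})}
```

The naïve hedge simply fixes h = 1, while GARCH-type models make both moments, and hence h*_t, time-varying; that time variation is what generates the extra rebalancing and margin costs reported above.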
Abstract:
A method to solve a quasi-geostrophic two-layer model including the variation of static stability is presented. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast and the computation time is only 60–70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate the side-boundary conditions, the experiments have been performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that physical inconsistencies are rather serious: for the inconsistent models studied, the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator which conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement but probably has no practical value in short-period forecasts. It is also shown that the energy-conservative operator changes the wave speed in an erroneous way if the wave number or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for some different initial states and for two energy-consistent models, one with constant and one with variable static stability. According to linear theory, the waves adjust rather rapidly so that the temperature wave lags behind the pressure wave independently of the initial configuration. Thus, both models give rise to a baroclinic development even if the initial state is quasi-barotropic. The effect of the variation of static stability is very small; qualitative differences in the development are observed only during the first 12 hours. For an amplifying wave there is a stabilization over the troughs and a destabilization over the ridges.
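For context, a standard form of the two-layer quasi-geostrophic potential-vorticity equations with constant static stability (the model here additionally lets the stability parameter vary and corrects the wind iteratively for its divergent part):

```latex
\frac{\partial q_i}{\partial t} + J(\psi_i, q_i) = 0, \qquad
q_i = \nabla^{2}\psi_i + \beta y + (-1)^{i} F\,(\psi_1 - \psi_2), \quad i = 1, 2,
```

where the ψ_i are the layer streamfunctions, J is the Jacobian whose difference approximation is examined in section 6, and F is a coupling parameter inversely proportional to the static stability.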
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
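A minimal sketch of the kind of genetic-algorithm minimization described: the cost function here is a toy two-parameter misfit standing in for the GWD cost function, and every name, bound and tuning constant is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(params, z, observed_gwd):
    """Toy least-squares cost: misfit between an 'observed' drag profile
    and a hypothetical two-parameter drag law (stands in for a real
    GWD parametrisation scheme)."""
    amplitude, scale = params
    modelled = amplitude * np.exp(z / scale)
    return np.sum((modelled - observed_gwd) ** 2)

def genetic_minimise(cost, bounds, pop_size=60, generations=300, sd=0.05):
    """Minimise `cost` over the box `bounds` with a basic genetic algorithm:
    tournament selection, uniform crossover, Gaussian mutation."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([cost(p) for p in pop])
        # Tournament selection: each parent is the better of a random pair.
        pairs = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[pairs[:, 0]] < fit[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = pop[winners]
        # Uniform crossover between consecutive parents.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the physically realistic box;
        # the clipping mirrors the constraint that breaks the degeneracy.
        children += rng.normal(0.0, sd * (hi - lo), size=children.shape)
        pop = np.clip(children, lo, hi)
    fit = np.array([cost(p) for p in pop])
    return pop[np.argmin(fit)]

# Toy usage: recover (amplitude, scale) from a synthetic 'observed' profile.
z = np.linspace(0.0, 5.0, 40)          # nondimensional height
observed = 2.0 * np.exp(z / 1.5)
best = genetic_minimise(lambda p: misfit(p, z, observed),
                        bounds=[(0.1, 10.0), (0.5, 5.0)])
print(best)                             # should approach (2.0, 1.5)
```

Tournament selection plus mutation keeps the search robust to the multiple minima and ill-conditioning noted above, while the clipping step mirrors the constraint of one parameter to a physically realistic range.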
Abstract:
We consider the Dirichlet boundary-value problem for the Helmholtz equation in a non-locally perturbed half-plane. This problem models time-harmonic electromagnetic scattering by a one-dimensional, infinite, rough, perfectly conducting surface; the same problem arises in acoustic scattering by a sound-soft surface. Chandler-Wilde & Zhang have suggested a radiation condition for this problem, a generalization of the Rayleigh expansion condition for diffraction gratings, and uniqueness of solution has been established. Recently, an integral equation formulation of the problem has also been proposed and, in the special case when the whole boundary is both Lyapunov and a small perturbation of a flat boundary, the unique solvability of this integral equation has been shown by Chandler-Wilde & Ross by operator perturbation arguments. In this paper we study the general case, with no limit on surface amplitudes or slopes, and show that the same integral equation has exactly one solution in the space of bounded and continuous functions for all wavenumbers. As an important corollary we prove that, for a variety of incident fields including the incident plane wave, the Dirichlet boundary-value problem for the scattered field has a unique solution.
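For orientation, the boundary-value problem has the classical sound-soft (Dirichlet) form; the radiation condition of Chandler-Wilde & Zhang is the additional ingredient that makes it well posed:

```latex
\Delta u + k^{2} u = 0 \ \text{in } D, \qquad u = 0 \ \text{on } \partial D,
```

with D the perturbed half-plane above the rough surface ∂D, k > 0 the wavenumber, and u the total field; the scattered part additionally satisfies the upward-propagating radiation condition.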
Abstract:
Middle-atmosphere models commonly employ a sponge layer in the upper portion of their domain. It is shown that the relaxational nature of the sponge allows it to couple to the dynamics at lower levels in an artificial manner. In particular, the long-term zonally symmetric response to an imposed extratropical local force or diabatic heating is shown to induce a drag force in the sponge that modifies the response expected from the “downward control” arguments of Haynes et al. [1991]. In the case of an imposed local force, the sponge acts to divert a fraction of the mean meridional mass flux upward, which for realistic parameter values is approximately equal to exp(−Δz/H), where Δz is the distance between the forcing region and the sponge layer and H is the density scale height. This sponge-induced upper cell causes temperature changes that, just below the sponge layer, are of comparable magnitude to those just below the forcing region. In the case of an imposed local diabatic heating, the sponge induces a meridional circulation extending through the entire depth of the atmosphere. This circulation causes temperature changes that, just below the sponge layer, are of opposite sign and comparable in magnitude to those at the heating region. In both cases, the sponge-induced temperature changes are essentially independent of the height of the imposed force or diabatic heating, provided the latter is located outside the sponge, but decrease exponentially as one moves down from the sponge. Thus the effect of the sponge can be made arbitrarily small at a given altitude by placing the sponge sufficiently high; e.g., its effect on temperatures two scale heights below is roughly at the 10% level, provided the imposed force or diabatic heating is located outside the sponge. When, however, an imposed force is applied within the sponge layer (a highly plausible situation for parameterized mesospheric gravity-wave drag), its effect is almost entirely nullified by the sponge-layer feedback and its expected impact on temperatures below largely fails to materialize. Simulations using a middle-atmosphere general circulation model are described, which demonstrate that this sponge-layer feedback can be a significant effect in parameter regimes of physical interest. Zonally symmetric (two-dimensional) middle-atmosphere models commonly employ a Rayleigh drag throughout the model domain. It is shown that the long-term zonally symmetric response to an imposed extratropical local force or diabatic heating, in this case, is noticeably modified from that expected from downward control, even for a very weak drag coefficient.
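The quoted 10% figure follows directly from the exponential factor: placing the sponge two density scale heights above the level of interest gives

```latex
\exp(-\Delta z / H)\Big|_{\Delta z = 2H} = e^{-2} \approx 0.14,
```

i.e., roughly a 10% residual effect, shrinking by a further factor of e ≈ 2.7 for each additional scale height of separation.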
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last millennium simulations are used to assess historical model carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
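For reference (conventional definitions, not specific to this study): the equilibrium climate sensitivity diagnosed from such 2× CO2 runs is the steady-state warming once the top-of-atmosphere radiative imbalance has closed,

```latex
\mathrm{ECS} = \frac{F_{2\times}}{\lambda},
```

where F_{2×} is the radiative forcing from a CO2 doubling and λ the net climate feedback parameter; the transient climate sensitivity is instead the warming realized at the time of doubling under a gradual CO2 increase, and is smaller because of ocean heat uptake.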
Abstract:
Activities like the Coupled Model Intercomparison Project (CMIP) have revolutionized climate modelling in terms of our ability to compare models and to process information about climate projections and their uncertainties. The evaluation of models against observations is now considered a key component of multi-model studies. While there are a number of outstanding scientific issues surrounding model evaluation, notably the open question of how to link model performance to future projections, here we highlight a specific but growing problem in model evaluation: that of uncertainties in the observational data that are used to evaluate the models. We illustrate the problem using an example from studies of the South Asian Monsoon, but we believe it is a generic one that arises in many different areas of climate model evaluation and requires attention from the community.
Abstract:
In the rodent forebrain, GABAergic neurons are generated from progenitor cells that express the transcription factors Dlx1 and Dlx2. The Rap1 guanine nucleotide exchange factor MR-GEF is turned on in many of these developing GABAergic neurons. Expression of both Dlx1/2 and MR-GEF is retained in the adult mouse and human forebrain; in humans, decreased Dlx1 expression has been associated with psychosis. Using in situ hybridization studies, we show that MR-GEF expression is significantly down-regulated in the forebrain of Dlx1/2 double mutant mice, suggesting that MR-GEF and Dlx1/2 form part of a common signalling pathway during GABAergic neuronal development. We therefore compared MR-GEF expression by in situ hybridization between individuals with major psychiatric disorders (schizophrenia, bipolar disorder, major depression) and control individuals. We observed a significant positive correlation between layers II and IV of the dorsolateral prefrontal cortex (DLPFC) in the percentage of MR-GEF-expressing neurons in individuals with bipolar disorder, but not in individuals with schizophrenia or major depressive disorder, or in controls. Since MR-GEF encodes a Rap1 GEF able to activate G-protein signalling, we suggest that changes in MR-GEF expression could potentially influence neurotransmission.
Abstract:
HD (Huntington's disease) is a late-onset heritable neurodegenerative disorder that is characterized by neuronal dysfunction and death, particularly in the cerebral cortex and medium spiny neurons of the striatum. This is followed by progressive chorea, dementia and emotional dysfunction, eventually resulting in death. HD is caused by an expanded CAG repeat in the first exon of the HD gene that results in an abnormally elongated polyQ (polyglutamine) tract in its protein product, Htt (Huntingtin). Wild-type Htt is largely cytoplasmic; however, in HD, proteolytic N-terminal fragments of Htt form insoluble deposits in both the cytoplasm and nucleus, prompting the idea that mutHtt (mutant Htt) causes transcriptional dysfunction. While a number of specific transcription factors and co-factors have been proposed as mediators of mutHtt toxicity, the causal relationship between these Htt/transcription factor interactions and HD pathology remains unknown. Previous work has highlighted REST [RE1 (repressor element 1)-silencing transcription factor] as one such transcription factor. REST is a master regulator of neuronal genes, repressing their expression. Many of its direct target genes are known or suspected to have a role in HD pathogenesis, including BDNF (brain-derived neurotrophic factor). Recent evidence has also shown that REST regulates transcription of regulatory miRNAs (microRNAs), many of which are known to regulate neuronal gene expression and are dysregulated in HD. Thus, repression of miRNAs constitutes a second, indirect mechanism by which REST can alter the neuronal transcriptome in HD. We describe the evidence that disruption of the REST regulon, brought about by a loss of interaction between REST and mutHtt, may be a key contributory factor in the widespread dysregulation of gene expression in HD.
Abstract:
We have performed a screen combining subtractive hybridization with PCR to isolate genes that are regulated when neuroepithelial (NE) cells differentiate into neurons. From this screen, we have isolated a number of known genes that have not previously been associated with neurogenesis, together with several novel genes. Here we report that one of these genes, encoding a guanine nucleotide exchange factor (GEF), is regulated during the differentiation of distinct neuronal populations. We have cloned both rat and mouse GEF genes and shown that they are orthologs of the human gene, MR-GEF, which encodes a GEF that specifically activates the small GTPase, Rap1. We have therefore named the rat gene rat mr-gef (rmr-gef) and the mouse gene mouse mr-gef (mmr-gef). Here, we will collectively refer to these two rodent genes as mr-gef. Expression studies show that mr-gef is expressed by young neurons of the developing rodent CNS but not by progenitor cells in the ventricular zone (VZ). The expression pattern of mr-gef during early telencephalic neurogenesis is strikingly similar to that of GABA and the LIM homeobox gene Lhx6, a transcription factor expressed by GABAergic interneurons generated in the ventral telencephalon, some of which migrate into the cortex during development. These observations suggest that mr-gef encodes a protein that is part of a signaling pathway involved in telencephalic neurogenesis, particularly in the development of GABAergic interneurons.
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
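A minimal sketch of the two quantile-construction methods being compared, with placeholder volatility forecasts standing in for the realized-volatility models (names and data here are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def quantile_forecasts(returns, sigma_hat, sigma_next, alpha=0.01):
    """One-step-ahead return quantile (Value-at-Risk) two ways.

    returns    : past daily returns
    sigma_hat  : volatility forecasts aligned with `returns`
    sigma_next : volatility forecast for the next day
    """
    # (a) Distributional assumption: standard-normal quantile
    #     scaled by the volatility forecast.
    q_normal = sigma_next * norm.ppf(alpha)
    # (b) Empirical method: quantile of predicted standardized returns.
    standardized = returns / sigma_hat
    q_empirical = sigma_next * np.quantile(standardized, alpha)
    return q_normal, q_empirical

# Toy usage with simulated heavy-tailed data.
rng = np.random.default_rng(1)
sigma = 0.01 * np.exp(0.1 * rng.standard_normal(1000))
r = sigma * rng.standard_t(df=5, size=1000)
print(quantile_forecasts(r, sigma, sigma_next=0.012))
```

The rolling and recursive schemes mentioned above differ only in which window of past standardized returns is fed into the empirical-quantile step.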
Abstract:
The mixing of floes of different thickness caused by repeated deformation of the ice cover is modeled as diffusion, and the mass balance equation for sea ice accounting for mass diffusion is developed. The effect of deformational diffusion on the ice thickness balance is shown to reach 1% of the divergence effect, which describes ridging and lead formation. This means that with the same accuracy the mass balance equation can be written in terms of mean velocity rather than mean mass-weighted velocity, which one should correctly use for a multicomponent fluid such as sea ice with components identified by floe thickness. Mixing (diffusion) of sea ice also occurs because of turbulent variations in wind and ocean drags that are unresolved in models. Estimates of the importance of turbulent mass diffusion on the dynamic redistribution of ice thickness are determined using empirical data for the turbulent diffusivity. For long-time-scale prediction (≫5 days), where unresolved atmospheric motion may have a length scale on the order of the Arctic basin and the time scale is larger than the synoptic time scale of atmospheric events, turbulent mass diffusion can exceed 10% of the divergence effect. However, for short-time-scale prediction, for example, 5 days, the unresolved scales are on the order of 100 km, and turbulent diffusion is about 0.1% of the divergence effect. Because inertial effects are small in the dynamics of the sea ice pack, diffusive momentum transfer can be disregarded.
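Schematically, the mass balance with diffusion discussed here has the generic advection–diffusion form (a generic statement, not the paper's exact derivation, which distinguishes mean from mass-weighted velocities):

```latex
\frac{\partial m}{\partial t} + \nabla \cdot (m\,\mathbf{u}) = \nabla \cdot (D\,\nabla m) + \Psi,
```

where m is the ice mass per unit area, u the mean ice velocity, D the deformational or turbulent diffusivity, and Ψ the thermodynamic source; the divergence effect against which the diffusive terms are measured is the m∇·u contribution on the left.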
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
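For concreteness, a sketch of how a hedge ratio can be built from exponentially weighted moving-average moments, in the spirit of the best-performing model here (the recursion and λ value are standard RiskMetrics-style choices, not necessarily the article's exact settings):

```python
import numpy as np

def ewma_hedge_ratio(spot_returns, futures_returns, lam=0.94):
    """Time-varying minimum-variance hedge ratio from EWMA moments.

    h_t = cov_t(spot, futures) / var_t(futures), with both moments
    updated by the recursion m_t = lam * m_{t-1} + (1 - lam) * x_t.
    """
    cov = spot_returns[0] * futures_returns[0]
    var = futures_returns[0] ** 2
    ratios = []
    for s, f in zip(spot_returns, futures_returns):
        cov = lam * cov + (1 - lam) * s * f
        var = lam * var + (1 - lam) * f * f
        ratios.append(cov / var)
    return np.array(ratios)

# Toy usage with correlated simulated returns.
rng = np.random.default_rng(2)
f = 0.01 * rng.standard_normal(500)
s = 0.9 * f + 0.004 * rng.standard_normal(500)
print(ewma_hedge_ratio(s, f)[-5:])   # should hover near 0.9
```

Unlike a GARCH specification, the recursion involves no estimation beyond the choice of λ.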
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide superior descriptions of the data compared with models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
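The two specifications differ in what triggers a regime change. In a two-state Markov switching model the latent state follows a Markov chain, e.g.

```latex
y_t = \mu_{s_t} + \sigma_{s_t}\,\varepsilon_t, \qquad
\Pr(s_t = j \mid s_{t-1} = i) = p_{ij}, \quad s_t \in \{1, 2\},
```

whereas a threshold autoregressive model switches deterministically whenever an observed lagged variable crosses an estimated threshold.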
Abstract:
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross-bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non-linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models, respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high-frequency exchange rate returns, and their out-of-sample forecasting performance is compared to that of other time series models.
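For reference, the statistics underlying the proposed forecasting equations are third-order moments of the form

```latex
b_x(r, s) = \mathrm{E}[\,x_t\, x_{t+r}\, x_{t+s}\,], \qquad
b_{xy}(r, s) = \mathrm{E}[\,x_t\, x_{t+r}\, y_{t+s}\,],
```

so the modified (vector) autoregressions simply add lagged cross-product terms such as x_{t−i} x_{t−j} to the regressor set, which is why ordinary least squares suffices for estimation.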