946 results for dynamic stochastic general equilibrium models


Relevance:

40.00%

Publisher:

Abstract:

In this paper we study the dynamic hedging problem under three different utility specifications: stochastic differential utility, terminal wealth utility, and a particular utility transformation that we propose to connect the two previous approaches. In all cases, we assume Markovian prices. Stochastic differential utility (SDU) affects the pure hedging demand ambiguously, but decreases the pure speculative demand, because risk aversion increases. We also show that the consumption decision is, in some sense, independent of the hedging decision. With terminal wealth utility, we derive a general and compact hedging formula, which nests as special cases all those studied in Duffie and Jackson (1990), and we show how to recover their formulas. With the third approach we find a compact hedging formula that contains the terminal wealth utility framework as a particular case, and we show that the pure hedging demand is not affected by this specification. In addition, with CRRA- and CARA-type utilities, risk aversion increases and, consequently, the pure speculative demand decreases. If futures prices are martingales, the transformation plays no role in determining the hedging allocation. We also derive the relevant Bellman equation for each case, using semigroup techniques.
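For orientation, a generic Merton-style version of the terminal-wealth problem (illustrative dynamics and notation, not the paper's exact formulation) already exhibits the speculative/hedging split the abstract refers to. With a futures price $F$ and wealth $W$ driven by

$$dW_t = \theta_t\,dF_t, \qquad dF_t = \mu_F\,dt + \sigma_F\,dB_t,$$

the value function $J(t,w,f)$ of $\max_\theta \mathbb{E}[U(W_T)]$ solves the Bellman (HJB) equation

$$0 = J_t + \mu_F J_f + \tfrac{1}{2}\sigma_F^2 J_{ff} + \sup_\theta\Big\{\theta\mu_F J_w + \tfrac{1}{2}\theta^2\sigma_F^2 J_{ww} + \theta\sigma_F^2 J_{wf}\Big\},$$

whose first-order condition splits the optimal position into a pure speculative and a pure hedging term:

$$\theta^* = -\frac{\mu_F}{\sigma_F^2}\,\frac{J_w}{J_{ww}} \;-\; \frac{J_{wf}}{J_{ww}}.$$

When $F$ is a martingale ($\mu_F = 0$) the speculative term drops out and only the pure hedge remains.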

Relevance:

40.00%

Publisher:

Abstract:

Asset allocation decisions and value-at-risk calculations rely strongly on volatility estimates. Volatility measures such as rolling window, EWMA, GARCH and stochastic volatility are used in practice. GARCH- and EWMA-type models, which incorporate the dynamic structure of volatility and are capable of forecasting the future behavior of risk, should perform better than constant or rolling-window volatility models. For the same asset, the model that is the 'best' according to some criterion can change from period to period. We use the reality check test to verify whether one model out-performs the others over a class of re-sampled time series. The test is based on re-sampling the data with the stationary bootstrap. For each re-sample we identify the 'best' model according to two criteria and analyze the distribution of the performance statistics. We compare constant volatility, EWMA and GARCH models using a quadratic utility function and a risk management measure as comparison criteria. No model consistently out-performs the benchmark.
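As a rough illustration of the machinery involved (not the authors' code, and not White's reality-check statistic itself), the sketch below implements an EWMA variance recursion and a Politis-Romano stationary bootstrap, then compares EWMA against a rolling window by forecast mean squared error across resamples. The simulated series, λ = 0.94, the average block length and the loss function are all assumed choices.

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style EWMA variance recursion (lambda is an assumed value)."""
    var = np.empty_like(returns)
    var[0] = returns[:20].var()          # seed with an initial sample variance
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def stationary_bootstrap(x, avg_block=20, rng=None):
    """One stationary-bootstrap resample (Politis-Romano): random starts,
    geometric block lengths with mean avg_block, wrapping around the series."""
    rng = rng or np.random.default_rng()
    n, out, p = len(x), [], 1.0 / avg_block
    while len(out) < n:
        start = rng.integers(n)
        length = rng.geometric(p)
        out.extend(x[(start + i) % n] for i in range(length))
    return np.asarray(out[:n])

# Toy comparison: mean squared error of one-step variance forecasts against
# squared returns, over bootstrap resamples of a placeholder return series.
rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01
losses = []
for _ in range(200):
    rb = stationary_bootstrap(r, rng=rng)
    ewma = ewma_vol(rb)
    roll = np.sqrt(np.concatenate([np.full(60, rb[:60].var()),
                                   [rb[t - 60:t].var() for t in range(60, len(rb))]]))
    losses.append(((ewma**2 - rb**2) ** 2).mean() - ((roll**2 - rb**2) ** 2).mean())
print("share of resamples where EWMA beats rolling window:",
      np.mean(np.array(losses) < 0))
```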

Relevance:

40.00%

Publisher:

Abstract:

My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from the observation of other people's actions. This study proposes tractable dynamic global games in which players privately and passively learn about their actions' true payoffs and are able to adjust early investment decisions to the arrival of new information. These games are used to investigate how liquidity shocks affect the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1) and the adequacy of a Tobin tax for reducing an economy's vulnerability to sudden stops (chapter 2). The dissertation then analyzes players' incentives to acquire costly information in a sequential decision setting (chapter 3).

In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are either forced to withdraw early investment or precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions such as a tax on short-term capital, so a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium: a Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock.

Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model of chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller.

Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and an information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the payoff distribution has large variance.
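As a static benchmark only (the canonical one-shot Morris-Shin global game, not the dissertation's dynamic model with liquidity shocks and reversible actions), the equilibrium cutoffs have a closed form; the sketch below computes them and the implied ex-ante success probability under assumed parameters (entry cost c, signal noise σ, normal prior).

```python
import numpy as np
from scipy.stats import norm

def global_game_cutoffs(c, sigma):
    """Threshold equilibrium of a canonical one-shot global game with a
    uniform improper prior: invest iff the private signal x = theta +
    sigma*eps exceeds x_star; investment succeeds iff theta >= theta_star."""
    theta_star = c                                  # fundamental cutoff
    x_star = c - sigma * norm.ppf(1.0 - c)          # signal cutoff
    return theta_star, x_star

def success_prob(c, sigma, mu, sigma_theta):
    """Ex-ante success probability when the fundamental theta ~ N(mu,
    sigma_theta^2); mu and sigma_theta are assumed inputs."""
    theta_star, _ = global_game_cutoffs(c, sigma)
    return 1.0 - norm.cdf((theta_star - mu) / sigma_theta)

# In this static benchmark, a higher effective entry cost shifts both
# cutoffs up and lowers the ex-ante success probability.
for c in (0.2, 0.3, 0.4):
    print(c, global_game_cutoffs(c, sigma=0.1),
          round(success_prob(c, 0.1, mu=0.5, sigma_theta=0.3), 3))
```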

Relevance:

40.00%

Publisher:

Abstract:

Power system stability analysis is approached taking explicit account of the dynamic performance of the generators' internal voltages and control devices. The proposed method is not a direct method in the usual sense, since the conclusion for stability or instability is not based exclusively on energy function considerations, but it is automatic, since the conclusion is reached without analyst intervention. In contrast with the well-known direct methods, the stability test accounts for the nonconservative nature of a system with control devices such as the automatic voltage regulator (AVR) and automatic generation control (AGC). An energy function is derived for the system with a fourth-order machine model, AVR and AGC, and it is used to start the analysis procedure and to point out criticalities. The conclusive analysis itself is carried out with a method based on the definition of a region surrounding the equilibrium point in which the system net torque is equilibrium-restorative. This region is named the positive synchronization region (PSR). Since the definition of the PSR boundaries does not depend on modelling approximations, the PSR test leads to reliable results. © 2008 Elsevier Ltd. All rights reserved.
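A classical single-machine-infinite-bus toy (second-order swing dynamics, no AVR/AGC, all parameter values assumed) illustrates the two ingredients the method combines: an energy function and a region around the equilibrium where the net torque is restorative.

```python
import numpy as np

# Classical single-machine-infinite-bus (SMIB) benchmark -- a much simpler
# setting than the paper's fourth-order model with AVR/AGC.  All parameter
# values below are illustrative assumptions.
M, D = 0.1, 0.05                 # inertia and damping (pu)
Pm, Pmax = 0.8, 1.2              # mechanical input and peak electrical power (pu)
delta_s = np.arcsin(Pm / Pmax)   # stable equilibrium angle

def energy(delta, omega):
    """Classical energy (Lyapunov) function for the SMIB system: kinetic
    term plus the potential well measured from the stable equilibrium."""
    return (0.5 * M * omega**2
            - Pm * (delta - delta_s)
            - Pmax * (np.cos(delta) - np.cos(delta_s)))

def net_torque(delta):
    """Accelerating power Pm - Pe(delta); it is 'equilibrium restorative'
    wherever its sign opposes the angle deviation delta - delta_s."""
    return Pm - Pmax * np.sin(delta)

# Crude analogue of a positive-synchronization-type region check: scan
# angles around delta_s and keep those where the torque restores.
grid = np.linspace(delta_s - 1.5, delta_s + 1.5, 601)
restorative = grid[np.sign(net_torque(grid)) == -np.sign(grid - delta_s)]
print(f"restorative band: [{restorative.min():.3f}, {restorative.max():.3f}] rad")
```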

Relevance:

40.00%

Publisher:

Abstract:

We propose general three-dimensional potentials in rotational and cylindrical parabolic coordinates which are generated by direct products of the SO(2, 1) dynamical group. Then we construct their Green functions algebraically and find their spectra. Particular cases of these potentials which appear in the literature are also briefly discussed.
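For reference, the so(2,1) ≅ su(1,1) structure underlying such algebraic constructions, in one standard convention (assumed here, not quoted from the paper):

$$[K_1,K_2] = -iK_3,\qquad [K_2,K_3] = iK_1,\qquad [K_3,K_1] = iK_2,$$

$$K_\pm = K_1 \pm iK_2,\qquad [K_3,K_\pm] = \pm K_\pm,\qquad [K_-,K_+] = 2K_3,$$

with Casimir $C = K_3^2 - K_1^2 - K_2^2$. On a discrete-series representation $D_k^+$, $C = k(k-1)\,\mathbb{1}$ and $K_3$ has spectrum $\{k, k+1, k+2, \dots\}$, which is how bound-state spectra are read off once a Hamiltonian is mapped to a function of $K_3$.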

Relevance:

40.00%

Publisher:

Abstract:

A self-consistent equilibrium calculation, valid for tokamaks of arbitrary aspect ratio, is obtained through a direct variational technique that reduces the equilibrium solution, generally obtained from the 2D Grad-Shafranov equation, to a 1D problem in the radial flux coordinate ρ. The plasma current profile is assumed to have contributions from the diamagnetic, Pfirsch-Schlüter, and neoclassical ohmic and bootstrap currents. An iterative procedure is run in our code until the flux-surface-averaged toroidal current density ⟨J_T⟩ converges to within a specified tolerance for a given pressure profile and prescribed boundary conditions. The convergence criterion is applied between the ⟨J_T⟩ profile used to compute the equilibrium through the variational procedure and the one that results from the equilibrium, given by the sum of all current components. The ohmic contribution is calculated from the neoclassical conductivity and from the self-consistently determined loop voltage so as to give the prescribed value of the total plasma current. The bootstrap current is estimated through the full matrix Hirshman-Sigmar model with the viscosity coefficients proposed by Shaing, which are valid in all plasma collisionality regimes and at arbitrary aspect ratios. The results of the self-consistent calculation are presented for the low-aspect-ratio tokamak Experimento Tokamak Esférico. A comparison among different models for the bootstrap current estimate is also performed, and their possible limitations for the self-consistent calculation are analysed.
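The iteration itself has a simple fixed-point structure; the sketch below reproduces that loop with toy stand-ins for the current components (every profile and coefficient is an illustrative placeholder, not a physics model).

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 101)         # radial flux coordinate
p = (1.0 - rho**2) ** 2                   # assumed pressure profile (a.u.)

def current_components(jt):
    """Toy stand-ins for the diamagnetic, Pfirsch-Schlueter, ohmic and
    bootstrap contributions; each is a placeholder, not a physics model."""
    ohmic = 0.6 * (1.0 - rho**2)                    # loop-voltage-driven piece
    bootstrap = 0.3 * np.abs(np.gradient(p, rho))   # ~ pressure-gradient driven
    diamagnetic = 0.05 * p
    pfirsch_schlueter = 0.05 * jt.mean() * np.ones_like(rho)
    return ohmic + bootstrap + diamagnetic + pfirsch_schlueter

jt = np.ones_like(rho)                    # initial guess for <J_T>
for it in range(100):
    jt_new = current_components(jt)       # 'equilibrium' rebuild, toy version
    if np.max(np.abs(jt_new - jt)) < 1e-8 * np.max(np.abs(jt_new)):
        break
    jt = 0.5 * jt + 0.5 * jt_new          # under-relaxed fixed-point update
print(f"converged after {it} iterations")
```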

Relevance:

40.00%

Publisher:

Abstract:

Power-law distributions, i.e. Lévy flights, have been observed in various economic, biological and physical systems in the high-frequency regime. These distributions can be successfully explained via the gradually truncated Lévy flight (GTLF). In general, these systems converge to a Gaussian distribution in the low-frequency regime. In the present work, we develop a model for the physical basis of the cut-off length in the GTLF and its variation with respect to the time interval between successive observations. We observe that the GTLF automatically approaches a Gaussian distribution in the low-frequency regime. We applied the present method to analyze time series for some physical and financial systems. The agreement between the experimental results and the theoretical curves is excellent. The present method can be applied to analyze time series in a variety of fields, which in turn can provide a basis for the development of further microscopic models for the system. © 2000 Elsevier Science B.V. All rights reserved.
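A minimal numerical illustration of the mechanism (the stability index α, cut-off and decay parameters are assumptions, not the authors' fit): draw Lévy-stable steps, damp the tail gradually beyond the cut-off, and watch the excess kurtosis of aggregated sums decay toward the Gaussian value of zero.

```python
import numpy as np
from scipy.stats import levy_stable, kurtosis

rng = np.random.default_rng(1)

def gtlf_sample(size, alpha=1.6, cut=5.0, decay=2.0):
    """Draw increments from a gradually truncated Levy flight: Levy-stable
    steps whose tail beyond `cut` is damped by an exponential acceptance
    factor exp(-(|x| - cut) / decay).  Parameter values are illustrative."""
    out = np.empty(0)
    while out.size < size:
        x = levy_stable.rvs(alpha, 0.0, size=2 * size, random_state=rng)
        keep = (np.abs(x) <= cut) | (rng.random(x.size)
                                     < np.exp(-(np.abs(x) - cut) / decay))
        out = np.concatenate([out, x[keep]])
    return out[:size]

# Aggregating increments over longer 'time intervals' drives the sum toward
# a Gaussian: the excess kurtosis shrinks as n grows.
steps = gtlf_sample(200_000)
for n in (1, 10, 100):
    sums = steps[: (steps.size // n) * n].reshape(-1, n).sum(axis=1)
    print(f"n = {n:3d}  excess kurtosis = {kurtosis(sums):6.2f}")
```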

Relevance:

40.00%

Publisher:

Abstract:

The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the positioning. These errors are mainly related to the GPS satellite orbits, multipath and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalised least squares (PLS) technique. In this method, the errors are modeled as functions varying smoothly in time. Incorporating these error functions amounts to changing the stochastic model, and the results obtained are similar to those achieved by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares (CLS) method. In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. In these tests it was also possible to verify a considerable improvement in ambiguity resolution using PLS relative to CLS, with a reduced data collection interval. © Springer-Verlag Berlin Heidelberg 2007.
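The sketch below shows penalized least squares in its generic semiparametric form, with a second-difference smoothness penalty on the slowly varying error term; the design matrix, penalty weight λ and toy data are assumptions, not the paper's GPS observation model.

```python
import numpy as np

def penalized_ls(A, y, lam):
    """Penalized least squares: estimate parameters x and a smooth nuisance
    signal s (one value per epoch) from y = A x + s + noise, penalizing the
    second differences of s.  `lam` trades fit against smoothness."""
    n, p = A.shape
    D = np.diff(np.eye(n), n=2, axis=0)           # second-difference operator
    # Stacked system: [A  I; 0  sqrt(lam) D] [x; s] ~ [y; 0]
    top = np.hstack([A, np.eye(n)])
    bot = np.hstack([np.zeros((n - 2, p)), np.sqrt(lam) * D])
    rhs = np.concatenate([y, np.zeros(n - 2)])
    sol, *_ = np.linalg.lstsq(np.vstack([top, bot]), rhs, rcond=None)
    return sol[:p], sol[p:]                        # parameters, smooth error

# Toy usage: recover a parameter on a fast-varying regressor in the presence
# of a smooth 'multipath-like' systematic error.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 120)
A = np.cos(2 * np.pi * 20 * t)[:, None]            # fast-varying regressor (toy)
s_true = 1.7 + 0.05 * np.sin(2 * np.pi * 3 * t)    # smooth systematic error
y = 0.3 * A[:, 0] + s_true + 0.01 * rng.standard_normal(t.size)
x_hat, s_hat = penalized_ls(A, y, lam=10.0)
print("estimated parameter:", x_hat[0])            # should be near 0.3
```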

Relevance:

40.00%

Publisher:

Abstract:

The medium-term hydropower scheduling (MTHS) problem involves determining, for each time stage of the planning period, the amount of generation at each hydro plant that will maximize the expected future benefits throughout the planning period while respecting plant operational constraints. Moreover, it is important to emphasize that this decision-making relies mainly on advance knowledge of inflows. To forecast the inflows of a given basin, some intelligent computational approaches can be used. In this paper we consider Dynamic Programming (DP) with the inflows fixed at their average values, thus turning the problem into a deterministic one whose solution can be obtained by deterministic DP (DDP). The performance of the DDP technique on the MTHS problem was assessed by simulation using ensemble prediction models. Features and sensitivities of these models are discussed. © 2012 IEEE.
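A minimal backward-recursion DDP for a single reservoir, with inflows fixed at their averages, illustrates the technique; the storage/release grids and the concave benefit proxy are assumed, and spillage is ignored.

```python
import numpy as np

# Minimal deterministic DP for one reservoir: states are discretized storage
# levels, decisions are turbined outflows, inflows are fixed at their monthly
# averages.  The benefit function and all bounds are illustrative assumptions.
T = 12                                    # monthly stages
storage = np.linspace(0.0, 100.0, 51)     # storage grid (hm^3)
release = np.linspace(0.0, 40.0, 41)      # turbined outflow grid (hm^3/month)
inflow = 20 + 10 * np.sin(2 * np.pi * np.arange(T) / T)  # average inflows

def benefit(q):
    return np.sqrt(q)                     # concave generation proxy (toy)

V = np.zeros(storage.size)                # terminal value
policy = np.zeros((T, storage.size))
for t in reversed(range(T)):
    Vn = np.empty_like(V)
    for i, s in enumerate(storage):
        s_next = s + inflow[t] - release             # mass balance per decision
        ok = (s_next >= storage[0]) & (s_next <= storage[-1])
        vals = np.where(ok,
                        benefit(release) + np.interp(s_next, storage, V),
                        -np.inf)
        j = int(np.argmax(vals))
        Vn[i], policy[t, i] = vals[j], release[j]
    V = Vn
print("optimal first-stage release at half-full reservoir:",
      policy[0, storage.size // 2])
```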

Relevance:

40.00%

Publisher:

Abstract:

This paper analyses three aspects of the share market operated by the Lima Stock Exchange: (i) the short-term relationship between the pricing, direction and volume of order flows; (ii) the components of the spread and the equilibrium point of the limit order book per share; and (iii) the dynamics of pricing, order direction and trading volume resulting from shocks to lagged values of the same variables. The econometric results for intraday data from 2012 show that the short-run dynamics of the most and least liquid shares in the General Index of the Lima Stock Exchange are explained by the direction of order flow, whose price impact is temporary in both cases.
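As a toy version of the kind of regression involved (the simulated series and coefficient values are assumptions, not the paper's estimates), one can regress returns on contemporaneous and lagged signed order flow and read temporary price impact off the lagged coefficient.

```python
import numpy as np

# Toy price-impact regression in the spirit of order-flow analyses.
rng = np.random.default_rng(3)
n = 2000
flow = rng.choice([-1.0, 1.0], size=n) * rng.exponential(1.0, size=n)
ret = 0.8 * flow + np.r_[0.0, -0.5 * flow[:-1]] + rng.standard_normal(n)
#      ^ temporary impact: the lagged term partially reverses the move

X = np.column_stack([np.ones(n - 1), flow[1:], flow[:-1]])
beta, *_ = np.linalg.lstsq(X, ret[1:], rcond=None)
print("impact (contemporaneous, lagged):", beta[1].round(2), beta[2].round(2))
# A negative lagged coefficient of similar size to the contemporaneous one
# indicates that the price impact of order flow is largely temporary.
```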

Relevance:

40.00%

Publisher:

Abstract:

Land cover change in the Neotropics represents one of the major drivers of global environmental change. Several models have been proposed to explore future trajectories of land use and cover change, particularly in the Amazon. Despite the remarkable development of these tools, model results are still surrounded by uncertainties. None of the model projections available in the literature plausibly captured the overall trajectory of land use and cover change observed in the Amazon over the last decade. In this context, this study reviews and analyzes the general structure of the land use models most recently used to explore land use change in the Amazon, investigating methodological factors that could explain the divergence between observed and projected rates, with special attention to the land demand calculations. Based on this review, the primary limitations inherent to this type of model, and the extent to which they can affect the consistency of the projections, are also analyzed. Finally, we discuss potential drivers that could have influenced the recent dynamics of the land use system in the Amazon and produced the unforeseen land cover change trajectory observed in this period. In a complementary way, the primary challenges for the new generation of land use models for the Amazon are synthesized. © 2014 Elsevier Ltd. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

The objective of this work is to develop a non-stoichiometric equilibrium model to study parameter effects in the gasification of a feedstock in downdraft gasifiers. The non-stoichiometric equilibrium model is also known as the Gibbs free energy minimization method. Four models were developed and tested. First, a pure non-stoichiometric equilibrium model, called M1, was developed; then the methane content was constrained by correlating experimental data, generating model M2. A kinetic constraint that determines the apparent gasification rate was added in model M3, and finally the two aforementioned constraints were implemented together in model M4. Models M2 and M4 proved to be the most accurate of the four, with mean RMS (root mean square) error values of 1.25 each. The gasification of Brazilian Pinus elliottii in a downdraft gasifier with air as the gasification agent was also studied. The input parameters considered were: (a) equivalence ratio (0.28-0.35); (b) moisture content (5-20%); (c) gasification time (30-120 min); and (d) carbon conversion efficiency (80-100%). © 2014 Elsevier Ltd. All rights reserved.
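A minimal sketch of the pure non-stoichiometric (Gibbs-minimization) step, in the spirit of model M1: minimize total Gibbs energy over species mole numbers subject to elemental balances. The species list, temperature and standard-state values are rough placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1073.0                       # J/(mol K), K (assumed temperature)
species = ["CO", "CO2", "H2", "CH4", "H2O", "N2"]
g0 = np.array([-200e3, -396e3, 0.0, -20e3, -230e3, 0.0])  # placeholder mu0 (J/mol)
# Element matrix: rows C, H, O, N; columns follow `species`.
E = np.array([[1, 1, 0, 1, 0, 0],
              [0, 0, 2, 4, 2, 0],
              [1, 2, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 2]], dtype=float)
b = E @ np.array([0.5, 0.2, 0.5, 0.1, 0.4, 1.0])  # element totals from a toy feed

def gibbs(n):
    """Dimensionless total Gibbs energy of an ideal-gas mixture."""
    n = np.maximum(n, 1e-12)               # keep logarithms finite
    return np.sum(n * (g0 / (R * T) + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.full(6, 0.5),
               constraints=[{"type": "eq", "fun": lambda n: E @ n - b}],
               bounds=[(1e-12, None)] * 6, method="SLSQP")
for s, n in zip(species, res.x):
    print(f"{s:4s} {n:.4f} mol")
```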

Relevance:

40.00%

Publisher:

Abstract:

Many new viscoelastic materials have been developed recently to help reduce noise and vibration levels in mechanical structures for applications in the automotive and aeronautical industries. A viscoelastic layer treatment applied to a solid metal structure modifies two main properties, related to the mass distribution and the damping mechanism. The remaining property controlling the dynamics of a mechanical system, the stiffness, does not change much with the viscoelastic material. Modelling such a system is usually complex, because the viscoelastic material can exhibit nonlinear behavior, in contrast with the many tools available for linear dynamics. In this work, the dynamic behavior of a sandwich beam is modeled by the finite element method using different element types, and the results are compared with experiments carried out in the laboratory on various beams with different viscoelastic layer materials. The finite element model is then updated to help understand the damping effects at the various natural frequencies and the trade-off between attenuation and the mass added to the structure.
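A crude stand-in for such a model (an Euler-Bernoulli cantilever with a hysteretic loss factor replacing a true sandwich-beam formulation; all property values assumed) shows how a complex stiffness yields damped natural frequencies and modal loss factors.

```python
import numpy as np

# Euler-Bernoulli cantilever with a hysteretic loss factor eta standing in
# for the viscoelastic treatment.  Geometry and materials are illustrative.
E, rho, eta = 70e9, 2700.0, 0.05          # Young's modulus, density, loss factor
L, b, h, ne = 0.5, 0.03, 0.005, 20        # beam geometry and element count
Iz, area, le = b * h**3 / 12, b * h, L / ne

ke = (E * Iz / le**3) * np.array([[12, 6*le, -12, 6*le],
                                  [6*le, 4*le**2, -6*le, 2*le**2],
                                  [-12, -6*le, 12, 6*le],
                                  [6*le, 2*le**2, -6*le, 4*le**2]])
me = (rho * area * le / 420) * np.array([[156, 22*le, 54, -13*le],
                                         [22*le, 4*le**2, 13*le, -3*le**2],
                                         [54, 13*le, 156, -22*le],
                                         [-13*le, -3*le**2, 22*le, 4*le**2]])
ndof = 2 * (ne + 1)
K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
for e in range(ne):
    idx = slice(2 * e, 2 * e + 4)          # two dofs (w, theta) per node
    K[idx, idx] += ke
    M[idx, idx] += me
K, M = K[2:, 2:], M[2:, 2:]                # clamp the first node (w = theta = 0)

# Complex stiffness K(1 + i*eta): eigenvalues give natural frequencies and
# modal loss factors.
lam = np.sort_complex(np.linalg.eigvals(np.linalg.solve(M, K * (1 + 1j * eta))))
lam = lam[:4]
print("first natural frequencies (Hz):", np.round(np.sqrt(lam.real) / (2 * np.pi), 1))
print("modal loss factors:", np.round(lam.imag / lam.real, 3))
```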

Relevance:

40.00%

Publisher:

Abstract:

In accelerating dark energy models, estimates of the Hubble constant, H0, from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Ω_M), the curvature (Ω_K) and the equation of state parameter ω. In this article, using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical β model and obtained through the SZE/X-ray technique, we constrain H0 in the framework of a general ΛCDM model (arbitrary curvature) and of a flat XCDM model with a constant equation of state parameter ω = p_x/ρ_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter. Taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat ΛCDM model H0 = 74 (+5.0, -4.0) km s^-1 Mpc^-1 (1σ), whereas for a flat universe with constant equation of state parameter we find H0 = 72 (+5.5, -4.0) km s^-1 Mpc^-1 (1σ). By assuming that galaxy clusters are described by a spherical β model, these results change to H0 = 61 (+8.0, -7.0) and H0 = 59 (+9.0, -6.0) km s^-1 Mpc^-1 (1σ), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and with recent estimates based on the Wilkinson Microwave Anisotropy Probe, suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be quite a realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H0 estimates for this combination of data.
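Schematically (mock distances and errors in place of the SZE/X-ray sample; flat ΛCDM with an assumed Ω_M = 0.27), constraining H0 amounts to profiling a χ² between observed and predicted angular diameter distances.

```python
import numpy as np
from scipy.integrate import quad

def d_a(z, h0, om=0.27, ol=0.73):
    """Angular diameter distance (Mpc) in a flat LCDM cosmology."""
    c = 299792.458                                  # speed of light (km/s)
    integ, _ = quad(lambda x: 1.0 / np.sqrt(om * (1 + x)**3 + ol), 0.0, z)
    return c / h0 * integ / (1.0 + z)

# Mock cluster sample standing in for the SZE/X-ray distances (placeholders).
rng = np.random.default_rng(4)
z_obs = rng.uniform(0.1, 0.8, 25)
d_true = np.array([d_a(z, 72.0) for z in z_obs])
d_obs = d_true * (1 + 0.15 * rng.standard_normal(25))   # ~15% scatter
sig = 0.15 * d_obs

# Profile a chi-square over H0 on a grid.
grid = np.linspace(50, 100, 201)
chi2 = [np.sum((d_obs - [d_a(z, h) for z in z_obs])**2 / sig**2) for h in grid]
print(f"best-fit H0 = {grid[int(np.argmin(chi2))]:.1f} km/s/Mpc")
```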