91 results for STATIONARY SPACETIMES
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks, and then to compute a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. It is shown that, when those shocks are operative, a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space so that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
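The two-step logic can be sketched compactly. The following is a minimal scalar stand-in, not the paper's Matlab code: in the actual method the state vector is high-dimensional (it contains the discretized cross-sectional distribution), and `F`, `x_star`, and the placeholder law of motion are illustrative assumptions.

```python
# Minimal scalar sketch of the two-step method; F and the placeholder
# law of motion are illustrative, not the paper's model.
from scipy.optimize import fsolve

def F(x_next, x, eps):
    """Stacked equilibrium conditions F(x', x, eps) = 0. In the actual
    method x is high-dimensional (it includes the discretized
    cross-sectional distribution); here it is a scalar placeholder."""
    return x_next - 0.9 * x - eps

# Step 1: stationary solution without aggregate shocks (eps = 0, x' = x)
x_star = fsolve(lambda x: F(x, x, 0.0), 1.0)[0]

# Step 2: first-order perturbation in the aggregate shock (central differences)
h = 1e-6
A = (F(x_star + h, x_star, 0.0) - F(x_star - h, x_star, 0.0)) / (2 * h)  # dF/dx'
B = (F(x_star, x_star + h, 0.0) - F(x_star, x_star - h, 0.0)) / (2 * h)  # dF/dx
C = (F(x_star, x_star, h) - F(x_star, x_star, -h)) / (2 * h)             # dF/deps

# Implied linear law of motion around the stationary solution:
# x' - x_star = P (x - x_star) + Q eps, from A*P + B = 0 and A*Q + C = 0
P, Q = -B / A, -C / A
print(f"x* = {x_star:.3f}, P = {P:.3f}, Q = {Q:.3f}")
```

In the genuinely multivariate case with expectational equations, the last step is solved with a generalized Schur (QZ) decomposition rather than a scalar division.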
Abstract:
A number of existing studies have concluded that risk-sharing allocations supported by competitive, incomplete-markets equilibria are quantitatively close to first-best. Equilibrium asset prices in these models have been difficult to distinguish from those associated with a complete-markets model, the counterfactual features of which have been widely documented. This paper asks whether life-cycle considerations, in conjunction with persistent idiosyncratic shocks that become more volatile during aggregate downturns, can reconcile the quantitative properties of the competitive asset-pricing framework with those of observed asset returns. We begin by arguing that data from the Panel Study of Income Dynamics support the plausibility of such a shock process. Our estimates suggest a high degree of persistence as well as a substantial increase in idiosyncratic conditional volatility coincident with periods of low growth in U.S. GNP. When these factors are incorporated in a stationary overlapping-generations framework, the implications for the returns on risky assets are substantial. Plausible parameterizations of our economy are able to generate Sharpe ratios that match those observed in U.S. data. Our economy cannot, however, account for the level of variability of stock returns, owing in large part to the specification of its production technology.
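A hedged illustration of the kind of shock process described, not the paper's PSID estimates: a persistent AR(1) in log income whose innovation volatility rises in recessions. All parameter values below are made up for the sketch.

```python
# Simulate a persistent idiosyncratic income process with countercyclical
# volatility; rho, the sigmas, and the switching probabilities are
# illustrative assumptions, not estimates from the paper.
import numpy as np

rng = np.random.default_rng(0)
T, rho = 200, 0.95                              # high persistence
sigma = {"expansion": 0.10, "recession": 0.21}  # volatility rises in downturns
switch = {"expansion": 0.10, "recession": 0.25} # per-period switch probabilities

state, y = "expansion", np.zeros(T)             # two-state aggregate cycle
for t in range(1, T):
    if rng.random() < switch[state]:
        state = "recession" if state == "expansion" else "expansion"
    y[t] = rho * y[t - 1] + rng.normal(0.0, sigma[state])  # log income
```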
Abstract:
Although it is commonly accepted that most macroeconomic variables are nonstationary, it is often difficult to identify the source of the nonstationarity. In particular, it is well known that integrated processes and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) plus breaks by considering a more general class of models under the null hypothesis: nonstationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and simulations show that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
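As a sketch of the null-hypothesis class (not the proposed test itself), one can simulate a nonstationary fractionally integrated FI(d) process from the MA(infinity) expansion of (1-L)^(-d). The truncation at the sample length and the value d = 0.8 are illustrative choices.

```python
# Simulate an FI(d) process: x_t = sum_{k=0}^{t} psi_k eps_{t-k},
# with psi_0 = 1 and psi_k = psi_{k-1} * (k - 1 + d) / k.
import numpy as np

def simulate_fi(d, T, rng):
    psi = np.ones(T)
    for k in range(1, T):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(T)
    return np.array([psi[: t + 1] @ eps[t::-1] for t in range(T)])

rng = np.random.default_rng(1)
x = simulate_fi(d=0.8, T=500, rng=rng)  # d in (0.5, 1): nonstationary FI
```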
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments in the theory of prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
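A minimal sketch of such a randomized forecaster, with assumed details (the context orders, Laplace smoothing, and learning rate eta are illustrative, not the paper's exact construction): frequency-based experts of several memory lengths are mixed by an exponentially weighted average, and the binary prediction is randomized according to the mixture.

```python
# Randomized prediction of a binary sequence from context-based experts;
# details are illustrative assumptions, not the paper's construction.
import numpy as np

def predict_sequence(bits, orders=(0, 1, 2), eta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    counts = {k: {} for k in orders}       # context -> [#zeros, #ones]
    weights = np.ones(len(orders))
    mistakes = 0
    for t, b in enumerate(bits):
        probs = []
        for k in orders:                   # each expert's estimate of P(bit = 1)
            ctx = tuple(bits[max(0, t - k):t])
            c = counts[k].get(ctx, [1, 1])  # Laplace-smoothed counts
            probs.append(c[1] / (c[0] + c[1]))
        p = float(np.dot(weights, probs) / weights.sum())
        guess = int(rng.random() < p)      # randomized prediction
        mistakes += int(guess != b)
        # exponential weight update by each expert's squared loss
        weights *= np.exp(-eta * (np.array(probs) - b) ** 2)
        for k in orders:                   # update context statistics
            ctx = tuple(bits[max(0, t - k):t])
            counts[k].setdefault(ctx, [1, 1])[b] += 1
    return mistakes / len(bits)

# e.g. predict_sequence([0, 1] * 300) yields a mistake rate well below 1/2,
# since the order-1 expert learns the alternating pattern.
```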
Abstract:
The evolution of boundedly rational rules for playing normal form games is studied within stationary environments of stochastically changing games. Rules are viewed as algorithms prescribing strategies for the different normal form games that arise. It is shown that many of the folk results of evolutionary game theory, typically obtained with a fixed game and fixed strategies, carry over to the present case. The results are also related to recent experiments on rules and games.
Abstract:
The present paper revisits a property embedded in most dynamic macroeconomic models: the stationarity of hours worked. First, I argue that, contrary to what is often believed, there are many reasons why hours could be nonstationary in those models, while preserving the property of balanced growth. Second, I show that the postwar evidence for most industrialized economies is clearly at odds with the assumption of stationary hours per capita. Third, I examine the implications of that evidence for the role of technology as a source of economic fluctuations in the G7 countries.
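A hedged sketch of the kind of unit-root check involved, using a simulated series in place of the actual postwar hours data:

```python
# Augmented Dickey-Fuller test for stationarity of (log) hours per capita;
# the series here is a random-walk stand-in, not the G7 data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
hours = np.cumsum(rng.normal(0, 0.01, 240))  # nonstationary placeholder

stat, pvalue, *_ = adfuller(hours, regression="c")
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A large p-value means the unit-root null cannot be rejected,
# i.e. the evidence is at odds with stationary hours.
```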
Abstract:
We incorporate the process of enforcement learning by assuming that the agency's current marginal cost is a decreasing function of its past experience of detecting and convicting. The agency accumulates data and information (on criminals, on opportunities for crime), enhancing its ability to apprehend in the future at a lower marginal cost. We focus on the impact of enforcement learning on optimal stationary compliance rules. In particular, we show that the optimal stationary fine could be less than maximal and the optimal stationary probability of detection could be higher than otherwise.
Abstract:
In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles for potential energy surfaces (PESs) having pathological behaviors. In contrast to energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classically difficult case for density functional theory methods.
Abstract:
The 10 June 2000 event was the largest flash flood event to occur in the northeast of Spain in the late 20th century, both as regards its meteorological features and its considerable social impact. This paper focuses on analysis of the structures that produced the heavy rainfall, especially from the point of view of meteorological radar. Because this case is a good example of a Mediterranean flash flood event, a final objective of the paper is to describe the evolution of the rainfall structure clearly enough to be understood by an interdisciplinary audience, making it useful not only for improving conceptual meteorological models but also for application in downscaling models. The main precipitation structure was a Mesoscale Convective System (MCS) that crossed the region and developed as a consequence of the merging of two previous squall lines. The paper analyses the main meteorological features that led to the development and triggering of the heavy rainfall, with special emphasis on the features of this MCS, its life cycle, and its dynamics. To this end, 2-D and 3-D algorithms were applied to the imagery recorded over the complete life cycle of the structures, which lasted approximately 18 h. Mesoscale and synoptic information was also considered. Results show that it was an NS-MCS, quasi-stationary during its mature stage as a consequence of the formation of a convective train, the different displacement directions of the 2-D and 3-D structures (including the propagation of new cells), and the slow movement of the convergence line associated with the Mediterranean mesoscale low.
Abstract:
The aim of this paper is to analyse whether Spanish municipalities adjust in response to a budget shock and, if so, which elements of the budget carry out the adjustment. The methodology used to answer these questions is a vector error-correction mechanism (VECM), which we estimate with a panel of data on Spanish municipalities for the period 1988-2006. Our results confirm, first, that municipalities do adjust in the presence of a fiscal shock (that is, the deficit is stationary in the long run). Second, we find that when the shock affects revenues, the adjustment is borne mainly by the municipality through spending cuts, with transfers playing a very limited role in the adjustment process. By contrast, when the shock affects spending, the adjustment is shared in similar proportions between the municipality, which raises taxes, and higher levels of government, which increase transfers. These results suggest that the viability of local public finances is feasible under different institutional settings.
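A minimal sketch of the error-correction step, under stated assumptions: statsmodels' VECM handles a single multivariate series rather than a panel, and the data below are simulated placeholders (spending and revenue share a stochastic trend, so the deficit is stationary), not the municipal series.

```python
# Fit a VECM to simulated budget series; the panel dimension of the
# paper's estimation is not reproduced here.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
T = 60                                   # placeholder sample length
trend = np.cumsum(rng.normal(0, 1, T))   # shared stochastic trend
data = np.column_stack([
    trend + rng.normal(0, 0.3, T),       # spending
    trend + rng.normal(0, 0.3, T),       # revenue (cointegrated with spending)
    np.cumsum(rng.normal(0, 0.5, T)),    # transfers
])

res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.alpha)  # loading coefficients
```

The signs and relative sizes of the loadings in `res.alpha` are what identify which budget item bears the adjustment after a shock.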
Abstract:
Exact formulas for the effective eigenvalue characterizing the initial decay of intensity correlation functions are given in terms of stationary moments of the intensity. Spontaneous emission noise and nonwhite pump noise are considered. Our results are discussed in connection with earlier calculations, simulations, and experimental results for single-mode dye lasers, two-mode inhomogeneously broadened lasers, and two-mode dye ring lasers. The effective eigenvalue is seen to depend sensitively on noise characteristics and symmetry properties of the system. In particular, the effective eigenvalue associated with cross correlations of two-mode lasers is seen to vanish in the absence of pump noise as a consequence of detailed balance. In the presence of pump noise, the vanishing of this eigenvalue requires equal pump parameters for the two modes and statistical independence of spontaneous emission noise acting on each mode.
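For reference, the standard definition behind such formulas (not an expression quoted from this paper): the effective eigenvalue is the initial logarithmic decay rate of the stationary intensity correlation function,

```latex
\lambda_{\mathrm{eff}}
\;=\; -\left.\frac{d}{dt}\,\ln C(t)\right|_{t=0}
\;=\; -\,\frac{\dot C(0)}{C(0)},
\qquad
C(t) \;\equiv\; \langle \delta I(0)\,\delta I(t)\rangle_{\mathrm{st}},
```

and exact expressions follow whenever the initial slope of C(t) can be evaluated from stationary moments of the intensity.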
Abstract:
We study the problem of the advection of passive particles with inertia in a two-dimensional, synthetic, and stationary turbulent flow. The asymptotic analytical result and numerical simulations show the importance of inertial bias in collecting the particles preferentially in certain regions of the flow, depending on their density relative to that of the flow. We also study how these aggregates are affected when a simple chemical reaction mechanism is introduced through an Eulerian scheme. We find that inertia can be responsible for maintaining a stationary concentration pattern even under unfavorable reactive conditions, or for destroying it under favorable ones.
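A hedged sketch of the preferential-concentration mechanism in the heavy-particle (Stokes drag) limit only, using a steady cellular flow as a stand-in for the paper's synthetic turbulent flow; all parameters are illustrative.

```python
# Heavy inertial particles in a steady 2-D cellular flow:
# dv/dt = (u(x) - v)/tau, dx/dt = v. Particles drift out of vortex
# cores, illustrating inertial preferential concentration.
import numpy as np

def u(x, y):
    # incompressible cellular velocity field
    return np.cos(x) * np.sin(y), -np.sin(x) * np.cos(y)

tau, dt, steps = 0.5, 0.01, 2000        # Stokes time, time step, iterations
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 1000)
y = rng.uniform(0, 2 * np.pi, 1000)
vx = np.zeros_like(x)
vy = np.zeros_like(y)

for _ in range(steps):                   # explicit Euler integration
    ux, uy = u(x, y)
    vx += dt * (ux - vx) / tau
    vy += dt * (uy - vy) / tau
    x = (x + dt * vx) % (2 * np.pi)      # periodic domain
    y = (y + dt * vy) % (2 * np.pi)
# Final (x, y) cluster preferentially between the vortices.
```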
Abstract:
The propagation of an initially planar front is studied within the framework of the photosensitive Belousov-Zhabotinsky reaction modulated by a smooth spatial variation of the local front velocity in the direction perpendicular to front propagation. Under this modulation, the wave front develops several fingers corresponding to the local maxima of the modulation function. After a transient, the wave front achieves a stationary shape that does not necessarily coincide with the one externally imposed by the modulation. Theoretical predictions for the selection criteria of fingers and steady-state velocity are experimentally validated.
Abstract:
We report on an experimental study of long normal Saffman-Taylor fingers subject to periodic forcing. The sides of the finger develop a low amplitude, long wavelength instability. We discuss the finger response in stationary and nonstationary situations, as well as the dynamics towards the stationary states. The response frequency of the instability increases with forcing frequency at low forcing frequencies, while, remarkably, it becomes independent of forcing frequency at large forcing frequencies. This implies a process of wavelength selection. These observations are in good agreement with previous numerical results reported in [Ledesma-Aguilar et al., Phys. Rev. E 71, 016312 (2005)]. We also study the average value of the finger width, and its fluctuations, as a function of forcing frequency. The average finger width is always smaller than the width of the steady-state finger. Fluctuations have a nonmonotonic behavior with a maximum at a particular frequency.