955 results for Dynamic general equilibrium
Abstract:
This paper examines a dynamic game of exploitation of a common pool of some renewable asset by agents that sell the result of their exploitation on an oligopolistic market. A Markov Perfect Nash Equilibrium of the game is used to analyze the effects of a merger of a subset of the agents. We study the impact of the merger on the equilibrium production strategies, on the steady states, and on the profitability of the merger for its members. We show that there exists an interval of the asset's stock such that any merger is profitable if the stock at the time the merger is formed falls within that interval. That includes mergers that are known to be unprofitable in the corresponding static equilibrium framework.
Abstract:
Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, and poor convergence. An approach that can probably overcome many of these problems is to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element not only has nodal equilibrium of forces but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis, and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations into the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered.
Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster when compared with other existing finite elements. The behaviour of field consistent elements would definitely generate a great deal of interest amongst the users of the finite elements.
Abstract:
Recently, cumulative residual entropy (CRE) has been found to be a new measure of information that parallels Shannon’s entropy (see Rao et al. [Cumulative residual entropy: A new measure of information, IEEE Trans. Inform. Theory 50(6) (2004), pp. 1220–1228] and Asadi and Zohrevand [On the dynamic cumulative residual entropy, J. Stat. Plann. Inference 137 (2007), pp. 1931–1941]). Motivated by this finding, in this paper we introduce a generalized measure of it, namely cumulative residual Renyi’s entropy, and study its properties. We also examine it in relation to some applied problems such as weighted and equilibrium models. Finally, we extend this measure to the bivariate set-up and prove certain characterizing relationships to identify different bivariate lifetime models.
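The base quantity being generalized, CRE(X) = -∫₀^∞ S(x) log S(x) dx for survival function S, can be checked numerically in a few lines. The sketch below (the exponential example, rate, and grid are my own illustrative choices, not from the cited papers) recovers the known closed form CRE = 1/λ for an exponential distribution with rate λ:

```python
import numpy as np

def cre(survival, grid):
    """Cumulative residual entropy: -∫ S(x) log S(x) dx, via the trapezoid rule."""
    s = survival(grid)
    return np.trapz(-s * np.log(s), grid)

lam = 2.0
x = np.linspace(0.0, 50.0, 200001)          # S(x) is negligible beyond the grid
val = cre(lambda t: np.exp(-lam * t), x)
print(val)                                   # close to 1/lam = 0.5
```

The integrand for the exponential case reduces to λx e^{-λx}, whose integral over [0, ∞) is exactly 1/λ, so the numerical value serves as a sanity check on the implementation.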
Abstract:
The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is only driven by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from that of ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered to be the less challenging case, as policy-makers are supposed to fight against the persistence in the driving forces, especially to reduce the stickiness of inflation expectations by a credible monetary policy, in order to reestablish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence: Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank the model with the two-period alternating price-setting (for most parameter values) leads to more persistent inflation than the model with stochastic price duration. 
This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and turning points of inflation that occur too early, and it is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique of Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). While taking seriously the critique that Fuhrer and Moore’s model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model by a term for inequality aversion. This model extension is not only in line with experimental evidence but also results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In chapter 5, I present a model that allows one to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and are able to donate a share of their resources to the other individual. In period 2, the two individuals might join in a common production after having bargained over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot fully be substituted for one another.
Therefore, it might be rational for a well-endowed individual in period 1 to act in a seemingly ‘fair’ manner and donate its own resources to its poorer counterpart. This decision also depends on the individuals’ impatience, which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a ‘fair’ manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) ‘fair’ behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework might help to further integrate behavioral economics and macroeconomics.
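The distinction between intrinsic and inherited persistence discussed above can be caricatured with a deliberately minimal reduced-form simulation (a sketch with illustrative parameter values of my own, not any of the dissertation's structural models): inflation driven only by an AR(1) output gap inherits the gap's autocorrelation, while adding a lagged-inflation term raises it further.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(gamma_b, rho=0.8, kappa=0.1, T=20000):
    """pi_t = gamma_b*pi_{t-1} + kappa*x_t, with output gap x_t = rho*x_{t-1} + eps_t."""
    x = pi = 0.0
    series = np.empty(T)
    for t in range(T):
        x = rho * x + rng.normal()
        pi = gamma_b * pi + kappa * x
        series[t] = pi
    return series

def autocorr(z, lag=1):
    z = z - z.mean()
    return np.dot(z[:-lag], z[lag:]) / np.dot(z, z)

ac_extrinsic = autocorr(simulate(0.0))   # persistence only inherited from x
ac_intrinsic = autocorr(simulate(0.5))   # lagged inflation adds persistence
print(ac_extrinsic, ac_intrinsic)        # the second value is higher
```

With gamma_b = 0 the lag-1 autocorrelation of inflation is essentially rho; with gamma_b > 0 it exceeds rho, mirroring the intrinsic-persistence case in which disinflation is drawn out over several periods.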
Abstract:
Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
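A tabular Q-learning run on a toy two-state MDP (the MDP, exploration rate, and step-size schedule are entirely my own construction for illustration) shows the kind of convergence the paper's theorem guarantees, provided the step sizes satisfy the usual Robbins-Monro conditions of stochastic approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic MDP: states 0,1 and actions 0,1. Taking action a moves
# the agent to state a and yields reward 1 if a == 1, else 0.
gamma = 0.9
Q = np.zeros((2, 2))
counts = np.zeros((2, 2))
s = 0
for step in range(50000):
    # epsilon-greedy behaviour policy; Q-learning is off-policy, so any
    # policy visiting every (s, a) infinitely often suffices
    a = int(rng.integers(2)) if rng.random() < 0.3 else int(np.argmax(Q[s]))
    s_next = a
    r = 1.0 if a == 1 else 0.0
    counts[s, a] += 1
    alpha = counts[s, a] ** -0.6        # sum(alpha) = inf, sum(alpha^2) < inf
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Analytic fixed point: Q*(s,1) = 1/(1-gamma) = 10 and Q*(s,0) = gamma/(1-gamma) = 9
print(np.round(Q, 2))
```

Because rewards are deterministic and bounded, the iterates stay below the fixed point and approach it from below; a longer run tightens the agreement with the analytic Q* values.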
Abstract:
Introduction: The need for dynamic monitoring of the hemodynamic status of surgical patients has been recognized: monitoring that allows a rapid, less invasive, and reliable assessment for an accurate diagnosis and for evaluating the response to the measures taken. The plethysmography delta is a reliable, non-invasive, and dynamic tool that meets these characteristics and may also come to play a leading role in goal-directed fluid therapy. Methodology: Correlation study. Systematic evaluations of the plethysmography waveform and patient variables were performed from anesthetic induction until the start of the surgical procedure, and the correlation between plethysmography waveform variability, the plethysmography delta, and intraoperative fluid requirements was determined. Adult patients aged 18 to 80 years who met the inclusion criteria and were scheduled for surgery under general anesthesia at the Fundación Cardioinfantil Instituto de Cardiología were included, until the calculated sample of 31 patients was reached. Following the ethical principles of the Declaration of Helsinki and Colombian regulations, this study involved no intervention of any kind in the patients, which classifies it as low risk. Results: 80.6% of patients showed increased variability, with correlation between pulse-wave variability, the delta POP, and the amount of intraoperative fluids (0.245, 95% CI). A decrease in delta POP at T3 suggested fluid responsiveness, and correlation was found between vasopressor use, analgesia, and postoperative nausea and vomiting. Conclusion: There is a correlation between plethysmography waveform variability, the plethysmography delta, and intravenous fluid replacement in mechanically ventilated patients during general anesthesia. An association was also found between vasopressor use, analgesia, and postoperative nausea and vomiting.
Abstract:
Electric power and the electrical goods or assets (as technically defined) through which the service is provided enjoy a particular regulatory and normative regime, explained by the capital importance of this public service together with the institutional design introduced by the 1991 Constitution, which makes the subject especially complex, dynamic, and open to refinement. In that sense, the starting point is an initial understanding of all the assets that make up an electricity generation, transmission, and distribution network, in order to begin outlining their legal regime according to their position within the electricity supply chain. Once this overview is complete, the main foreseeable problems are addressed from a purely academic perspective, for example: the scope of the CREG's concepts and their normative value; the presumed non-attachability of assets devoted to the provision of public services held by organized communities; and the problem of private ownership of assets forming part of the electricity supply network, together with the regulatory solution to that conflict (it should not be forgotten that, according to the CREG, owners of general-use assets must be providers of residential public services), in a way that respects property rights. From each of these questions arise solutions that, far from settling the discussions on the matter, seek to open the debate on a topic of such capital importance.
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
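The core of any finite field method is numerical differentiation of the energy with respect to the applied field. The toy sketch below (coefficients mu, alpha, beta, gamma are arbitrary illustrative values; the paper's actual method concerns vibrational contributions, which this polynomial model ignores) recovers the first hyperpolarizability beta = -d³E/dF³ from four field-dependent energies using the standard central-difference stencil:

```python
# Model static-field energy expansion:
#   E(F) = -mu*F - (alpha/2)*F**2 - (beta/6)*F**3 - (gamma/24)*F**4
# with arbitrarily chosen response coefficients (purely illustrative):
mu, alpha, beta, gamma = 0.5, 2.0, 6.0, 24.0

def energy(F):
    return -mu*F - 0.5*alpha*F**2 - beta*F**3/6.0 - gamma*F**4/24.0

# beta = -d^3 E / dF^3 at F = 0, from the central third-derivative stencil
h = 1e-2
d3 = (energy(2*h) - 2*energy(h) + 2*energy(-h) - energy(-2*h)) / (2*h**3)
beta_ff = -d3
print(beta_ff)  # recovers beta = 6.0 (exact here, since the model has no F^5 term)
```

The antisymmetric stencil cancels the even-order terms exactly, so for this quartic model the only error is floating-point rounding; for a real electronic-structure energy one would instead balance truncation against numerical noise when choosing h.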
Abstract:
The dynamics of phytoplankton in the coastal lagoons of Aiguamolls de l'Empordà have been studied. The phytoplankton is mainly subject to bottom-up control: hydrological variability and nutrient availability have a greater influence on the composition and size distribution of the phytoplankton than zooplankton does. The concentration of dissolved organic matter is the environmental factor most strongly correlated with the growth of phytoplankton biomass. Given the proximity of the coastal lagoons to the sea, where Harmful Algal Blooms (HABs) are increasingly frequent, a general inventory of the most abundant phytoplankton species was compiled and extensive toxicity analyses were carried out. Most of the dinoflagellate species found are potentially harmful. Few species are shared between the sea and the lagoons; nevertheless, there are HAB-producing species characteristic of lagoon environments.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example will be shown of results obtained from this method using data obtained from a run of the Universities Global Atmospheric Modelling Project GCM.
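The trajectory-linking step can be caricatured with greedy nearest-neighbour matching of feature points between consecutive time levels. This is a minimal sketch of my own (the paper's dynamic scene analysis technique is more sophisticated, e.g. in how it handles ambiguous matches), but it conveys the idea of turning per-time-level feature points into trajectories:

```python
import numpy as np

def link_trajectories(frames, max_disp=2.0):
    """Greedy nearest-neighbour linking of feature points across time levels.

    frames: list of (N_t, 2) arrays of feature-point coordinates,
    one array per time level.
    """
    trajs = [[tuple(p)] for p in frames[0]]
    active = list(range(len(trajs)))       # trajectory index of each live track
    for pts in frames[1:]:
        taken, still_active = set(), []
        for ti in active:
            last = np.asarray(trajs[ti][-1])
            d = np.linalg.norm(pts - last, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_disp and j not in taken:
                trajs[ti].append(tuple(pts[j]))
                taken.add(j)
                still_active.append(ti)
        for j, p in enumerate(pts):        # unmatched points start new tracks
            if j not in taken:
                trajs.append([tuple(p)])
                still_active.append(len(trajs) - 1)
        active = still_active
    return trajs

# Two synthetic features drifting eastward at different speeds
frames = [np.array([[0.0, 0.0], [10.0, 0.0]]),
          np.array([[1.0, 0.1], [11.5, 0.0]]),
          np.array([[2.0, 0.2], [13.0, 0.1]])]
trajs = link_trajectories(frames)
for t in trajs:
    print(t)
```

The `max_disp` threshold plays the role of a physical constraint: a feature cannot plausibly move further than that between time levels, so distant matches are rejected and the track is terminated instead.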
Abstract:
A number of recent experiments suggest that, at a given wetting speed, the dynamic contact angle formed by an advancing liquid-gas interface with a solid substrate depends on the flow field and geometry near the moving contact line. In the present work, this effect is investigated in the framework of an earlier developed theory that was based on the fact that dynamic wetting is, by its very name, a process of formation of a new liquid-solid interface (newly “wetted” solid surface) and hence should be considered not as a singular problem but as a particular case from a general class of flows with forming or/and disappearing interfaces. The results demonstrate that, in the flow configuration of curtain coating, where a liquid sheet (“curtain”) impinges onto a moving solid substrate, the actual dynamic contact angle indeed depends not only on the wetting speed and material constants of the contacting media, as in the so-called slip models, but also on the inlet velocity of the curtain, its height, and the angle between the falling curtain and the solid surface. In other words, for the same wetting speed the dynamic contact angle can be varied by manipulating the flow field and geometry near the moving contact line. The obtained results have important experimental implications: given that the dynamic contact angle is determined by the values of the surface tensions at the contact line and hence depends on the distributions of the surface parameters along the interfaces, which can be influenced by the flow field, one can use the overall flow conditions and the contact angle as a macroscopic multiparametric signal-response pair that probes the dynamics of the liquid-solid interface. 
This approach would allow one to investigate experimentally such properties of the interface as, for example, its equation of state and the rheological properties involved in the interface’s response to an external torque, and would help to measure its parameters, such as the coefficient of sliding friction, the surface-tension relaxation time, and so on.
Abstract:
The commonly held view of the conditions in the North Atlantic at the last glacial maximum, based on the interpretation of proxy records, is of large-scale cooling compared to today, limited deep convection, and extensive sea ice, all associated with a southward displaced and weakened overturning thermohaline circulation (THC) in the North Atlantic. Not all studies support that view; in particular, the "strength of the overturning circulation" is contentious and is a quantity that is difficult to determine even for the present day. Quasi-equilibrium simulations with coupled climate models forced by glacial boundary conditions have produced differing results, as have inferences made from proxy records. Most studies suggest the weaker circulation, some suggest little or no change, and a few suggest a stronger circulation. Here results are presented from a three-dimensional climate model, the Hadley Centre Coupled Model version 3 (HadCM3), of the coupled atmosphere - ocean - sea ice system suggesting, in a qualitative sense, that these diverging views could all have occurred at different times during the last glacial period, with different modes existing at different times. One mode might have been characterized by an active THC associated with moderate temperatures in the North Atlantic and a modest expanse of sea ice. The other mode, perhaps forced by large inputs of meltwater from the continental ice sheets into the northern North Atlantic, might have been characterized by a sluggish THC associated with very cold conditions around the North Atlantic and a large areal cover of sea ice. The authors' model simulation of such a mode, forced by a large input of freshwater, bears several of the characteristics of the Climate: Long-range Investigation, Mapping, and Prediction (CLIMAP) Project's reconstruction of glacial sea surface temperature and sea ice extent.
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
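The feedback-linearization idea can be made concrete on a scalar system: for a plant xdot = f(x) + g(x)u with g(x) ≠ 0, the control u = (v - f(x)) / g(x) cancels the nonlinearity exactly, leaving the linear dynamics xdot = v. The sketch below uses a toy plant of my own (not the paper's mixing process, and with f and g assumed known rather than approximated by a dynamic neural network):

```python
import numpy as np

# Toy nonlinear plant: xdot = f(x) + g(x)*u, with g(x) > 0 everywhere
f = lambda x: -np.sin(x) + x**2
g = lambda x: 1.0 + 0.1 * x**2

def control(x, x_ref, k=5.0):
    """Feedback linearization: pick u so the closed loop obeys xdot = -k*(x - x_ref)."""
    v = -k * (x - x_ref)
    return (v - f(x)) / g(x)

# Euler simulation: the state decays exponentially toward the reference
dt, x, x_ref = 1e-3, 2.0, 0.5
for _ in range(10000):
    u = control(x, x_ref)
    x += dt * (f(x) + g(x) * u)
print(x)  # converges to x_ref = 0.5
```

In the paper's scheme the exact f and g are unavailable, so the trained dynamic neural network plays the role of the plant model inside the linearizing transformation; model mismatch then determines how linear the compensated system really is.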
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection, and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow one to define such properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
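The Lorenz 96 test bed used here is compact enough to state in full: dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F with cyclic indices, combining advection (the quadratic term), dissipation (-X_k), and constant forcing F. A minimal RK4 integration sketch (N = 40, F = 8, and the step size are standard illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def l96_rhs(x, F=8.0):
    """Lorenz 96 tendency: advection, linear dissipation, constant forcing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    k1 = l96_rhs(x, F)
    k2 = l96_rhs(x + 0.5*dt*k1, F)
    k3 = l96_rhs(x + 0.5*dt*k2, F)
    k4 = l96_rhs(x + dt*k3, F)
    return x + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

N = 40
x = 8.0 * np.ones(N)
x[0] += 0.01                      # small perturbation to trigger chaos
for _ in range(5000):             # spin up onto the attractor
    x = rk4_step(x)
energy = 0.5 * np.mean(x**2)      # an "average energy" type observable
print(energy)
```

Observables of exactly this kind, averaged over long runs or ensembles rather than read off a single snapshot, are what enter the linear response and climate sensitivity computations discussed in the abstract.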
Abstract:
One of the most pervasive concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes, a more complex, multivariate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.