967 results for C68 - Computable General Equilibrium Models
Abstract:
How does openness affect economic development? This question is answered in the context of a dynamic general equilibrium model of the world economy, where countries have technological differences that are both sector-neutral and specific to the investment goods sector. Relative to a benchmark case of trade in credit markets only, two changes are considered: (i) a complete restriction of trade, and (ii) a full liberalization of trade. The first change decreases the cross-sectional dispersion of incomes only slightly, and produces a relatively small welfare loss. The second change, instead, decreases dispersion by a significant amount, and produces a very large welfare gain.
Abstract:
This paper constructs and estimates a sticky-price, Dynamic Stochastic General Equilibrium model with heterogeneous production sectors. Sectors differ in price stickiness, capital-adjustment costs and production technology, and use output from each other as material and investment inputs following an Input-Output Matrix and Capital Flow Table that represent the U.S. economy. By relaxing the standard assumption of symmetry, this model allows different sectoral dynamics in response to monetary policy shocks. The model is estimated by Simulated Method of Moments using sectoral and aggregate U.S. time series. Results indicate 1) substantial heterogeneity in price stickiness across sectors, with quantitatively larger differences between services and goods than previously found in micro studies that focus on final goods alone, 2) a strong sensitivity to monetary policy shocks on the part of construction and durable manufacturing, and 3) similar quantitative predictions at the aggregate level by the multi-sector model and a standard model that assumes symmetry across sectors.
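The Simulated Method of Moments estimation mentioned above can be sketched on a toy model. The block below is a minimal illustration, not the paper's estimator: it fits the persistence parameter of a hypothetical AR(1) process by matching simulated moments to data moments under an identity weighting matrix, with a grid search standing in for a proper optimizer.

```python
# Minimal Simulated Method of Moments (SMM) sketch on a toy AR(1) model;
# a hypothetical stand-in for estimating a DSGE model by moment matching.
import numpy as np

def simulate_ar1(rho, sigma, T, seed):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + eps[t]
    return y

def moments(y):
    # moments to match: variance and first-order autocovariance
    return np.array([np.var(y), np.cov(y[:-1], y[1:])[0, 1]])

def smm_objective(rho, data_moments, T=5000, n_sims=5):
    # average moments over several simulated paths, compare to the data
    sim = np.mean([moments(simulate_ar1(rho, 1.0, T, seed=s))
                   for s in range(n_sims)], axis=0)
    diff = sim - data_moments
    return diff @ diff  # identity weighting matrix for simplicity

m_data = moments(simulate_ar1(0.8, 1.0, 5000, seed=123))  # "data" moments

grid = np.linspace(0.5, 0.95, 46)  # grid search stands in for an optimizer
rho_hat = grid[np.argmin([smm_objective(r, m_data) for r in grid])]
print(f"estimated rho = {rho_hat:.2f}")
```

A real application would weight the moments by an estimate of their covariance matrix rather than the identity.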
Abstract:
The aim of this paper is to demonstrate that, even if Marx's solution to the transformation problem can be modified, his basic conclusions remain valid. The alternative solution proposed here is based on the constraint of a common general profit rate in both spaces and a money wage level which is determined simultaneously with prices.
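The constraint of a common general profit rate can be illustrated with a standard prices-of-production computation. The block below is a Sraffa-type eigenvalue sketch with made-up numbers, not the author's solution (which also determines the money wage level simultaneously): with input matrix A, labor vector l and a wage basket b advanced to workers, prices satisfy p = (1 + r) p M for M = A + b lᵀ, so 1/(1 + r) is the dominant eigenvalue of M.

```python
# Toy prices-of-production computation with a uniform profit rate r.
# Numbers are hypothetical; this illustrates the constraint, not the
# paper's specific transformation-problem solution.
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])       # A[i, j]: good i used per unit of good j
l = np.array([0.5, 0.6])         # labor per unit output
b = np.array([0.4, 0.2])         # wage basket: goods per unit of labor

M = A + np.outer(b, l)           # inputs including workers' consumption
eigvals, eigvecs = np.linalg.eig(M.T)   # left eigenvectors of M
k = np.argmax(eigvals.real)
lam = eigvals.real[k]
r = 1.0 / lam - 1.0              # uniform (general) profit rate
p = np.abs(eigvecs[:, k].real)   # Perron eigenvector: relative prices
p = p / p[0]                     # normalize price of good 1 to 1

# check: prices reproduce themselves at the common profit rate
assert np.allclose(p, (1 + r) * p @ M)
print(f"uniform profit rate r = {r:.3f}")
```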
Abstract:
In metal-contaminated environments, living organisms are exposed to several metals at the same time. The current models for predicting the biological effects of metals on organisms (e.g., the biotic ligand model, BLM; the free-ion activity model, FIAM) are chemical equilibrium models which predict that, in the presence of a second metal, bioaccumulation of the metal of interest decreases and its effects are consequently attenuated. Toxicity biomarkers, such as phytochelatins (PCs), have been used as an alternative means of assessing biological effects. Phytochelatins are cysteine-rich polypeptides with the general structure (γ-Glu-Cys)n-Gly, where n ranges from 2 to 11. Their synthesis appears to depend on the concentration of metal ions as well as on the duration of the organism's exposure to the metals. The objective of this study was therefore to determine whether, in binary metal mixtures, phytochelatin synthesis can be predicted by chemical equilibrium models such as the BLM. To this end, the quantity of phytochelatins produced in response to exposure to the binary mixtures Cd-Ca, Cd-Cu and Cd-Pb was measured while the direct effect of competition was monitored through the concentrations of internalized metals. After six hours of exposure, Cd bioaccumulation decreased in the presence of Ca and of very high concentrations of Pb and Cu (on the order of 5×10-6 M). In contrast, at moderate concentrations of these two metals, internalized Cd increased in the presence of Cu and did not appear to be affected by the presence of Pb. In the case of Cd-Cu competition, a good correlation was observed between the production of PC2, PC3 and PC4 and the quantity of bioaccumulated metals. For phytochelatin synthesis and bioaccumulation, the effects were considered synergistic.
In the case of Cd-Ca, the quantities of PC3 and PC4 decreased with internalized metal (an antagonistic effect), but what was remarkable was the large quantity of cysteine (GSH) and PC2 produced at high Ca concentrations. Pb alone did not induce PCs; consequently, the quantity of PCs did not vary with the Pb concentration to which the algae were exposed. PCs were detected and quantified by high-performance liquid chromatography coupled with a fluorescence detector (HPLC-FL), while intracellular metal concentrations were analyzed by atomic absorption spectroscopy (AAS) or by inductively coupled plasma mass spectrometry (ICP-MS).
Abstract:
In this thesis, I study the effects of oil price fluctuations on macroeconomic activity according to the underlying cause of those fluctuations. The economic models used in this thesis are mainly Dynamic Stochastic General Equilibrium (DSGE) models and Vector Autoregression (VAR) models. Many studies have examined the effects of oil price fluctuations on the main macroeconomic variables, but very few of them have explicitly linked the effects of oil price fluctuations to the origin of those fluctuations. Yet it is widely accepted in the more recent literature that oil price increases can have very different effects depending on the underlying cause of the increase. My thesis, organized in three chapters, pays particular attention to the sources of oil price fluctuations and their impacts on macroeconomic activity in general, and on the Canadian economy in particular. The first chapter examines how oil supply shocks, aggregate demand shocks, and precautionary oil demand shocks affect the Canadian economy, within an estimated Dynamic Stochastic General Equilibrium model. The estimation is carried out by Bayesian methods, using quarterly Canadian data over the period 1983Q1 to 2010Q4. The results show that the dynamic effects of oil price fluctuations on the main Canadian macroeconomic aggregates vary according to their sources. In particular, a 10% increase in the real price of oil driven by positive shocks to foreign aggregate demand has a significant positive effect of about 0.4% on Canadian real GDP on impact, and the effect remains positive at all horizons.
By contrast, an increase in the real price of oil caused by negative oil supply shocks or by positive precautionary oil demand shocks has a negligible effect on Canadian real GDP on impact, but causes a slight, marginally significant decline thereafter. Moreover, among the identified oil shocks, foreign aggregate demand shocks have been relatively more important in explaining the fluctuations of the main Canadian macroeconomic aggregates over the estimation period. The second chapter uses a panel structural VAR model to examine the links between oil demand and supply shocks and the adjustment of labour demand and wages in Canadian manufacturing industries. The model is estimated on annual data disaggregated at the industry level over the period 1975 to 2008. The main results suggest that a positive aggregate demand shock has a positive effect on labour demand and wages, in both the short run and the long run. A negative oil supply shock has a relatively small negative effect on impact, but the effect turns positive after the first year. By contrast, a positive precautionary oil demand shock has a negative impact at all horizons. Industry-by-industry estimates confirm the panel results. The chapter also examines how the effects of the different oil shocks on labour demand and wages vary with the degree of trade exposure and the energy intensity of production. It turns out that industries highly exposed to international trade and highly energy-intensive industries are more vulnerable to oil price fluctuations caused by oil supply shocks or aggregate demand shocks.
The final chapter examines the social welfare implications of introducing oil inventories into the world market, using a three-country DSGE model with two oil-importing countries and one oil-exporting country. Welfare gains are measured by the compensating variation in consumption under two monetary policy rules. The main results show that introducing oil inventories reduces consumer welfare in each of the two oil-importing countries, while it raises consumer welfare in the oil-exporting country, regardless of the monetary policy rule. Moreover, including exchange rate depreciation in the monetary policy rules reduces the social costs for the oil-importing countries. Finally, the magnitude of the welfare effects depends on the steady-state level of oil inventories and is mainly explained by shocks to oil inventories.
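A compensating variation in consumption of the kind used to measure welfare gains can be sketched as follows. The block assumes log period utility and a hypothetical uniform 2% consumption differential between regimes; it is an illustration of the metric, not the thesis's model.

```python
# Consumption-equivalent welfare sketch: the fraction lam by which
# baseline consumption must rise in every period so that lifetime
# welfare matches the alternative regime. Log utility assumed.
import math

beta = 0.99                       # discount factor (assumed)
T = 200                           # horizon in periods

def lifetime_welfare(consumption):
    # discounted sum of log period utilities
    return sum(beta ** t * math.log(c) for t, c in enumerate(consumption))

base = [1.0] * T
alt  = [1.02] * T                 # alternative regime: 2% higher consumption

dW = lifetime_welfare(alt) - lifetime_welfare(base)
S = (1 - beta ** T) / (1 - beta)  # discounted number of periods
lam = math.exp(dW / S) - 1        # compensating variation in consumption
print(f"consumption-equivalent gain: {lam:.4f}")
```

With a uniform 2% differential the measure recovers exactly 2%, which is a useful sanity check on the formula.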
Abstract:
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
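The forward-backward recursions that the paper recasts as special cases of general PIN inference can be sketched for a small HMM; the parameters below are made up for illustration.

```python
# Forward-backward algorithm for a two-state HMM with toy parameters:
# computes the observation likelihood and posterior state probabilities.
import numpy as np

pi = np.array([0.6, 0.4])              # initial state distribution
A  = np.array([[0.7, 0.3],             # A[i, j] = P(state j | state i)
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],             # B[i, k] = P(obs k | state i)
               [0.2, 0.8]])
obs = [0, 1, 0]

T, N = len(obs), len(pi)
alpha = np.zeros((T, N))               # forward: P(o_1..t, state_t)
beta  = np.zeros((T, N))               # backward: P(o_{t+1}..T | state_t)

alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta[T - 1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

likelihood = alpha[T - 1].sum()        # P(o_1..T)
gamma = alpha * beta / likelihood      # posterior P(state_t | o_1..T)
print(likelihood, gamma[0])
```

In the PIN view, these recursions are message passing on the chain-structured independence graph of the HMM.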
Abstract:
This dissertation studies the transmission mechanisms that link the behaviour of agents and firms to the asymmetries present in business cycles. To this end, three DSGE models are built. In the first chapter, the assumption of a symmetric quadratic investment adjustment cost function is removed, and the canonical RBC model is reformulated under the assumption that disinvesting one unit of physical capital is more costly than investing it. The second chapter presents the main contribution of this dissertation: the construction of a general utility function that nests loss aversion, risk aversion and habit formation by means of a smooth transition function. The rationale is that individuals are loss averse in recessions and risk averse in booms. In the third chapter, business cycle asymmetries are analyzed together with asymmetric price and wage adjustment in a New Keynesian setting, in order to find a theoretical explanation for the well-documented asymmetry present in the Phillips Curve.
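The idea of nesting loss aversion and risk aversion through a smooth transition function can be sketched as follows. The functional forms, parameters and treatment of the habit stock below are illustrative assumptions, not the dissertation's actual specification.

```python
# Illustrative smooth-transition utility over the consumption gap
# x = c - habit: a kinked, loss-averse component dominates in recessions
# (x < 0) and a smooth, risk-averse component dominates in booms (x > 0),
# blended by a logistic weight. All forms and parameters are assumptions.
import math

def v_gain(x, sigma=2.0):
    # smooth, risk-averse valuation of the gap (CRRA-style curvature)
    return ((1 + x) ** (1 - sigma) - 1) / (1 - sigma)

def v_loss(x, lam=2.25):
    # kinked, loss-averse valuation: losses weighted lam times gains
    return x if x >= 0 else lam * x

def weight(x, gamma=10.0):
    # logistic transition: -> 0 in recessions, -> 1 in booms
    return 1.0 / (1.0 + math.exp(-gamma * x))

def utility(c, habit=1.0):
    x = c - habit                    # consumption gap relative to habit
    w = weight(x)
    return (1 - w) * v_loss(x) + w * v_gain(x)

# a loss relative to habit hurts more than an equal-sized gain helps
print(utility(0.9), utility(1.1))
```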
Abstract:
This paper briefly presents the origins of the mathematization of economics and the field of mathematical economics. An initial historical overview divides the field into a first, marginalist period; a second period in which set theory and linear models were employed; and, finally, a period that integrates the two. It then analyzes the evolution of General Equilibrium Theory from Quesnay, through Walras and later developments, up to its culmination in the work of Arrow, Debreu and their contemporaries. Finally, it describes the influence of mathematics, in particular dynamic optimization, on macroeconomic theory and on other areas of economics.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method using data from a run of the Universities Global Atmospheric Modelling Project GCM.
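The trajectory-building step can be sketched with a simple nearest-neighbour linker; the paper's dynamic scene analysis technique is richer than this, and the feature points below are made up.

```python
# Minimal sketch of linking feature points at successive time levels
# into trajectories by nearest-neighbour matching within a search radius.
import math

def link_trajectories(frames, max_dist=2.0):
    """frames: list of lists of (x, y) feature points, one list per time level."""
    trajectories = [[p] for p in frames[0]]
    for points in frames[1:]:
        unused = list(points)
        for traj in trajectories:
            last = traj[-1]
            best, best_d = None, max_dist
            for p in unused:
                d = math.hypot(p[0] - last[0], p[1] - last[1])
                if d < best_d:
                    best, best_d = p, d
            if best is not None:
                traj.append(best)
                unused.remove(best)
        # points not matched to any existing trajectory start new ones
        trajectories.extend([p] for p in unused)
    return trajectories

# two features drifting eastward, a third appearing at the second level
frames = [[(0.0, 0.0), (10.0, 5.0)],
          [(1.0, 0.1), (11.0, 5.1), (20.0, 0.0)],
          [(2.0, 0.2), (12.0, 5.2), (21.0, 0.1)]]
tracks = link_trajectories(frames)
print([len(t) for t in tracks])
```

A production tracker would also handle feature disappearance and use motion prediction rather than pure proximity.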
Abstract:
Current global atmospheric models fail to simulate well organised tropical phenomena in which convection interacts with dynamics and physics. A new methodology to identify convectively coupled equatorial waves, developed by NCAS-Climate, has been applied to output from the two latest models of the Met Office/Hadley Centre, which have fundamental differences in dynamical formulation. Variability, horizontal and vertical structures, and propagation characteristics of tropical convection and equatorial waves, along with their coupled behaviour in the models, are examined and evaluated against a previous comprehensive study of observations. It is shown that, in general, the models perform well for equatorial waves coupled with off-equatorial convection. However, they perform poorly for waves coupled with equatorial convection. The vertical structure of the simulated wave is not conducive to energy conversion/growth and does not support the correct physical-dynamical coupling that occurs in the real world. The following figure shows an example of the Kelvin wave coupled with equatorial convection. It shows that the models fail to simulate a key feature of the convectively coupled Kelvin wave in observations, namely near-surface anomalous equatorial zonal winds together with intensified equatorial convection and westerly winds in phase with the convection. The models are also unable to capture the observed vertical tilt structure and the vertical propagation of the Kelvin wave into the lower stratosphere, as well as the secondary peak in the mid-troposphere, particularly in HadAM3. These results can be used to provide a test-bed for experimentation to improve the coupling of physics and dynamics in climate and weather models.
Abstract:
A time series of the observed transport through an array of moorings across the Mozambique Channel is compared with that of six model runs with ocean general circulation models. In the observations, the seasonal cycle cannot be distinguished from red noise, while this cycle is dominant in the transport of the numerical models. It is found, however, that the seasonal cycles of the observations and numerical models are similar in strength and phase. These cycles have an amplitude of 5 Sv and a maximum in September, and can be explained by the yearly variation of the wind forcing. The seasonal cycle in the models is dominant because the spectral density at other frequencies is underrepresented. Main deviations from the observations are found at depths shallower than 1500 m and in the 5/y–6/y frequency range. Nevertheless, the structure of eddies in the models is close to the observed eddy structure. The discrepancy is found to be related to the formation mechanism and the formation position of the eddies. In the observations, eddies are frequently formed from an overshooting current near the mooring section, as proposed by Ridderinkhof and de Ruijter (2003) and Harlander et al. (2009). This causes an alternation of events at the mooring section, varying between a strong southward current, and the formation and passing of an eddy. This results in a large variation of transport in the frequency range of 5/y–6/y. In the models, the eddies are formed further north and propagate through the section. No alternation similar to that in the observations is found, resulting in a more constant transport.
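A seasonal cycle like the one quoted above (5 Sv amplitude, September maximum) is the kind of quantity recoverable from a least-squares fit of an annual harmonic. The block below is a sketch on a synthetic monthly transport series, not the mooring data.

```python
# Sketch of extracting a seasonal cycle from a transport time series:
# least-squares fit of mean + annual harmonic to synthetic monthly data.
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(120)                       # ten years of monthly means
phase_max = 8                                 # maximum in September (index 8)
cycle = 5.0 * np.cos(2 * np.pi * (months - phase_max) / 12)   # 5 Sv amplitude
transport = -16.7 + cycle + 1.0 * rng.normal(size=120)        # mean flow + noise

# design matrix: constant term plus annual cosine and sine
X = np.column_stack([np.ones(len(months)),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
coef, *_ = np.linalg.lstsq(X, transport, rcond=None)
amplitude = np.hypot(coef[1], coef[2])        # amplitude of the annual cycle
print(f"seasonal amplitude = {amplitude:.1f} Sv")
```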
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method chlorofluorocarbon-11 and -12 (CFC-11 and -12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model with 4/3 degrees resolution. Systematic differences can be seen between the observed CFC concentrations and prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (sigma (0) < 27.82 kg m(-3); shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was near 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value.
The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (Sv = 10(6) m(3) s(-1)) for subtropical mode water and subpolar mode water, respectively.
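A Green's function inversion of this kind can be sketched as a linear least-squares problem: tracer observations are modelled as the prior simulation plus a linear combination of responses to basis flux perturbations. The block below uses made-up matrices and noise levels, not the MICOM configuration or real CFC data.

```python
# Sketch of a Green's function inversion: y_obs ≈ y_prior + G @ x,
# where G holds the model's tracer response to basis surface-flux
# perturbations; solve for the optimal adjustments x by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_basis = 50, 3

G = rng.normal(size=(n_obs, n_basis))   # response of each obs to each basis flux
x_true = np.array([0.5, -0.2, 0.1])     # "true" flux adjustments (synthetic)
y_prior = rng.normal(size=n_obs)        # prior simulated field at obs points
noise = 0.01 * rng.normal(size=n_obs)   # observational uncertainty
y_obs = y_prior + G @ x_true + noise

# optimal adjustment: least squares on the model/data misfit
x_hat, *_ = np.linalg.lstsq(G, y_obs - y_prior, rcond=None)
residual = y_obs - (y_prior + G @ x_hat)
print(x_hat, np.linalg.norm(residual))
```

In practice the misfit would be weighted by the observational error covariance, which is how the stated uncertainty accounting enters.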
Abstract:
Three simple climate models (SCMs) are calibrated using simulations from atmosphere ocean general circulation models (AOGCMs). In addition to using two conventional SCMs, results from a third simpler model developed specifically for this study are obtained. An easy to implement and comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content, and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse forcing simulations performed with 2 AOGCMs and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well. For all possible combinations of SCMs and AOGCMs the emulation of AOGCM results could be improved. The obtained SCM parameters depend both on the AOGCM data and the type of forcing scenario. SCMs with a poor representation of the atmosphere thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs strongly depends on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that are dependent on the forcing mechanism in the SCMs allows the emulation of AOGCM responses to carbon dioxide and solar insolation forcings equally well. 
Also, some SCM parameters are found to be very insensitive to the fitting, and the reduction of their uncertainty through the fitting procedure is only marginal, while other parameters change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two comparably more sophisticated SCMs.
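Reduced to its core, the calibration procedure amounts to tuning the parameters of a simple model so that its temperature response emulates AOGCM output. The block below is a sketch with a one-box energy balance model and synthetic "AOGCM" data, with a grid search standing in for the paper's iterative optimisation.

```python
# Sketch of SCM calibration: fit the feedback parameter lam and heat
# capacity C of a one-box energy balance model, C dT/dt = F - lam * T,
# to a synthetic target series standing in for an AOGCM simulation.
import numpy as np

def ebm(forcing, lam, C, dt=1.0):
    T = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        T[t] = T[t - 1] + dt / C * (forcing[t - 1] - lam * T[t - 1])
    return T

years = 100
forcing = np.linspace(0.0, 4.0, years)        # gradually increasing forcing (W m-2)

rng = np.random.default_rng(1)
target = ebm(forcing, lam=1.2, C=8.0) + 0.02 * rng.normal(size=years)

# calibrate lam and C by grid search on the squared emulation error
lams = np.linspace(0.6, 2.0, 29)
Cs = np.linspace(4.0, 14.0, 21)
err = [[np.sum((ebm(forcing, l, c) - target) ** 2) for c in Cs] for l in lams]
i, j = np.unravel_index(np.argmin(err), (len(lams), len(Cs)))
print(f"best fit: lam = {lams[i]:.2f}, C = {Cs[j]:.1f}")
```

Uncertainty intervals like those in the paper would come from mapping out the region of parameter space where the misfit stays close to its minimum.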
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation in numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method to determine such nonconservative terms, which is illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equation used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified; they arise from: (1) the “hanging” kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but they are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of the oceans is difficult to ascertain, an important issue is whether they could impact on transient simulations, and on the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. 
Some possible solutions for improvement are examined. It is thus found that the term (2) can be substantially reduced by at least one order of magnitude by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought of as a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce the term (4) with no impact on the terms (1), (2) and (3).
Abstract:
Several studies using ocean–atmosphere general circulation models (GCMs) suggest that the atmospheric component plays a dominant role in the modelled El Niño-Southern Oscillation (ENSO). To help elucidate these findings, the two main atmosphere feedbacks relevant to ENSO, the Bjerknes positive feedback (μ) and the heat flux negative feedback (α), are here analysed in nine AMIP runs of the CMIP3 multimodel dataset. We find that these models generally have improved feedbacks compared to the coupled runs which were analysed in part I of this study. The Bjerknes feedback, μ, is increased in most AMIP runs compared to the coupled run counterparts, and exhibits both positive and negative biases with respect to ERA40. As in the coupled runs, the shortwave and latent heat flux feedbacks are the two dominant components of α in the AMIP runs. We investigate the mechanisms behind these two important feedbacks, in particular focusing on the strong 1997–1998 El Niño. Biases in the shortwave flux feedback, αSW, are the main source of model uncertainty in α. Most models do not successfully represent the negative αSW in the East Pacific, primarily due to an overly strong low-cloud positive feedback in the far eastern Pacific. Biases in the cloud response to dynamical changes dominate the modelled αSW biases, though errors in the large-scale circulation response to sea surface temperature (SST) forcing also play a role. Analysis of the cloud radiative forcing in the East Pacific reveals model biases in low cloud amount and optical thickness which may affect αSW. We further show that the negative latent heat flux feedback, αLH, exhibits less diversity than αSW and is primarily driven by variations in the near-surface specific humidity difference. However, biases in both the near-surface wind speed and humidity response to SST forcing can explain the inter-model αLH differences.
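A heat flux feedback of the α type is commonly estimated as a regression slope of net surface heat flux anomalies on SST anomalies. The block below is a toy sketch with synthetic data standing in for Niño-region area averages; the value of α and the noise level are assumptions.

```python
# Toy estimate of a heat flux damping feedback alpha: regress synthetic
# net surface heat flux anomalies Q' on SST anomalies T'.
import numpy as np

rng = np.random.default_rng(7)
sst_anom = rng.normal(0.0, 1.0, 240)                 # monthly SST anomalies (K)
alpha_true = -15.0                                   # assumed damping (W m-2 K-1)
flux_anom = alpha_true * sst_anom + 3.0 * rng.normal(size=240)

alpha_hat = np.polyfit(sst_anom, flux_anom, 1)[0]    # regression slope
print(f"estimated alpha = {alpha_hat:.1f} W m-2 K-1")
```

The negative slope expresses damping: anomalously warm SSTs lose heat to the atmosphere, opposing ENSO growth.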