967 results for C68 - Computable General Equilibrium Models
Abstract:
We analyse the ability of CMIP3 and CMIP5 coupled ocean–atmosphere general circulation models (CGCMs) to simulate the tropical Pacific mean state and El Niño-Southern Oscillation (ENSO). The CMIP5 multi-model ensemble displays an encouraging 30 % reduction of the pervasive cold bias in the western Pacific, but no quantum leap in ENSO performance compared to CMIP3. CMIP3 and CMIP5 can thus be considered as one large ensemble (CMIP3 + CMIP5) for multi-model ENSO analysis. The overly large diversity of ENSO amplitude in CMIP3 is, however, reduced by a factor of two in CMIP5, and the ENSO life cycle (location of surface temperature anomalies, seasonal phase locking) is modestly improved. Other fundamental ENSO characteristics, such as central Pacific precipitation anomalies, nevertheless remain poorly represented. The sea surface temperature (SST)-latent heat flux feedback is slightly improved in the CMIP5 ensemble, but the wind-SST feedback is still underestimated by 20–50 % and the shortwave-SST feedback remains underestimated by a factor of two. The improvement in ENSO amplitude might therefore result from error compensations. The ability of CMIP models to simulate the SST-shortwave feedback, a major source of erroneous ENSO in CGCMs, is further detailed. In observations, this feedback is strongly nonlinear because the real atmosphere switches from subsident (positive feedback) to convective (negative feedback) regimes under the effect of seasonal and interannual variations. Only one-third of the CMIP3 + CMIP5 models reproduce this regime shift, with the other models remaining locked in one of the two regimes. The modelled shortwave feedback nonlinearity increases with ENSO amplitude, and the amplitude of this feedback in spring relates strongly to the models' ability to simulate ENSO phase locking. Finally, a subset of metrics is proposed to synthesize the ability of each CMIP3 and CMIP5 model to simulate the main ENSO characteristics and key atmospheric feedbacks.
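The atmospheric feedbacks discussed above are commonly quantified as regression slopes of flux or wind-stress anomalies onto SST anomalies in a reference region. A minimal sketch of such a diagnostic follows; the variable names, the synthetic data, and the use of numpy are illustrative assumptions, not the metric definition used in the paper.

```python
import numpy as np

def feedback_slope(sst_anom, flux_anom):
    """Regression slope of a flux anomaly onto an SST anomaly (W m-2 per K).

    Both inputs are 1-D monthly anomaly time series averaged over a
    reference region (e.g. Nino-3); a negative slope for the shortwave
    flux indicates a damping (negative) feedback.
    """
    slope, _intercept = np.polyfit(sst_anom, flux_anom, 1)
    return slope

# Hypothetical example: 50 years of monthly anomalies.
rng = np.random.default_rng(0)
sst = rng.standard_normal(600)
sw = -8.0 * sst + rng.standard_normal(600) * 15.0  # assumed -8 W m-2/K feedback
print(f"estimated SST-shortwave feedback: {feedback_slope(sst, sw):.1f} W m-2 K-1")
```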
Abstract:
We investigate how sea surface temperatures (SSTs) around Antarctica respond to the Southern Annular Mode (SAM) on multiple timescales. To that end we examine the relationship between SAM and SST within unperturbed preindustrial control simulations of coupled general circulation models (GCMs) included in the Coupled Model Intercomparison Project phase 5 (CMIP5). We develop a technique to extract the response of the Southern Ocean SST (55°S–70°S) to a hypothetical step increase in the SAM index. We demonstrate that in many GCMs, the expected SST step response function is nonmonotonic in time. Following a shift to a positive SAM anomaly, an initial cooling regime can transition into surface warming around Antarctica. However, there are large differences across the CMIP5 ensemble. In some models the step response function never changes sign and cooling persists, while in other GCMs the SST anomaly crosses over from negative to positive values only three years after a step increase in the SAM. This intermodel diversity can be related to differences in the models' climatological thermal ocean stratification in the region of seasonal sea ice around Antarctica. Exploiting this relationship, we use observational data for the time-mean meridional and vertical temperature gradients to constrain the real Southern Ocean response to SAM on fast and slow timescales.
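A step-response function of the kind described here can be approximated from a control run by estimating the SST response to SAM at each lag and cumulating the lagged-regression coefficients (an impulse response integrated in time). The sketch below is a simplified illustration under those assumptions; the variable names and the cumulative-sum construction are not the exact method of the paper.

```python
import numpy as np

def step_response(sam, sst, max_lag_yr=20):
    """Cumulative lagged-regression estimate of the SST response (K) to a
    unit (one standard deviation) step increase in the SAM index, from
    annual-mean control-run time series."""
    sam = (sam - sam.mean()) / sam.std()
    sst = sst - sst.mean()
    impulse = []
    for lag in range(max_lag_yr + 1):
        x, y = sam[: len(sam) - lag], sst[lag:]
        impulse.append(np.polyfit(x, y, 1)[0])  # K per SAM std-dev at this lag
    return np.cumsum(impulse)  # step response = integrated impulse response

# Hypothetical control-run series (500 model years).
rng = np.random.default_rng(1)
sam_idx = rng.standard_normal(500)
sst_55_70S = -0.05 * sam_idx + rng.standard_normal(500) * 0.1
print(step_response(sam_idx, sst_55_70S)[:5])
```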
Abstract:
For the first time, we introduce a class of transformed symmetric models to extend the Box and Cox models to more general symmetric models. The new class of models includes all symmetric continuous distributions with a possible non-linear structure for the mean and enables the fitting of a wide range of models to several data types. The proposed methods offer more flexible alternatives to the Box-Cox or other existing procedures. We derive a very simple iterative process for fitting these models by maximum likelihood, whereas a direct unconditional maximization would be more difficult. We give simple formulae to estimate the parameter that indexes the transformation of the response variable and the moments of the original dependent variable, which generalize previously published results. We discuss inference on the model parameters. The usefulness of the new class of models is illustrated in an application to a real dataset.
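For reference, the Box-Cox transformation that the class generalizes, with a profile-likelihood estimate of the transformation parameter lambda, is sketched below under a normal working model with constant mean. This is a simplification of the symmetric family considered in the paper; the grid search and function names are assumptions.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform of a positive response vector."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    """Profile log-likelihood of lambda under a normal, intercept-only model
    (up to an additive constant)."""
    z = boxcox(y, lam)
    sigma2 = z.var()                        # MLE of the error variance
    n = len(y)
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.log(y).sum()

# Hypothetical positive responses; estimate lambda on a grid.
rng = np.random.default_rng(2)
y = np.exp(rng.normal(1.0, 0.4, size=200))   # log-normal data, true lambda near 0
grid = np.linspace(-2, 2, 81)
lam_hat = grid[np.argmax([profile_loglik(y, lam) for lam in grid])]
print(f"profile-likelihood estimate of lambda: {lam_hat:.2f}")
```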
Abstract:
We combine general equilibrium theory and théorie générale of stochastic processes to derive structural results about equilibrium state prices.
Abstract:
Economic theory deals with a complex reality, which may be seen through various perspectives, using different methods. Economics' three major branches – development economics, macroeconomics, and microeconomics – cannot be unified because the former two preferentially use a historical-deductive method, while the latter uses an essentially hypothetical-deductive or aprioristic one. Smith, Marx and Keynes essentially used the method of new historical facts, while Walras used an aprioristic one to devise the neoclassical general equilibrium model. The historical-deductive method looks for the new historical facts that condition economic reality. Economic theory remains central, but it is more modest, or less general, since the economist who principally adopts this method is content to analyze stabilization and growth within the framework of a given historical phase or moment of the economic process. As a trade-off, his models are more realistic and conducive to more effective economic policies, as long as he is not required to first abandon, one by one, the unrealistic assumptions required by an excessively general theory, but instead starts from more realistic ones.
Abstract:
This paper analyses the equilibrium structure of protection in Mercosul, developing empirical analyses based on the literature ensuing from the sequence of models set forth by Grossman and Helpman since 1994. Not only may Mercosul's common external tariff (CET) be explained from a political economy perspective, but the existence of deviations, both in the external tariffs and in the internal ones, makes it interesting to contrast several structures under this approach. Different general equilibrium frameworks, in which governments are concerned with campaign contributions and with the welfare of the average voter, while organized special-interest groups care only about the welfare of their members, are used as the theoretical basis of the empirical tests. We build a single equation for explaining the CET and two four-equation systems (one equation for each member) for explaining deviations from the CET and from internal free trade between members. The results (at the two-digit level) shed an interesting light on the sectoral dynamics of protection in each country; notably, Brazil seems to fit the model framework best, followed by Uruguay. In the case of the CET, and of deviations from it, the interaction between the domestic lobbies in the four countries plays a major role. There is also a suggestion that the lobby structure that bids for deviations, be they internal or external, differs from the one that bids for the CET.
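The single-equation specification is in the spirit of the Grossman-Helpman "protection for sale" condition. In its textbook form (reproduced here only as background; the paper's own systems extend it to a customs union), the equilibrium tariff on sector i satisfies

```latex
\frac{t_i}{1+t_i} \;=\; \frac{I_i-\alpha_L}{a+\alpha_L}\,\frac{z_i}{e_i},
```

where t_i is the ad valorem tariff, I_i indicates whether sector i is politically organized, alpha_L is the share of the population represented by lobbies, a is the government's weight on aggregate welfare, z_i is the ratio of domestic output to imports, and e_i is the import demand elasticity.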
Abstract:
This paper presents articles that address general and partial equilibrium models dealing with human capital accumulation, pension systems, labor supply and retirement. We analyze different mechanisms of human capital accumulation and their relation to productivity; the relationship between uncertainty and assumptions about credit markets; and the influence of these variables on individuals' retirement.
Abstract:
In this paper we summarize several articles dealing with dynamic general equilibrium with default costs. We focus on the models of Kehoe and Levine (1993) and of Alvarez and Jermann (2000). We also describe some adaptations of the Alvarez and Jermann model, such as the works of Hellwig and Lorenzoni (2009) and of Azariadis and Kaas (2008), and compare the results of these models with those of Huggett (1993), in which markets are exogenously incomplete. Finally, we expose a flaw in the computational algorithm suggested by Krueger and Perry (2010) for computing the stationary equilibria of economies such as those of Alvarez and Jermann (2000).
Abstract:
This thesis comprises three essays on the credit market and the institutions governing corporate bankruptcy. In chapter one, we present evidence that questions the idea that higher levels of creditor protection always promote credit market development. Since the publication of the seminal articles by La Porta et al. (1997, 1998), the creditor protection metric the authors proposed -- the creditor rights index -- has been widely used in the Law and Finance literature as an explanatory variable in reduced-form linear regression models to determine the correlation between creditor protection and credit market development. In this essay, we explore some problems with this approach. From a theoretical standpoint, it generally assumes a monotonic relationship between creditor protection and credit expansion. We present a theoretical model of a credit market with adverse selection in which an intermediate level of creditor protection is able to implement first-best equilibria. This result is in line with several other theoretical papers, in both general and partial equilibrium. From an empirical standpoint, we take advantage of the reforms carried out by some countries during the 1990s and 2000s to implement a strategy inspired by the treatment-effects literature and estimate the effect on market value and on debt of: i) granting automatic stay to firms in reorganization; and ii) giving creditors the right to remove managers. Our results point to a positive impact of automatic stay on all variables that depend on the firm's market value. We find no effect on debt, and no significant effects of the right to remove managers on either market value or debt. Chapter two evaluates the empirical consequences of a bankruptcy law reform on an underdeveloped credit market. In early 2005, the Brazilian National Congress passed a new bankruptcy law, Law 11.101/05. Using data on Brazilian and non-Brazilian firms, we estimate, with two different models, the effect of the bankruptcy reform on contractual and non-contractual debt variables. Both models produce similar results. We find an increase in total debt and in long-term debt, and a reduction in the cost of debt. We find no significant effects on the ownership structure of debt. In chapter three, we develop an estimable directed-search equilibrium model of the credit market that can be used to carry out ex ante evaluations of institutional changes affecting credit (such as bankruptcy law reforms). The economics literature has long recognized a causal relationship between institutions (such as laws and regulations) and the development of financial markets. This qualitative conclusion is widely acknowledged, but there is little evidence of its quantitative importance. With our model, it is possible to estimate how debt contracts change in response to changes in the parameters describing the economy's institutions. It is also possible to estimate the impact on firms' investment, as well as to characterize the distribution of firm size, age and productivity before and after the institutional change.
As an illustration, we carry out an empirical exercise in which we use data on Brazilian firms to simulate the impact of variations in the credit recovery rate on the average and total values of firms' debt and capital. We find that debt is increasing in the recovery rate, and that capital is almost always increasing in it as well.
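The reform evaluation in chapter two compares Brazilian firms, which were affected by Law 11.101/05, with firms that were not, before and after the reform. A minimal difference-in-differences sketch of that kind of design is given below; the estimator, the variable names, and the simple OLS interaction term are illustrative assumptions, since the abstract does not specify the two models actually estimated.

```python
import numpy as np

def did_estimate(y, treated, post):
    """Difference-in-differences via OLS with an interaction term.

    y       : outcome (e.g. log total debt) for each firm-period observation
    treated : 1 for firms affected by the reform, else 0
    post    : 1 for periods after the reform, else 0
    Returns the coefficient on treated*post (the DiD effect).
    """
    X = np.column_stack([np.ones_like(y), treated, post, treated * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]

# Hypothetical panel: 200 firms x 2 periods, true reform effect = +0.15 log points.
rng = np.random.default_rng(3)
treated = np.repeat(rng.integers(0, 2, 200), 2).astype(float)
post = np.tile([0.0, 1.0], 200)
y = 0.5 * treated + 0.2 * post + 0.15 * treated * post + rng.normal(0, 0.3, 400)
print(f"DiD estimate: {did_estimate(y, treated, post):.3f}")
```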
Abstract:
This paper discusses a series of issues related to the use and different possible applications of CGE modelling in trade negotiations. The points addressed range from practical to methodological questions: when to use the models, what they provide the users and how far the model structure and assumptions should be explained to them, the complementary roles of partial and general equilibrium modelling, areas to be improved and data questions. The relevance of the modeller as the final decision maker in all these instances is also highlighted.
Abstract:
In a general equilibrium model, we show that the value of the equilibrium real exchange rate is affected by its own volatility. Risk-averse exporters, who make their exporting decision before observing the realization of the real exchange rate, choose to export less the more volatile the real exchange rate is. Therefore the trade balance and the variance of the real exchange rate are negatively related. An increase in the volatility of the real exchange rate, for instance, deteriorates the trade balance, and to restore equilibrium a real exchange rate depreciation has to take place. In the empirical part of the paper we use the traditional (unconditional) standard deviation of RER changes as our measure of RER volatility. We describe the behavior of RER volatility for Brazil, Argentina and Mexico. Monthly data for the three countries are used, as well as daily data for Brazil. Interesting patterns of volatility can be associated with the nature of the several stabilization plans adopted in those countries and with changes in the exchange rate regimes.
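The empirical volatility measure described here, the unconditional standard deviation of RER changes, can be computed directly from a monthly series. A minimal sketch follows; the rolling window length and the use of log changes are assumptions.

```python
import numpy as np

def rer_volatility(rer, window=12):
    """Unconditional volatility of real-exchange-rate changes.

    rer    : monthly real exchange rate index (positive values)
    window : length of the rolling window in months
    Returns rolling standard deviations of monthly log changes.
    """
    dlog = np.diff(np.log(rer))
    return np.array([dlog[i - window:i].std(ddof=1)
                     for i in range(window, len(dlog) + 1)])

# Hypothetical index: a calm regime followed by a more volatile one.
rng = np.random.default_rng(4)
shocks = np.concatenate([rng.normal(0, 0.01, 120), rng.normal(0, 0.05, 120)])
rer = 100 * np.exp(np.cumsum(shocks))
vol = rer_volatility(rer)
print(f"volatility, first vs last window: {vol[0]:.3f} vs {vol[-1]:.3f}")
```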
Abstract:
The inability of rational expectation models with money supply rules to deliver inflation persistence following a transitory deviation of money growth from trend is due to the rapid adjustment of the price level to expected events. The observation of persistent inflation in macroeconomic data leads many economists to believe that prices adjust sluggishly and/or that expectations must not be rational. Inflation persistence in U.S. data can be characterized by a vector autocorrelation function relating inflation and deviations of output from trend. In the vector autocorrelation function both inflation and output are highly persistent and there are significant positive dynamic cross-correlations relating inflation and output. This paper shows that a flexible-price general equilibrium business cycle model with money and a central bank using a Taylor rule can account for these patterns. There are no sticky prices and no liquidity effects. Agents' decisions in a period are taken only after all shocks are observed. The monetary policy rule transforms output persistence into inflation persistence and creates positive cross-correlations between inflation and output.
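The vector autocorrelation function referred to above is the set of lagged auto- and cross-correlations between inflation and detrended output. A minimal sketch of how such a pattern can be tabulated from two series is shown below; the simulated data, detrending choice and variable names are assumptions.

```python
import numpy as np

def lagged_corr(x, y, max_lag=8):
    """Correlation of x(t) with y(t+k) for k = 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.array([np.corrcoef(x[: len(x) - k], y[k:])[0, 1]
                     for k in range(max_lag + 1)])

# Hypothetical quarterly series: persistent inflation and output gap.
rng = np.random.default_rng(5)
n, rho = 240, 0.9
eps = rng.standard_normal((2, n))
infl = np.zeros(n)
gap = np.zeros(n)
for t in range(1, n):
    gap[t] = rho * gap[t - 1] + eps[0, t]
    infl[t] = rho * infl[t - 1] + 0.3 * gap[t - 1] + eps[1, t]

print("corr(gap_t, infl_{t+k}):", np.round(lagged_corr(gap, infl), 2))
```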
Abstract:
The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern, but that the models underestimate the magnitude of some observed changes and that large discrepancies appear at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation all 50 proxy-based records are used, while in the other two only the continental or the oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with model physics.
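In outline, the particle filter used to constrain the model weights an ensemble of model trajectories by their fit to the proxy reconstructions and then resamples. The sketch below shows only that generic weighting/resampling step; the Gaussian likelihood, ensemble size and error variance are assumptions, not the actual LOVECLIM implementation.

```python
import numpy as np

def particle_filter_step(particles, proxy_obs, obs_var, seed=6):
    """One assimilation step: weight ensemble members by their fit to the
    proxies (Gaussian likelihood) and resample with replacement.

    particles : (n_members, n_sites) simulated values at the proxy sites
    proxy_obs : (n_sites,) reconstructed values
    obs_var   : observational/representation error variance
    """
    misfit = particles - proxy_obs
    loglik = -0.5 * np.sum(misfit**2, axis=1) / obs_var
    w = np.exp(loglik - loglik.max())          # subtract max to avoid underflow
    w /= w.sum()
    rng = np.random.default_rng(seed)          # fixed seed for a reproducible sketch
    idx = rng.choice(len(w), size=len(w), p=w)
    return particles[idx], w

# Hypothetical ensemble of 96 members at 50 proxy sites.
rng = np.random.default_rng(7)
ens = rng.normal(0.0, 1.0, size=(96, 50))
obs = rng.normal(0.5, 1.0, size=50)
resampled, weights = particle_filter_step(ens, obs, obs_var=1.0)
print("effective ensemble size:", round(1.0 / np.sum(weights**2), 1))
```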
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering in both the optical and the infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
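For coherent scattering, the two-stream problem referred to above reduces to a coupled pair of first-order equations for the outgoing and incoming fluxes. A schematic form, with the source terms omitted and the coupling coefficients left generic since their values depend on the chosen closure and the scattering properties, is

```latex
\frac{dF_{\uparrow}}{d\tau} = \gamma_1 F_{\uparrow} - \gamma_2 F_{\downarrow},
\qquad
\frac{dF_{\downarrow}}{d\tau} = \gamma_2 F_{\uparrow} - \gamma_1 F_{\downarrow},
```

where tau is the optical depth and the coefficients gamma_1 and gamma_2 are fixed by the closure (e.g. Eddington or hemispheric) together with the single-scattering albedo and the scattering asymmetry parameter.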
Abstract:
High-resolution, ground-based and independent observations including co-located wind radiometer, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running climate Max Planck Institute–Earth System Model–Low Resolution (MPI-ESM-LR) are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter, when the variability from large-scale planetary waves dominates. Between lidar data and MPI-ESM-LR, there is overall agreement in spectral amplitude down to 15–20 days. At shorter time scales, the variability is lacking in the model by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project, which integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.