967 results for C68 - Computable General Equilibrium Models
Abstract:
Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark of much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation and financial markets. One cause of the empirical failure of New Keynesian models is the Rational Expectations (RE) paradigm, which entails a tight structure on the dynamics of the system: under this hypothesis, agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm, which can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the length of the reduced form is the same as in the 'best' statistical model.
Abstract:
This dissertation consists of three self-contained papers that are related to two main topics. In particular, the first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In Chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the costs that arise from a potential labor market reform from a government point of view. To analyze the various effects of changes to the unemployment benefits system, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In Chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. In order to perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, where firms enter and exit markets, and then extend our analysis to the open-economy case.

In Chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine the effects of aggregate shocks on the distribution of unemployment rates in OECD member countries.

In all three chapters we model systems that behave randomly and operate on stochastic processes. We therefore exploit stochastic calculus, which establishes clear methodological links between the chapters.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
This paper explores the interaction between upstream firms and downstream firms in a two-region general equilibrium model. In many countries, lower tariff rates are set for intermediate manufactured goods and higher tariff rates for final manufactured goods. The derived results imply that such tariff settings tend to preserve a symmetric spread of upstream and downstream firms, and that continuing tariff reduction may cause core-periphery structures. When the circular causality between upstream and downstream firms is the focal agglomeration force, the present model can be fully solved. We thus find that (1) the present model displays, at most, three interior steady states, (2) when asymmetric steady states exist, they are unstable, and (3) location displays hysteresis when the transport costs of intermediate manufactured goods are sufficiently high.
Abstract:
Quantifying mass and energy exchanges within tropical forests is essential for understanding their role in the global carbon budget and how they will respond to perturbations in climate. This study reviews ecosystem process models designed to predict the growth and productivity of temperate and tropical forest ecosystems. Temperate forest models were included because of the minimal number of tropical forest models. The review provides a multiscale assessment enabling potential users to select a model suited to the scale and type of information they require in tropical forests. Process models are reviewed in relation to their input and output parameters, minimum spatial and temporal units of operation, maximum spatial extent and time period of application for each organization level of modelling. Organizational levels included leaf-tree, plot-stand, regional and ecosystem levels, with model complexity decreasing as the time-step and spatial extent of model operation increases. All ecosystem models are simplified versions of reality and are typically aspatial. Remotely sensed data sets and derived products may be used to initialize, drive and validate ecosystem process models. At the simplest level, remotely sensed data are used to delimit location, extent and changes over time of vegetation communities. At a more advanced level, remotely sensed data products have been used to estimate key structural and biophysical properties associated with ecosystem processes in tropical and temperate forests. Combining ecological models and image data enables the development of carbon accounting systems that will contribute to understanding greenhouse gas budgets at biome and global scales.
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results.
Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented.
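The efficiency condition imposed in the recursive version, abatement allocated so that the CO2 price rises at the interest rate, is a Hotelling-type rule. A minimal sketch of that pricing rule follows; the starting price and interest rate are purely illustrative, not values taken from the EPPA model.

```python
def co2_price_path(p0, r, horizon):
    """Hotelling-type rule: the carbon price grows at the interest rate,
    equalizing the present value of marginal abatement cost across periods."""
    return [p0 * (1.0 + r) ** t for t in range(horizon)]

# Hypothetical values: a 25 $/tCO2 starting price and a 5% interest rate
path = co2_price_path(p0=25.0, r=0.05, horizon=4)
# Each period's price is 5% above the previous one
```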
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore, micropore diffusion (BM) models. These models are discriminated using the single-component kinetic data of ethane and propane, as well as the multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all models to account for the heterogeneity of the system. It is found that, in general, the models assuming a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
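Of the five mechanisms, the finite mass exchange (LK) model is the simplest: uptake follows a linear driving force toward the equilibrium loading. A minimal single-component sketch is given below; the loading and rate constant are assumed, illustrative values, not the fitted parameters from the Ajax or Norit data.

```python
import numpy as np

def ldf_uptake(q_eq, k, t):
    """Linear-driving-force kinetics, dq/dt = k * (q_eq - q) with q(0) = 0,
    integrated in closed form: q(t) = q_eq * (1 - exp(-k * t))."""
    return q_eq * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 10.0, 101)             # dimensionless time grid
uptake = ldf_uptake(q_eq=1.0, k=0.5, t=t)   # hypothetical loading and rate constant
# uptake rises monotonically from 0 toward the equilibrium loading q_eq
```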
Abstract:
This paper studies the effects of the diffusion of a General Purpose Technology (GPT) that spreads first within the developed North country of its origin, and then to a developing South country. In the general equilibrium growth model developed here, each final good can be produced by one of two technologies. Each technology is characterized by a specific labor complemented by a specific set of intermediate goods, which are enhanced periodically by Schumpeterian R&D activities. When quality reaches a threshold level, a GPT arises in one of the technologies and spreads first to the other technology within the North. Then, it propagates to the South, following a similar sequence. Since diffusion is not even, neither intra- nor inter-country, the GPT produces successive changes in the direction of technological knowledge and in inter- and intra-country wage inequality. Through this mechanism the different observed paths of wage inequality can be accommodated.
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant, as in standard cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). The model provides an explanation of why, for a sample of 79 countries, the correlation between the velocity of money and the inflation rate appears to be low, contrary to what common wisdom would suggest. The reason is the diverse transaction technologies available in different economies.
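The Baumol (1952) regime can be illustrated with the classic square-root rule: with a transaction cost c per trip to the bank, income Y, and nominal interest rate i, optimal average cash holdings are sqrt(c*Y/(2*i)), so velocity Y/M rises with the interest rate. A minimal sketch with hypothetical numbers (not this paper's general equilibrium model):

```python
import math

def baumol_money_demand(income, trip_cost, interest_rate):
    """Baumol (1952) square-root rule: average cash balance minimizing
    trips-to-the-bank costs plus forgone interest."""
    return math.sqrt(trip_cost * income / (2.0 * interest_rate))

def velocity(income, trip_cost, interest_rate):
    """Velocity of money Y / M implied by the square-root rule."""
    return income / baumol_money_demand(income, trip_cost, interest_rate)

# Velocity increases with the nominal interest rate (the fluctuating-velocity regime)
v_low = velocity(income=100.0, trip_cost=1.0, interest_rate=0.02)   # -> 2.0
v_high = velocity(income=100.0, trip_cost=1.0, interest_rate=0.08)  # -> 4.0
```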
Abstract:
Actual tax systems do not follow the normative recommendations of the theory of optimal taxation. There are two reasons for this. The first is the informational difficulty of knowing or estimating all the relevant elasticities and parameters. The second is the political complexity that would arise if a new tax implementation departed too much from current systems, which are perceived as somewhat egalitarian. Hence an ex-novo overhaul of the tax system might simply be non-viable. In contrast, a small marginal tax reform could be politically more palatable and economically simpler to implement. The goal of this paper is to evaluate, as a step prior to any tax reform, the marginal welfare cost of the current tax system in Spain. We do this by using a computational general equilibrium model calibrated to a point-in-time micro database. The simulation results show that the Spanish tax system gives rise to a considerable marginal excess burden, on the order of 0.50 money units for each additional money unit collected through taxes.
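The quantity being measured, the marginal excess burden (MEB), is the extra deadweight loss generated per additional unit of revenue. It can be illustrated in a stylized single-market setting; this is a textbook partial-equilibrium sketch with hypothetical demand parameters, not the paper's calibrated CGE model for Spain.

```python
# Stylized market: linear demand q = a - b*p, perfectly elastic supply at price 1,
# and an excise tax t on top of the producer price.
def revenue(t, a=100.0, b=50.0):
    return t * (a - b * (1.0 + t))       # tax rate times after-tax quantity

def deadweight_loss(t, a=100.0, b=50.0):
    return 0.5 * b * t ** 2              # Harberger triangle under linear demand

def meb(t, dt=1e-6):
    """Marginal excess burden: extra deadweight loss per extra unit of revenue,
    computed by numerical differentiation."""
    d_rev = revenue(t + dt) - revenue(t)
    d_dwl = deadweight_loss(t + dt) - deadweight_loss(t)
    return d_dwl / d_rev

# At a 20% tax, this toy market yields an MEB of about 1/3
m = meb(0.20)
```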
Abstract:
The choice of either the rate of monetary growth or the nominal interest rate as the instrument controlled by monetary authorities has both positive and normative implications for economic performance. We reexamine some of the issues related to the choice of the monetary policy instrument in a dynamic general equilibrium model exhibiting endogenous growth in which a fraction of productive government spending is financed by means of issuing currency. When we evaluate the performance of the two monetary instruments attending to the fluctuations of endogenous variables, we find that the inflation rate is less volatile under nominal interest rate targeting. Concerning the fluctuations of consumption and of the growth rate, both monetary policy instruments lead to statistically equivalent volatilities. Finally, we show that none of these two targeting procedures displays unambiguously higher welfare levels.
Abstract:
This paper investigates the role of variable capacity utilization as a source of asymmetries in the relationship between monetary policy and economic activity within a dynamic stochastic general equilibrium framework. The source of the asymmetry is directly linked to the bottlenecks and stock-outs that emerge from the existence of capacity constraints in the real side of the economy. Money has real effects due to the presence of rigidities in households' portfolio decisions in the form of a Lucas-Fuerst 'limited participation' constraint. The model features variable capacity utilization rates across firms due to demand uncertainty. A monopolistic competitive structure provides additional effects through optimal mark-up changes. The overall message of this paper for monetary policy is that the same actions may have different effects depending on the capacity utilization rate of the economy.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
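The core step, estimating conditional moments by kernel smoothing over a long simulation, can be sketched with a Nadaraya-Watson regression. The data-generating process below is a toy example with a known conditional moment, not one of the dynamic latent variable models studied in the paper.

```python
import numpy as np

def kernel_conditional_moment(x_sim, y_sim, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x = x_eval] from simulated draws."""
    u = (x_sim[None, :] - x_eval[:, None]) / bandwidth
    w = np.exp(-0.5 * u ** 2)            # Gaussian kernel weights
    return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy model with known conditional moment E[y | x] = x**2
rng = np.random.default_rng(0)
x_sim = rng.normal(size=100_000)
y_sim = x_sim ** 2 + rng.normal(scale=0.1, size=x_sim.size)
est = kernel_conditional_moment(x_sim, y_sim, np.array([0.0, 1.0]), bandwidth=0.05)
# est should be close to [0.0, 1.0]
```

With the conditional moments in hand at each trial parameter value, the usual GMM objective can be minimized over the parameter, as the abstract describes.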
Abstract:
This paper develops a comprehensive framework for the quantitative analysis of the private and fiscal returns to schooling and of the effect of public policies on private incentives to invest in education. This framework is applied to 14 member states of the European Union. For each of these countries, we construct estimates of the private return to an additional year of schooling for an individual of average attainment, taking into account the effects of education on wages and employment probabilities after allowing for academic failure rates, the direct and opportunity costs of schooling, and the impact of personal taxes, social security contributions and unemployment and pension benefits on net incomes. We also construct a set of effective tax and subsidy rates that measure the effects of different public policies on the private returns to education, and measures of the fiscal returns to schooling that capture the long-term effects of a marginal increase in attainment on public finances under conditions that approximate general equilibrium.
Abstract:
We consider a general equilibrium model à la Bhaskar (Review of Economic Studies 2002): there are complementarities across sectors, each of which comprises (many) heterogeneous monopolistically competitive firms. Bhaskar's model is extended in two directions: production requires capital, and labour markets are segmented. Labour market segmentation models the difficulties of labour migrating across international barriers (in a trade context) or from a poor region to a richer one (in a regional context), whilst the assumption of a single capital market means that capital flows freely between countries or regions. The model is solved analytically and a closed-form solution is provided. Adding labour market segmentation to Bhaskar's two-tier industrial structure allows us to study, inter alia, the impact of competition regulations on wages and financial flows, both in the regional and international contexts, and the output, welfare and financial implications of relaxing immigration laws. The analytical approach adopted allows us not only to sign the effect of policies, but also to quantify their effects. Introducing capital as a factor of production improves the realism of the model and refines its empirically testable implications.