926 results for C30 - General-Sectional Models
Abstract:
We present a new radiation scheme for the Oxford Planetary Unified Model System for Venus, suitable for the solar and thermal bands. This new and fast radiative parameterization uses a different approach in the two main radiative wavelength bands: solar radiation (0.1-5.5 μm) and thermal radiation (1.7-260 μm). The solar radiation calculation is based on the delta-Eddington approximation (two-stream type) combined with an adding-layer method. For the thermal radiation, a code based on an absorptivity/emissivity formulation is used. The new radiative transfer formulation is intended to be computationally light so that it can be incorporated into 3D global circulation models, while still allowing the effect of atmospheric conditions on radiative fluxes to be calculated. This will allow us to investigate dynamical-radiative-microphysical feedbacks. The model's flexibility can also be used to explore uncertainties in the Venus atmosphere, such as the optical properties of the deep atmosphere or the cloud amount. Results for radiative cooling and heating rates and for global-mean radiative-convective equilibrium temperature profiles under different atmospheric conditions are presented and discussed. The new scheme works on an atmospheric column and can easily be implemented in 3D Venus global circulation models.
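As an illustration of the kind of two-stream machinery the abstract refers to, here is a minimal Python sketch of the standard delta-Eddington scaling of layer optical properties (Joseph et al., 1976). It is not the paper's actual code, and the example values of tau, omega and g are arbitrary.

```python
import numpy as np

def delta_eddington_scaling(tau, omega, g):
    """Delta-Eddington scaling of layer optical properties.

    Rescales optical depth tau, single-scattering albedo omega and
    asymmetry parameter g to remove the forward-scattering peak,
    using the truncation fraction f = g**2.
    """
    f = g ** 2
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# Example: a single, strongly scattering cloud layer (illustrative values)
tau_s, omega_s, g_s = delta_eddington_scaling(tau=10.0, omega=0.999, g=0.85)
print(tau_s, omega_s, g_s)
```

The scaled properties would then feed a two-stream solver layer by layer, with an adding method to combine layers, which is omitted here.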
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks, and then to compute a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. When those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed for reducing the dimension of the state space so that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
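To make the perturbation step more concrete, the following Python sketch numerically differentiates stacked equilibrium conditions F(X_{t-1}, X_t, X_{t+1}, eps_t) = 0 around a stationary solution X_ss. The function names and interface are hypothetical, and solving the resulting linear rational-expectations system (for example by a QZ-based method) is not shown; the paper's own programs are in Matlab, so this is only an outline of the idea.

```python
import numpy as np

def numerical_jacobians(F, X_ss, eps_ss=0.0, h=1e-6):
    """Numerically differentiate the stacked equilibrium conditions
    F(X_prev, X_now, X_next, eps) = 0 around the stationary solution X_ss.

    Returns Jacobians (A, B, C, D) with respect to X_next, X_now, X_prev
    and the aggregate shock eps, defining the first-order system
        A E_t[x_{t+1}] + B x_t + C x_{t-1} + D eps_t = 0.
    """
    n = len(X_ss)
    F0 = F(X_ss, X_ss, X_ss, eps_ss)          # approximately zero at the stationary solution
    A = np.zeros((len(F0), n))
    B = np.zeros_like(A)
    C = np.zeros_like(A)
    D = np.zeros((len(F0), 1))
    for j in range(n):
        dX = np.zeros(n)
        dX[j] = h
        A[:, j] = (F(X_ss, X_ss, X_ss + dX, eps_ss) - F0) / h
        B[:, j] = (F(X_ss, X_ss + dX, X_ss, eps_ss) - F0) / h
        C[:, j] = (F(X_ss + dX, X_ss, X_ss, eps_ss) - F0) / h
    D[:, 0] = (F(X_ss, X_ss, X_ss, eps_ss + h) - F0) / h
    return A, B, C, D
```

Here X would stack the discretized cross-sectional distribution together with the policy-function coefficients, which is what makes the state vector high-dimensional.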
Abstract:
This paper offers some preliminary steps in the marriage of some of the theoretical foundations of new economic geography with spatial computable general equilibrium models. Modelling the spatial economy of Colombia using the traditional assumptions of computable general equilibrium (CGE) models makes little sense when one territorial unit, Bogota, accounts for over one quarter of GDP, and when transportation costs are high and accessibility is low by European or North American standards. Handling market imperfections therefore becomes imperative, as does the need to address internal spatial issues from the perspective of Colombia's increasing involvement with external markets. The paper builds on the Centro de Estudios de Economia Regional (CEER) model, a spatial CGE model of the Colombian economy; non-constant returns and non-iceberg transportation costs are introduced and some simulation exercises are carried out. The results confirm the asymmetric impacts that trade liberalization has on a spatial economy in which one region, Bogota, is able to exploit scale economies more fully vis-à-vis the rest of Colombia. The analysis also reveals the importance of different hypotheses on factor mobility and of price effects for better understanding the consequences of trade opening in a developing economy.
Abstract:
Quantifying mass and energy exchanges within tropical forests is essential for understanding their role in the global carbon budget and how they will respond to perturbations in climate. This study reviews ecosystem process models designed to predict the growth and productivity of temperate and tropical forest ecosystems; temperate forest models are included because few models have been developed specifically for tropical forests. The review provides a multiscale assessment enabling potential users to select a model suited to the scale and type of information they require for tropical forests. Process models are reviewed in relation to their input and output parameters, minimum spatial and temporal units of operation, and maximum spatial extent and time period of application, for each organizational level of modelling. Organizational levels include leaf-tree, plot-stand, regional and ecosystem levels, with model complexity decreasing as the time step and spatial extent of model operation increase. All ecosystem models are simplified versions of reality and are typically aspatial. Remotely sensed data sets and derived products may be used to initialize, drive and validate ecosystem process models. At the simplest level, remotely sensed data are used to delimit the location, extent and changes over time of vegetation communities. At a more advanced level, remotely sensed data products have been used to estimate key structural and biophysical properties associated with ecosystem processes in tropical and temperate forests. Combining ecological models and image data enables the development of carbon accounting systems that will contribute to understanding greenhouse gas budgets at biome and global scales.
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. The models assume different mass-transfer mechanisms within the porous carbon particle: (a) dual pore and surface diffusion (MSD), (b) macropore, surface and micropore diffusion (MSMD), (c) macropore diffusion, surface diffusion and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore and micropore diffusion (BM). The models are discriminated using single-component kinetic data for ethane and propane, as well as multicomponent kinetic data for their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all models to account for the heterogeneity of the system. It is found that, in general, the models that assume a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
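For concreteness, here is a minimal Python sketch of one ingredient shared by several of these models: Fickian macropore diffusion in a spherical particle, solved by an explicit finite-difference scheme. The parameter values, the constant surface-concentration boundary condition, and the omission of surface/micropore fluxes and energetic heterogeneity are simplifying assumptions for illustration, not the models as fitted in the study.

```python
import numpy as np

def macropore_uptake(D_p=1e-9, R=1e-3, C_surf=1.0, t_end=200.0, n_r=50):
    """Explicit finite-difference solution of macropore diffusion in a
    spherical particle:  dC/dt = D_p * (1/r^2) d/dr( r^2 dC/dr ),
    with C(R, t) = C_surf and C(r, 0) = 0.
    Returns times and the fractional uptake (volume-averaged C / C_surf)."""
    dr = R / (n_r - 1)
    r = np.linspace(0.0, R, n_r)
    dt = dr ** 2 / (10.0 * D_p)                     # within explicit-scheme stability limit
    C = np.zeros(n_r)
    C[-1] = C_surf                                   # fixed surface concentration
    times, uptake = [], []
    for step in range(int(t_end / dt)):
        lap = np.zeros(n_r)
        lap[0] = 6.0 * (C[1] - C[0]) / dr ** 2       # symmetry condition at r = 0
        lap[1:-1] = ((C[2:] - 2.0 * C[1:-1] + C[:-2]) / dr ** 2
                     + (C[2:] - C[:-2]) / (dr * r[1:-1]))
        C[:-1] += dt * D_p * lap[:-1]                # surface node stays fixed
        if step % 200 == 0:
            times.append(step * dt)
            # crude volume average: (3/R^3) * integral of C r^2 dr
            uptake.append(3.0 * np.sum(C * r ** 2) * dr / (R ** 3 * C_surf))
    return np.array(times), np.array(uptake)
```

The full MSD/MSMD-type models would add coupled surface-diffusion and micropore terms plus a heterogeneous adsorption isotherm on top of this macropore balance.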
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method-of-moments techniques can be used to estimate the parameter. Because conditional moments are calculated by kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison with other estimators that have been proposed for the estimation of general DLV models.
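A minimal sketch of the core idea, assuming a Gaussian kernel, a scalar conditioning variable, and simple polynomial instruments (all illustrative choices, not the paper's specification), might look as follows: estimate the conditional mean by Nadaraya-Watson smoothing of a long simulation, then plug the implied moment conditions into a method-of-moments objective.

```python
import numpy as np

def kernel_conditional_mean(x_sim, y_sim, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[y | x] at the points x_eval,
    computed from a long simulation (x_sim, y_sim)."""
    u = (x_eval[:, None] - x_sim[None, :]) / bandwidth   # one row per evaluation point
    w = np.exp(-0.5 * u ** 2)                            # Gaussian kernel weights
    return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

def moment_objective(theta, x_data, y_data, simulate, bandwidth=0.3, T_sim=50_000):
    """Method-of-moments objective at a trial theta.  Moment conditions are
    E[ h(x_t) * (y_t - m_theta(x_t)) ] = 0, with m_theta the kernel-smoothed
    conditional mean from a long simulation and h(x) = (1, x, x^2) as instruments."""
    x_sim, y_sim = simulate(theta, T_sim)                # user-supplied model simulator
    m_sim = kernel_conditional_mean(x_sim, y_sim, x_data, bandwidth)
    resid = y_data - m_sim
    instruments = np.column_stack([np.ones_like(x_data), x_data, x_data ** 2])
    g = instruments.T @ resid / len(x_data)              # sample moment vector
    return g @ g                                         # identity weighting matrix
```

The objective would then be minimized over theta with any standard optimizer; an efficient weighting matrix and bandwidth choice are left aside here.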
Abstract:
Research in empirical asset pricing has pointed out several anomalies in both the cross section and the time series of asset prices, as well as in investors' portfolio choice. This dissertation aims to discover the forces driving some of these "puzzling" asset pricing dynamics and portfolio decisions observed in financial markets. Throughout the dissertation I construct and study dynamic general equilibrium models of heterogeneous investors in the presence of frictions and evaluate quantitatively their implications for financial-market asset prices and portfolio choice. I also explore the potential roots of puzzles in international finance. Chapter 1 shows that, by jointly introducing endogenous no-default borrowing constraints and heterogeneous beliefs in a dynamic general-equilibrium economy, many empirical features of stock return volatility can be reproduced. While most of the research on stock return volatility is empirical, this chapter provides a theoretical framework that can reproduce simultaneously the cross-sectional and time-series stylized facts concerning stock returns and their volatility. In contrast to the existing theoretical literature on stock return volatility, I do not impose persistence or regimes in any of the exogenous state variables or in preferences. Volatility clustering, asymmetry in the stock return-volatility relationship, and the pricing of multi-factor volatility components in the cross section all arise endogenously as a consequence of the feedback between the binding of no-default constraints and heterogeneous beliefs. Chapters 2 and 3 explore the implications of differences of opinion across investors in different countries for international asset pricing anomalies. Chapter 2 demonstrates that several international finance "puzzles" can be reproduced by a single risk factor which captures heterogeneous beliefs across international investors. These puzzles include: (i) home equity preference; (ii) the dependence of firm returns on local and foreign factors; (iii) the co-movement of returns and international capital flows; and (iv) abnormal returns around foreign firms' cross-listing events in the local market. These results are obtained in a setup with symmetric information and a perfectly integrated world with multiple countries and independent processes producing the same good. Chapter 3 shows that, by extending this framework to multiple goods and correlated production processes, the "forward premium puzzle" arises naturally as compensation for the heterogeneous expectations about the depreciation of the exchange rate held by international investors. Chapters 2 and 3 propose differences of opinion across international investors as a potential resolution of several international finance "puzzles". In a globalized world where both capital and information flow freely across countries, this explanation seems more appealing than existing asymmetric-information or segmented-markets theories aiming to explain international finance puzzles.
Abstract:
This research describes the process followed in assembling a "Social Accounting Matrix" for Spain for the year 2000 (SAMSP00). As argued in the paper, this process attempts to reconcile ESA95 conventions with the requirements of applied general equilibrium modelling. In particular, problems related to the level of aggregation of net taxation data, and to the valuation system used for expressing the monetary value of input-output transactions, have received special attention. Since the adoption of the ESA95 conventions, input-output transactions have preferably been valued at basic prices, which imposes additional difficulties on modellers interested in computing applied general equilibrium models. This paper addresses these difficulties by developing a procedure that allows SAM-builders to change the valuation system of input-output transactions conveniently. In addition, this procedure produces new data on net taxation.
Abstract:
The most general black M5-brane solution of eleven-dimensional supergravity (with a flat R^4 spacetime in the brane and a regular horizon) is characterized by its charge, mass and two angular momenta. We use this metric to construct general dual models of large-N QCD (at strong coupling) that depend on two free parameters. The mass spectrum of scalar particles is determined analytically (in the WKB approximation) and numerically over the whole two-dimensional parameter space. We compare the mass spectrum with analogous results from lattice calculations and find that the supergravity predictions are close to the lattice results everywhere on the two-dimensional parameter space except along a special line. We also examine the mass spectrum of the supergravity Kaluza-Klein (KK) modes and find that the KK modes along the compact D-brane coordinate decouple from the spectrum for large angular momenta. There are, however, KK modes charged under a U(1)×U(1) global symmetry that do not decouple anywhere on the parameter space. General formulas for the string tension and action are also given.
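For readers unfamiliar with the WKB step, the following toy Python sketch solves the Bohr-Sommerfeld quantization condition for a generic Schrodinger-like equation. The potential used here (a harmonic one, for which these WKB levels happen to be exact) is purely illustrative and is not the supergravity fluctuation potential analyzed in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def wkb_level(V, n, E_lo, E_hi, x_lo, x_hi):
    """Solve the WKB (Bohr-Sommerfeld) condition
        integral over the classically allowed region of sqrt(E - V(x)) dx = (n + 1/2) * pi
    for the n-th eigenvalue of  -psi'' + V(x) psi = E psi  (units with hbar = 2m = 1)."""
    def phase(E):
        integrand = lambda x: np.sqrt(max(E - V(x), 0.0))   # zero outside turning points
        val, _ = quad(integrand, x_lo, x_hi, limit=200)
        return val - (n + 0.5) * np.pi
    return brentq(phase, E_lo, E_hi)

# Toy check: V(x) = x^2 gives E_n = 2n + 1, which WKB reproduces exactly here
V = lambda x: x ** 2
for n in range(4):
    print(n, wkb_level(V, n, 0.1, 50.0, -10.0, 10.0))
```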
Abstract:
Ingvaldsen et al. comment on our study assessing global fish interchanges between the North Atlantic and Pacific oceans for more than 500 species over the entire 21st century. They propose that discrepancies between our model projections and observed data for cod in the Barents Sea result from the choice of Atmosphere-Ocean General Circulation Models (AOGCMs). We address this assertion here by re-running the cod model with additional observational data from the Barents Sea [1, 3], and show that the lack of open-access, archived data for the Barents Sea was the primary cause of the local prediction mismatch. This finding underscores the importance of systematically depositing biodiversity data in global databases.
Abstract:
In the context of multivariate regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly non-linear hypotheses on the coefficients of MLR and SURE systems.
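As one hedged illustration of such a procedure (not necessarily the authors' exact construction), the Python sketch below computes the Gaussian likelihood-ratio statistic for comparing a restricted against an unrestricted multivariate regression, and obtains a simulated p-value by redrawing Gaussian errors under the estimated null model instead of relying on the asymptotic chi-square approximation.

```python
import numpy as np

def lr_statistic(Y, X_full, X_restr):
    """Gaussian LR statistic for a restricted multivariate regression
    (regressors X_restr) against the full model (regressors X_full):
        LR = n * log( det(Sigma_restricted) / det(Sigma_full) )."""
    n = Y.shape[0]
    def sigma_hat(X):
        resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
        return resid.T @ resid / n
    _, logdet0 = np.linalg.slogdet(sigma_hat(X_restr))
    _, logdet1 = np.linalg.slogdet(sigma_hat(X_full))
    return n * (logdet0 - logdet1)

def simulated_pvalue(Y, X_full, X_restr, n_rep=999, seed=0):
    """Simulated p-value: redraw Gaussian errors under the estimated
    restricted (null) model and recompute the LR statistic."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    stat_obs = lr_statistic(Y, X_full, X_restr)
    B0 = np.linalg.lstsq(X_restr, Y, rcond=None)[0]
    resid0 = Y - X_restr @ B0
    Sigma0 = resid0.T @ resid0 / n
    L = np.linalg.cholesky(Sigma0 + 1e-12 * np.eye(m))
    count = 0
    for _ in range(n_rep):
        Y_sim = X_restr @ B0 + rng.standard_normal((n, m)) @ L.T
        if lr_statistic(Y_sim, X_full, X_restr) >= stat_obs:
            count += 1
    return (count + 1) / (n_rep + 1)
```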
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which it converges. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet this is sufficient to bring out interesting conclusions about the particular elements that cause the approximation to be inadequate even in quite large sample sizes.
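A minimal Python simulation of this comparison might look as follows; the ARMA(1,1) parameter values are illustrative and not those of the note. The partial sums are scaled by the process's long-run standard deviation, so their endpoint should be approximately standard normal when the FCLT approximation is adequate.

```python
import numpy as np

def simulate_arma11(T, phi=0.5, theta=0.3, sigma=1.0, rng=None):
    """Simulate T observations of y_t = phi * y_{t-1} + e_t + theta * e_{t-1}."""
    rng = rng or np.random.default_rng()
    e = rng.standard_normal(T + 1) * sigma
    y = np.zeros(T)
    for t in range(T):
        y[t] = (phi * y[t - 1] if t > 0 else 0.0) + e[t + 1] + theta * e[t]
    return y

def scaled_partial_sums(y, phi=0.5, theta=0.3, sigma=1.0):
    """Scaled partial-sum process S(r) = sum_{t <= rT} y_t / (omega * sqrt(T)),
    where omega = sigma * (1 + theta) / (1 - phi) is the long-run standard
    deviation of the ARMA(1,1) process; the FCLT approximates S by a Wiener process."""
    T = len(y)
    omega = sigma * (1.0 + theta) / (1.0 - phi)
    return np.cumsum(y) / (omega * np.sqrt(T))

# Example: the endpoint S(1) should be roughly N(0, 1) if the approximation is good
rng = np.random.default_rng(42)
endpoints = [scaled_partial_sums(simulate_arma11(200, rng=rng))[-1] for _ in range(2000)]
print(np.std(endpoints))   # should be close to 1
```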
Abstract:
How does openness affect economic development? This question is answered in the context of a dynamic general equilibrium model of the world economy, in which countries have technological differences that are both sector-neutral and specific to the investment goods sector. Relative to a benchmark case with trade in credit markets only, two changes are considered: (i) a complete restriction of trade, and (ii) a full liberalization of trade. The first change decreases the cross-sectional dispersion of incomes only slightly and produces a relatively small welfare loss. The second change, instead, decreases dispersion by a significant amount and produces a very large welfare gain.
Abstract:
This article illustrates the applicability of resampling methods in the context of multiple (simultaneous) tests, for various econometric problems. Simultaneous hypotheses are a usual consequence of economic theory, so that controlling the rejection probability of combinations of tests is a problem frequently encountered in various econometric and statistical settings. In this respect, it is well known that ignoring the joint character of multiple hypotheses can cause the level of the overall procedure to exceed the desired level considerably. Whereas most multiple-inference methods are conservative in the presence of non-independent statistics, the tests we propose aim to control the significance level exactly. To do so, we consider combined test criteria originally proposed for independent statistics. By applying the Monte Carlo test method, we show how these test-combination methods can be applied in such cases without resorting to asymptotic approximations. After reviewing earlier results on this topic, we show how such a methodology can be used to construct normality tests based on several moments for the errors of linear regression models. For this problem, we propose a finite-sample valid generalization of the asymptotic test proposed by Kiefer and Salmon (1983), as well as combined tests following the Tippett and Pearson-Fisher methods. We observe empirically that test procedures corrected by the Monte Carlo test method do not suffer from the bias (or under-rejection) problem often reported in this literature, notably against platykurtic distributions, and yield appreciable power gains relative to the usual combined methods.
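To illustrate the Monte Carlo test idea in code (a minimal sketch, not the paper's exact statistics), the Python example below computes skewness- and kurtosis-based normality statistics from OLS residuals, combines their p-values with the Tippett (minimum) and Fisher-Pearson (sum of logs) rules, and simulates the null distribution of each combined criterion by redrawing i.i.d. Gaussian errors with the same regressor matrix.

```python
import numpy as np
from scipy import stats

def moment_pvalues(resid):
    """Asymptotic two-sided p-values of standardized skewness and excess-kurtosis
    statistics computed from a residual vector."""
    n = len(resid)
    z = (resid - resid.mean()) / resid.std()
    sk = np.sqrt(n / 6.0) * np.mean(z ** 3)            # standardized skewness
    ku = np.sqrt(n / 24.0) * (np.mean(z ** 4) - 3.0)   # standardized excess kurtosis
    return 2.0 * stats.norm.sf(abs(sk)), 2.0 * stats.norm.sf(abs(ku))

def combined_criteria(resid):
    p_sk, p_ku = moment_pvalues(resid)
    tippett = min(p_sk, p_ku)                          # reject for small values
    fisher = -2.0 * (np.log(p_sk) + np.log(p_ku))      # reject for large values
    return tippett, fisher

def monte_carlo_normality_test(y, X, n_rep=999, seed=0):
    """Monte Carlo test of normality of regression errors using combined
    moment-based criteria; the null distribution is simulated by redrawing
    i.i.d. N(0, 1) errors with the same regressor matrix X."""
    rng = np.random.default_rng(seed)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    tip_obs, fis_obs = combined_criteria(resid)
    tip_cnt = fis_cnt = 0
    for _ in range(n_rep):
        e = rng.standard_normal(len(y))
        r = e - X @ np.linalg.lstsq(X, e, rcond=None)[0]
        tip, fis = combined_criteria(r)
        tip_cnt += tip <= tip_obs
        fis_cnt += fis >= fis_obs
    return {"tippett_pvalue": (tip_cnt + 1) / (n_rep + 1),
            "fisher_pvalue": (fis_cnt + 1) / (n_rep + 1)}
```

Because the simulated statistics are recomputed under the same design matrix and depend on the errors only through the OLS residuals, the resulting Monte Carlo p-values do not rely on asymptotic approximations, which is the point the abstract emphasizes.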