952 results for Spatial computable general equilibrium model
Abstract:
Using a competitive search approach, I model a labor market with heterogeneous workers in which there is a moral hazard problem in the relationship between firms and workers. In this setting, I can predict how contracts react to changes in market parameters (in particular, production risk), as well as how the workers' probability of being hired varies. My main contribution is to show that, at the individual level, there is a negative relationship between risk and incentives, but general equilibrium effects imply that this relationship can be positive at the aggregate level. This result helps to clarify contradictory empirical findings on the relationship between risk and incentives.
Abstract:
Life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption made in this literature is that all young adults enter the economy with no initial assets. In this article, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets, which is estimated using a non-parametric method applied to data from the Survey of Consumer Finances. We found that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets.
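As a rough illustration of the mechanism, the sketch below estimates an initial asset distribution non-parametrically (here with a Gaussian kernel density, one possible estimator; the abstract does not specify which was used) and draws newborn agents' assets from it. All numbers are placeholders, not SCF values.

```python
# Minimal sketch: non-parametric estimate of an initial asset distribution,
# then sampling newborn agents' assets from it. Data are synthetic, not SCF.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical sample of net worth for young adults (placeholder for SCF data)
scf_assets = rng.lognormal(mean=9.0, sigma=1.5, size=2_000)

kde = gaussian_kde(np.log(scf_assets))        # density of log assets
newborn_log_assets = kde.resample(10_000, seed=1)[0]
newborn_assets = np.exp(newborn_log_assets)   # initial assets for new cohorts

print(f"median initial assets: {np.median(newborn_assets):,.0f}")
```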
Abstract:
This thesis contains three chapters. The first chapter uses a general equilibrium framework to simulate and compare the long run effects of the Patient Protection and Affordable Care Act (PPACA) and of health care cost reduction policies on macroeconomic variables, the government budget, and the welfare of individuals. We found that all policies were able to reduce the uninsured population, with the PPACA being more effective than cost reductions. The PPACA increased the public deficit, mainly due to the Medicaid expansion, forcing tax hikes. On the other hand, cost reductions alleviated the fiscal burden of public insurance, reducing the public deficit and taxes. Regarding welfare effects, the PPACA as a whole and cost reductions are welfare improving. High welfare gains would be achieved if U.S. medical costs followed the same trend as in OECD countries. Moreover, feasible cost reductions are more welfare improving than most of the PPACA components, proving to be a good alternative. The second chapter documents that life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption made in this literature is that all young adults enter the economy with no initial assets. In this chapter, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets. We found that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets. The third chapter notes that a common assumption adopted in life cycle general equilibrium models is that the population is stable at the steady state, that is, its relative age distribution becomes constant over time. An open question is whether the demographic assumptions commonly adopted in these models in fact imply that the population becomes stable. In this chapter we prove the existence of a stable population in a demographic environment where both the age-specific mortality rates and the population growth rate are constant over time, the setup commonly adopted in life cycle general equilibrium models. Hence, the stability of the population does not need to be taken as an assumption in these models.
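The stable-population claim of the third chapter can be illustrated numerically: with constant age-specific survival rates and a constant growth rate of entering cohorts, the relative age distribution converges to a fixed profile. The sketch below uses illustrative parameters, not the thesis calibration.

```python
# Minimal sketch: simulate a population with constant age-specific survival
# rates and a constant cohort growth rate; the relative age distribution
# converges to the analytical stable profile. Parameters are illustrative.
import numpy as np

A = 80                                    # number of adult ages
surv = np.linspace(0.999, 0.90, A - 1)    # constant survival probabilities
n = 0.01                                  # constant growth rate of new cohorts

pop = np.ones(A)                          # arbitrary initial age profile
births = 1.0
for t in range(400):
    births *= 1.0 + n
    pop = np.concatenate(([births], pop[:-1] * surv))

rel = pop / pop.sum()                     # simulated relative age distribution

# Analytical stable distribution: cumulative survival discounted by growth
S = np.concatenate(([1.0], np.cumprod(surv)))
stable = S / (1.0 + n) ** np.arange(A)
stable /= stable.sum()
print(np.max(np.abs(rel - stable)))       # ~0: the age profile has stabilized
```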
Abstract:
Expanded Bed Adsorption (EBA) is an integrative process that combines concepts of chromatography and fluidization of solids. The many parameters involved and their synergistic effects complicate the optimization of the process. Fortunately, some mathematical tools have been developed to guide the investigation of the EBA system. In this work, the application of experimental design, phenomenological modeling, and artificial neural networks (ANN) to understanding chitosanase adsorption on the ion exchange resin Streamline® DEAE was investigated. The strain Paenibacillus ehimensis NRRL B-23118 was used for chitosanase production. EBA experiments were carried out using a column of 2.6 cm inner diameter and 30.0 cm height coupled to a peristaltic pump. At the bottom of the column there was a 3.0 cm high distributor of glass beads. Residence time distribution (RTD) assays revealed a high degree of mixing; however, the Richardson-Zaki coefficients showed that the column was on the threshold of stability. Isotherm models fitted the adsorption equilibrium data in the presence of lyotropic salts. The results of the experimental design indicated that ionic strength and superficial velocity are important for the recovery and purity of chitosanases. The molecular masses of the two chitosanases were approximately 23 kDa and 52 kDa, as estimated by SDS-PAGE. The phenomenological modeling aimed to describe the operation of batch and column chromatography. The simulations were performed in Microsoft Visual Studio. The kinetic rate constant model fitted the kinetic curves efficiently at initial enzyme activities of 0.232, 0.142, and 0.079 UA/mL. The simulated breakthrough curves showed some differences from the experimental data, especially regarding the slope. Sensitivity tests of the model with respect to superficial velocity, axial dispersion, and initial concentration showed agreement with the literature. The neural network was built in MATLAB with the Neural Network Toolbox. Cross-validation was used to improve the generalization ability. The ANN parameters were tuned, yielding the configurations 6-6 (enzyme activity) and 9-6 (total protein), with the tansig transfer function and the Levenberg-Marquardt training algorithm. The neural network simulations, including all steps of the cycle, showed good agreement with the experimental data, with a correlation coefficient of approximately 0.974. The effects of the input variables on the profiles of the loading, washing, and elution stages were consistent with the literature.
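As one illustration of the isotherm-fitting step, the sketch below fits a Langmuir isotherm (a common choice for adsorption equilibria; the abstract does not name the specific isotherm models used) to synthetic placeholder data by nonlinear least squares.

```python
# Minimal sketch: fit a Langmuir isotherm to adsorption equilibrium data by
# nonlinear least squares. Data points are synthetic placeholders, not the
# thesis measurements.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_d):
    """Adsorbed amount q as a function of liquid-phase concentration c."""
    return q_max * c / (k_d + c)

c_eq = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])   # UA/mL (illustrative)
q_eq = np.array([1.1, 1.9, 3.0, 4.2, 5.1, 5.6])    # adsorbed (illustrative)

(q_max, k_d), _ = curve_fit(langmuir, c_eq, q_eq, p0=[6.0, 0.3])
print(f"q_max = {q_max:.2f}, K_d = {k_d:.3f}")
```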
Abstract:
The objective of this work is to develop a non-stoichiometric equilibrium model to study parameter effects in the gasification of a feedstock in downdraft gasifiers. The non-stoichiometric equilibrium model is also known as the Gibbs free energy minimization method. Four models were developed and tested. First, a pure non-stoichiometric equilibrium model, M1, was developed; then the methane content was constrained by correlating experimental data, generating model M2. A kinetic constraint that determines the apparent gasification rate was added in model M3, and finally the two aforementioned constraints were implemented together in model M4. Models M2 and M4 proved to be the most accurate of the four, with mean RMS (root mean square error) values of 1.25 each. The gasification of Brazilian Pinus elliottii in a downdraft gasifier with air as the gasification agent was also studied. The input parameters considered were: (a) equivalence ratio (0.28-0.35); (b) moisture content (5-20%); (c) gasification time (30-120 min); and (d) carbon conversion efficiency (80-100%).
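The core of a non-stoichiometric model is a constrained minimization of total Gibbs energy over species moles, subject to elemental balances. The sketch below shows that structure for a reduced species set; the g/RT values, element totals, and species list are illustrative assumptions, not the paper's M1-M4 models.

```python
# Minimal sketch of non-stoichiometric equilibrium (Gibbs energy minimization):
# choose species moles n_i to minimize total G subject to elemental balances.
# The g/RT values are rough placeholders at one temperature, not fitted data.
import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "H2", "H2O", "CH4"]
g_rt = np.array([-23.5, -47.3, 0.0, -27.6, -2.0])  # g_f/(RT), illustrative
# Element matrix: rows C, H, O; columns follow `species`
E = np.array([[1, 1, 0, 0, 1],     # C
              [0, 0, 2, 2, 4],     # H
              [1, 2, 0, 1, 0]])    # O
b = np.array([1.0, 2.4, 1.2])      # element moles in the feed (illustrative)

def total_gibbs(n):
    n = np.clip(n, 1e-12, None)    # guard the logarithm
    return float(n @ (g_rt + np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: E @ n - b}
res = minimize(total_gibbs, x0=np.full(5, 0.5), constraints=cons,
               bounds=[(1e-10, None)] * 5, method="SLSQP")
print(dict(zip(species, res.x.round(4))))
```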
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate the abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and an analysis of the Dubbo weed data set. A simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation, and the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
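The thinned-point-process idea can be made concrete with a strip transect and a half-normal detection function: detected points contribute their intensity times detection probability, offset by the integrated detection probability over the strip. The sketch below simulates such data and recovers the intensity and detection scale by maximum likelihood; it is a simplification of the paper's spatial model, with made-up parameters.

```python
# Minimal sketch: maximum likelihood for distance sampling as a thinned
# Poisson process on a strip transect, with half-normal detection
# g(d) = exp(-d^2 / 2*sigma^2). Data are simulated; not the paper's model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
L, w, lam_true, sigma_true = 100.0, 3.0, 2.0, 1.0

# Simulate: Poisson(lam * L * 2w) animals; keep each with prob g(distance)
n_all = rng.poisson(lam_true * L * 2 * w)
d_all = rng.uniform(-w, w, n_all)
d_obs = d_all[rng.uniform(size=n_all) < np.exp(-d_all**2 / (2 * sigma_true**2))]

def nll(theta):
    lam, sigma = np.exp(theta)                  # enforce positivity
    # expected count: lam * L * integral of g over [-w, w]
    mu = lam * L * sigma * np.sqrt(2 * np.pi) * (2 * norm.cdf(w / sigma) - 1)
    return mu - np.sum(np.log(lam) - d_obs**2 / (2 * sigma**2))

res = minimize(nll, x0=np.log([1.0, 0.5]))
print("lambda, sigma:", np.exp(res.x).round(3))
```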
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
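For readers unfamiliar with this class of models, a stripped-down one-dimensional dynamic neural field of the kind DFT builds on is sketched below. Parameters are illustrative, and "development" is caricatured as a narrower excitatory kernel, loosely echoing the precision change described above.

```python
# Minimal sketch of a 1-D dynamic neural field:
# tau * du/dt = -u + h + input + convolution(kernel, f(u)).
# Parameters are illustrative, not those of the DFT spatial cognition model.
import numpy as np

x = np.linspace(-20, 20, 401)
dx = x[1] - x[0]
tau, h = 10.0, -5.0

def kernel(width):
    # local excitation minus broad inhibition ("Mexican hat")
    return 8 * np.exp(-x**2 / (2 * width**2)) \
         - 3 * np.exp(-x**2 / (2 * (3 * width)**2))

def simulate(width, steps=300, dt=1.0):
    u = np.full_like(x, h)
    stim = 6.0 * np.exp(-(x - 2.0)**2 / 2.0)     # transient target input
    for t in range(steps):
        f = 1.0 / (1.0 + np.exp(-4 * u))         # sigmoidal firing rate
        interaction = np.convolve(f, kernel(width), mode="same") * dx
        inp = stim if t < 100 else 0.0           # stimulus removed mid-run
        u += dt / tau * (-u + h + inp + interaction)
    return u

for width in (3.0, 1.5):                         # "younger" vs "older" field
    u = simulate(width)
    print(f"width={width}: peak at x={x[np.argmax(u)]:+.2f}, u_max={u.max():.2f}")
```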
Abstract:
This work investigates the reproducibility of precipitation simulated with an atmospheric general circulation model (AGCM) forced by subtropical South Atlantic sea surface temperature (SST) anomalies. This represents an important test of the model prior to investigating the impact of SSTs on regional climate. A five-member ensemble run was performed using the National Center for Atmospheric Research (NCAR) Community Climate Model, version 3 (CCM3). The CCM3 was forced by observed monthly SST over the South Atlantic from 20° to 60°S. The SST dataset used is from the Hadley Centre, covering the period September 1949-October 2001; this yields more than 50 years of simulation. A statistical technique is used to determine the reproducibility in the CCM3 runs and to assess potential predictability in precipitation. Empirical orthogonal function analysis is used to reconstruct the ensemble using the most reproducible forced modes in order to separate the atmospheric response to local SST forcing from its internal variability. Results for reproducibility show a seasonal dependence, with higher values during austral autumn and spring. The spatial distribution of reproducibility shows that the tropical atmosphere is dominated by the underlying SSTs, while variations in the subtropical-extratropical regions are primarily driven by internal variability. As such, changes in the South Atlantic convergence zone (SACZ) region are mainly dominated by internal atmospheric variability, while the intertropical convergence zone (ITCZ) has greater external dependence, making it more predictable. The reproducibility distribution reveals increased values after the reconstruction of the ensemble.
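The EOF reconstruction step has a generic form: decompose the anomaly field with a singular value decomposition and rebuild it from the leading modes. The sketch below shows that operation on random placeholder data, not CCM3 output.

```python
# Minimal sketch: EOF analysis via SVD and reconstruction with the leading
# modes. The "precipitation" field here is random placeholder data.
import numpy as np

rng = np.random.default_rng(3)
ntime, nspace = 120, 500                  # e.g., months x grid points
field = rng.standard_normal((ntime, nspace))

anom = field - field.mean(axis=0)         # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

k = 3                                     # keep the k leading (forced) modes
reconstructed = U[:, :k] * s[:k] @ Vt[:k]
explained = (s[:k]**2).sum() / (s**2).sum()
print(f"variance retained by {k} modes: {explained:.1%}")
```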
Abstract:
Planetary waves are key to large-scale dynamical adjustment in the global ocean, as they transfer energy from the east to the west side of oceanic basins; they connect the forcing in the ocean interior with the variability at its boundaries; and they change the local heat content, thus coupling oceanic, atmospheric, and biological processes. Planetary waves, mostly of the first baroclinic mode, are observed as distinctive patterns in global time series of sea surface height anomaly (SSHA) and heat storage. The goal of this study is to compare and validate large-scale SSHA signals from the coupled ocean-atmosphere general circulation Model for Interdisciplinary Research on Climate (MIROC) against TOPEX/POSEIDON satellite altimeter observations. The last decade of the model's time series is selected for comparison with the altimeter data. The wave patterns are separated from the meso- and large-scale SSHA signals by digital filters calibrated to select the same spectral bands in both model and altimeter data. The band-wise comparison allows for an assessment of the model's skill in simulating the dynamical components of the observed wave field. Comparisons regarding both the seasonal cycle and the Rossby wave field differ significantly among basins. Within the same basin, differences can occur between equal latitudes in opposite hemispheres. Furthermore, at some latitudes the MIROC reproduces biannual, annual, and semiannual planetary waves with phase speeds and average amplitudes similar to those observed by the altimeter, but with significant differences in phase.
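The band separation described above can be illustrated with a zero-phase digital band-pass filter applied to a synthetic SSHA series. The sampling interval and cutoff periods below are illustrative assumptions, not the calibrated filters of the study.

```python
# Minimal sketch: isolate the annual band of an SSHA time series with a
# zero-phase Butterworth band-pass filter. The series is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1 / 10.0                             # one sample every ~10 days
t = np.arange(0, 3650, 10.0)              # ten years of samples
ssha = (5 * np.sin(2 * np.pi * t / 365)   # annual signal
        + 2 * np.sin(2 * np.pi * t / 182.5)   # semiannual signal
        + np.random.default_rng(4).normal(0, 1, t.size))

# Band-pass around the annual period (300-450 days, illustrative cutoffs)
b, a = butter(4, [1 / 450, 1 / 300], btype="band", fs=fs)
annual = filtfilt(b, a, ssha)             # filtfilt -> zero phase shift
print(f"annual-band std: {annual.std():.2f} (input std {ssha.std():.2f})")
```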
Abstract:
The expansion of sugarcane growing in Brazil, spurred particularly by increased demand for ethanol, has triggered the need to evaluate the economic, social, and environmental impacts of this process, both on the country as a whole and on the growing regions. Even though the balance of costs and benefits is positive from an overall standpoint, this may not be so in specific producing regions, due to negative externalities. The objective of this paper is to estimate the effect of growing sugarcane on the human development index (HDI) and its sub-indices in cane producing regions. In the literature on matching effects, this is interpreted as the effect of the treatment on the treated. Location effects are controlled by spatial econometric techniques, giving rise to the spatial propensity score matching model. The authors analyze 424 minimum comparable areas (MCAs) in the treatment group, compared with 907 MCAs in the control group. The results suggest that the presence of sugarcane growing in these areas is not relevant to determine their social conditions, whether for better or worse. It is thus likely that public policies, especially those focused directly on improving education, health, and income generation/distribution, have much more noticeable effects on the municipal HDI.
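Setting the spatial-econometric correction aside, the matching step itself has a standard form: estimate propensity scores, match each treated unit to its nearest control, and average the outcome differences. The sketch below does this on simulated placeholder data with no true treatment effect, so the ATT estimate should hover near zero.

```python
# Minimal sketch of propensity score matching for the effect of treatment on
# the treated (without the paper's spatial correction). Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1331                                   # ~424 treated + 907 control MCAs
X = rng.standard_normal((n, 3))            # observable MCA characteristics
treated = rng.uniform(size=n) < 1 / (1 + np.exp(-(X[:, 0] - 0.8)))
hdi = 0.6 + 0.05 * X[:, 0] + 0.0 * treated + rng.normal(0, 0.02, n)  # no true effect

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Nearest-neighbor match on the propensity score, one control per treated unit
ctrl_idx = np.flatnonzero(~treated)
matches = ctrl_idx[np.abs(ps[ctrl_idx][None, :] - ps[treated][:, None]).argmin(axis=1)]
att = (hdi[treated] - hdi[matches]).mean()
print(f"ATT estimate: {att:+.4f}")
```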
Abstract:
In accelerating dark energy models, estimates of the Hubble constant, H0, from the Sunyaev-Zel'dovich effect (SZE) and X-ray surface brightness of galaxy clusters may depend on the matter content (Omega_M), the curvature (Omega_K), and the equation of state parameter (omega). In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical beta model obtained through the SZE/X-ray technique, we constrain H0 in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter omega = p_x/rho_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat ΛCDM model H0 = 74 (+5.0/-4.0) km s^-1 Mpc^-1 (1 sigma), whereas for a flat universe with a constant equation of state parameter we find H0 = 72 (+5.5/-4.0) km s^-1 Mpc^-1 (1 sigma). By assuming that galaxy clusters are described by a spherical beta model, these results change to H0 = 6 (+8.0/-7.0) and H0 = 59 (+9.0/-6.0) km s^-1 Mpc^-1 (1 sigma), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H0 estimates for this combination of data.
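Stripped of the BAO and CMB shift-parameter priors, the fitting problem is a chi-square comparison between observed cluster distances and the model angular diameter distance. The sketch below implements that for a flat XCDM cosmology with fabricated placeholder data.

```python
# Minimal sketch: angular diameter distance in a flat XCDM cosmology and a
# chi-square grid fit of H0 to cluster distances. Data are fabricated; no
# BAO/CMB priors are included, unlike the analysis above.
import numpy as np
from scipy.integrate import quad

c = 299_792.458  # speed of light, km/s

def d_A(z, H0, Om=0.3, w=-1.0):
    """Angular diameter distance (Mpc) in a flat XCDM universe."""
    E = lambda zz: np.sqrt(Om * (1 + zz)**3 + (1 - Om) * (1 + zz)**(3 * (1 + w)))
    integral, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    return c / H0 * integral / (1 + z)

# Placeholder "cluster" sample: 25 redshifts, noisy distances, 15% errors
z_cl = np.linspace(0.05, 0.5, 25)
noise = 1 + np.random.default_rng(6).normal(0, 0.05, 25)
dA_obs = np.array([d_A(z, 72.0) for z in z_cl]) * noise
err = 0.15 * dA_obs

H0_grid = np.linspace(50, 90, 401)
chi2 = [np.sum(((dA_obs - [d_A(z, H0) for z in z_cl]) / err)**2) for H0 in H0_grid]
print(f"best-fit H0: {H0_grid[int(np.argmin(chi2))]:.1f} km/s/Mpc")
```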
Abstract:
This thesis investigates the stability analysis of fluid-saturated porous media. In particular, the contribution of viscous heating to the onset of convective instability in flows through ducts is analysed. In order to evaluate the contribution of viscous dissipation, different geometries, different models for the balance equations, and different boundary conditions are used. Moreover, the local thermal non-equilibrium model is used to study the evolution of the temperature difference between the fluid and the solid matrix in a thermal boundary layer problem. In studying the onset of instability, different techniques for eigenvalue problems have been used: analytical solutions, asymptotic analyses, and numerical solutions by means of original and commercial codes.
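As a minimal example of the eigenvalue problems involved, the classical Darcy-Benard (Horton-Rogers-Lapwood) layer has the linear-theory dispersion relation Ra(a) = (pi^2 + a^2)^2 / a^2 for wavenumber a, whose minimum gives the critical Rayleigh number 4*pi^2; the sketch below recovers this numerically. Viscous dissipation and local thermal non-equilibrium, the thesis's actual focus, are not included.

```python
# Minimal sketch: critical Rayleigh number for the Darcy-Benard layer from
# the classical linear-theory dispersion relation (a simpler problem than
# the thesis's viscous-heating and LTNE configurations).
import numpy as np
from scipy.optimize import minimize_scalar

Ra = lambda a: (np.pi**2 + a**2)**2 / a**2   # neutral curve Ra(a)
res = minimize_scalar(Ra, bounds=(0.1, 10.0), method="bounded")
print(f"critical wavenumber a_c = {res.x:.4f} (pi = {np.pi:.4f})")
print(f"critical Rayleigh number Ra_c = {res.fun:.4f} (4*pi^2 = {4*np.pi**2:.4f})")
```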