29 results for dynamic stochastic general equilibrium models
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
We present a new radiation scheme for the Oxford Planetary Unified Model System for Venus, suitable for the solar and thermal bands. This new and fast radiative parameterization uses a different approach in the two main radiative wavelength bands: solar radiation (0.1–5.5 μm) and thermal radiation (1.7–260 μm). The solar radiation calculation is based on the delta-Eddington approximation (two-stream-type) with an adding layer method. For the thermal radiation case, a code based on an absorptivity/emissivity formulation is used. The new radiative transfer formulation is intended to be computationally light, to allow its incorporation into 3D global circulation models while still capturing the effect of atmospheric conditions on radiative fluxes. This will allow us to investigate dynamical-radiative-microphysical feedbacks. The model's flexibility can also be used to explore uncertainties in the Venus atmosphere, such as the optical properties of the deep atmosphere or the cloud amount. Results for radiative cooling and heating rates and for global-mean radiative-convective equilibrium temperature profiles under different atmospheric conditions are presented and discussed. This new scheme works on an atmospheric column and can easily be implemented in 3D Venus global circulation models.
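The delta-Eddington step above rescales each layer's optical properties before the two-stream solution is applied. A minimal sketch of that standard scaling (the textbook Joseph-Wiscombe-Weinman form is assumed here; this is not the scheme's actual code):

```python
# Delta-Eddington scaling of a layer's optical properties: the forward-
# scattering peak, taken as the fraction f = g**2, is truncated and folded
# into the direct beam, and the remaining optical depth, single-scattering
# albedo and asymmetry parameter are rescaled accordingly.

def delta_eddington_scale(tau, omega, g):
    """Return scaled (tau', omega', g') for a layer with optical depth tau,
    single-scattering albedo omega and asymmetry parameter g."""
    f = g * g                                    # forward-peak fraction
    tau_s = (1.0 - omega * f) * tau              # scaled optical depth
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = g / (1.0 + g)                          # (g - f) / (1 - f) with f = g**2
    return tau_s, omega_s, g_s
```

The scaled layer then feeds the two-stream adding method; because the sharp forward peak is treated as unscattered, the two-stream fluxes are far more accurate for strongly forward-scattering cloud particles.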
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) into the four workhorse models of the modern trade literature within computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model and the Eaton and Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs and a demand externality. As is already known in the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason the Melitz model features a demand externality: in a larger market more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several idealized, one-thousand-year-long 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks.
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Introduction: According to the ecological view, coordination is established by virtue of social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the player on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being covered safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality-robust standard errors was chosen to estimate parameters.
Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (βself-efficacy = 0.133, p < .001, OR = 1.142; βgroup-efficacy I = 0.128, p < .001, OR = 1.137; βoffensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (βgroup-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (βself-efficacy = 0.135, p < .01; βoffensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players who are part of a team, this psychological landscape is constituted not only by probabilities that refer to the potential and consequences of individual behavior, but also by those referring to the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informational basis extends to aspects of the group level. References: Lewin, K. (1951). In D. Cartwright (Ed.), Field theory in social sciences: Selected theoretical papers by Kurt Lewin. New York: Harper & Brothers.
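The odds ratios quoted above follow from the coefficients under a logit link, the usual choice for binomially distributed outcomes (an assumption, but one consistent with the reported values). A quick check:

```python
import math

# Under a logit link, the odds ratio for a one-unit increase in a rating
# is exp(beta); reproducing the study's reported ORs from its coefficients.
betas = {
    "self-efficacy": 0.133,
    "group-efficacy I": 0.128,
    "offensive convenience": 0.057,
}
odds_ratios = {name: round(math.exp(b), 3) for name, b in betas.items()}
# odds_ratios: {'self-efficacy': 1.142, 'group-efficacy I': 1.137,
#               'offensive convenience': 1.059}
```

Each OR thus says how much the odds of a zone being chosen as the passing destination multiply per one-point increase on the corresponding 1–10 rating scale.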
Abstract:
Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. A better understanding is therefore required of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany, T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba, Japan, TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best-track data, leading to a better representation of TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes affect the projected loss potentials. However, these state-of-the-art models still yield contradictory results, so they are not yet suitable for providing robust loss estimates, owing to uncertainties in simulated hurricane intensity, location and frequency.
Abstract:
The mid-Holocene (6 kyr BP; thousand years before present) is a key period for studying the consistency between model results and proxy-based reconstructions, as it is a standard test case for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. This analysis confirms that models and data agree on the large-scale spatial pattern, but that the models underestimate the magnitude of some observed changes and that large discrepancies occur at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation, using a data-assimilation method based on a particle filter. In one simulation all 50 proxy-based records are used, while in the other two only the continental or only the oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the midlatitude westerlies, which warms northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based palaeoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with the model physics.
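The particle-filter idea behind the data assimilation can be sketched in miniature. The toy version below works on a single scalar state with a random-walk process model and Gaussian observation likelihood (nothing like the full LOVECLIM state vector, but it shows the propagate-weight-resample cycle):

```python
import math
import random

# Minimal bootstrap particle filter: propagate an ensemble of particles,
# weight them by how well they match each observation, form the posterior
# mean, then resample so particles concentrate where the data point.

def particle_filter(observations, n_particles=500, proc_sd=1.0, obs_sd=1.0, seed=0):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate: random-walk process model
        particles = [x + rng.gauss(0.0, proc_sd) for x in particles]
        # weight: Gaussian likelihood of the observation given each particle
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # posterior-mean state estimate
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample: duplicate likely particles, drop unlikely ones
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

est = particle_filter([1.0, 1.2, 0.9, 1.1])
```

In the study the "observations" are the proxy records and each particle is a full model simulation, but the weighting-and-resampling logic is the same.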
Abstract:
A key to understanding the dynamic response to large tropical volcanic eruptions is the warming of the tropical lower stratosphere and the concomitant intensification of the polar vortices. Although this mechanism is reproduced by most general circulation models today, most models still fail to produce an appropriate winter warming pattern in the Northern Hemisphere. In this study, ensemble sensitivity experiments were carried out with a coupled atmosphere-ocean model to assess the influence of different ozone climatologies on atmospheric dynamics and, in particular, on the northern hemispheric winter warming. The ensemble experiments were perturbed by a single Tambora-like eruption. Larger meridional gradients in lower-stratospheric ozone favor the coupling of zonal wind anomalies between the stratosphere and the troposphere after the eruption. The associated sea level pressure, temperature, and precipitation patterns are more pronounced, and the northern hemispheric winter warming is highly significant. Conversely, weaker meridional ozone gradients lead to a weaker winter warming response and weaker associated patterns. The differences in the number of stratosphere-troposphere coupling events between the ensemble experiments indicate a nonlinear response of the dynamics with respect to the ozone and the volcanic forcing.
Abstract:
Comparisons of climate model hindcasts with independent proxy data are essential for assessing model performance in non-analogue situations. However, standardized palaeoclimate data sets for assessing the spatial pattern of past climatic change across continents are lacking for some of the most dynamic episodes of Earth's recent past. Here we present a new chironomid-based palaeotemperature dataset designed to assess climate model hindcasts of regional summer temperature change in Europe during the late-glacial and early Holocene. Latitudinal and longitudinal patterns of inferred temperature change are in excellent agreement with simulations by the ECHAM-4 model, implying that atmospheric general circulation models like ECHAM-4 can successfully predict regionally diverging temperature trends in Europe, even when conditions differ significantly from present. However, ECHAM-4 simulates larger amplitudes of change and higher temperatures during warm phases than our palaeotemperature estimates, suggesting that this and similar models may overestimate past, and potentially also future, summer temperature changes in Europe.
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
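The claim that the photospheric optical depth is not always 2/3 refers to the classic grey, non-irradiated result, where the temperature-optical-depth relation in the Eddington approximation gives T = Teff exactly at τ = 2/3. A quick numerical check of that baseline (the paper's generalized profiles, with irradiation and non-isotropic scattering, deviate from it):

```python
# Grey Eddington approximation for a non-irradiated atmosphere:
#     T(tau)**4 = (3/4) * Teff**4 * (tau + 2/3)
# so the local temperature equals the effective temperature at tau = 2/3,
# which is why 2/3 is the textbook photospheric optical depth.

def grey_temperature(tau, t_eff):
    return t_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

t_photosphere = grey_temperature(2.0 / 3.0, 1000.0)  # ≈ 1000.0 K
t_top = grey_temperature(0.0, 1000.0)                # the cooler skin temperature
```

With stellar irradiation, internal heat and scattering included, the τ at which T = Teff shifts away from 2/3, which is the generalization the abstract describes.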
Abstract:
High-resolution, ground-based and independent observations, including co-located wind radiometers, lidar stations, and infrasound instruments, are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern-hemisphere midlatitudes. Systematic comparisons between the observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses, including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running Max Planck Institute Earth System Model, Low Resolution (MPI-ESM-LR) climate model are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter, when the variability from large-scale planetary waves dominates. Between lidar data and MPI-ESM-LR, there is overall agreement in spectral amplitude down to periods of 15–20 days. At shorter time scales, the model lacks variability by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project, which integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
Abstract:
Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
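The mixed (stochastic) Nash equilibrium concept can be illustrated with matching pennies, the textbook two-player game whose only equilibrium is randomizing with probability 0.5 (this is an illustrative stand-in, not one of the games analysed above). At the equilibrium, each player is indifferent between their pure actions:

```python
# Matching pennies: the row player wins (+1) if the two coin choices match
# and loses (-1) otherwise. p and q are the probabilities that the row and
# column players play 'heads'.

def row_payoff(p, q):
    match = p * q + (1.0 - p) * (1.0 - q)      # probability the choices match
    mismatch = p * (1.0 - q) + (1.0 - p) * q   # probability they differ
    return match * 1.0 + mismatch * -1.0

# Against q = 0.5, the row player's payoff is 0 whether it plays heads
# always, tails always, or anything in between: no pure strategy helps,
# which is exactly the indifference condition defining the mixed Nash
# equilibrium that a stochastic policy must learn.
```

A deterministic learner cannot represent this equilibrium, which is why the abstract stresses that only methods following the stochastic reward gradient perform optimally here.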
Abstract:
The relative abundance of the heavy water isotopologue HDO provides a deeper insight into the atmospheric hydrological cycle. The SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) allows for global retrievals of the ratio HDO/H2O in the 2.3 micron wavelength range. However, the spectroscopy of water lines in this region remains a large source of uncertainty for these retrievals. We therefore evaluate and improve the water spectroscopy in the range 4174–4300 cm⁻¹ and test if this reduces systematic uncertainties in the SCIAMACHY retrievals of HDO/H2O. We use a laboratory spectrum of water vapour to fit line intensity, air broadening and wavelength shift parameters. The improved spectroscopy is tested on a series of ground-based high resolution FTS spectra as well as on SCIAMACHY retrievals of H2O and the ratio HDO/H2O. We find that the improved spectroscopy leads to lower residuals in the FTS spectra compared to HITRAN 2008 and Jenouvrier et al. (2007) spectroscopy, and the retrievals become more robust against changes in the retrieval window. For both the FTS and SCIAMACHY measurements, the retrieved total H2O columns decrease by 2–4% and we find a negative shift of the HDO/H2O ratio, which for SCIAMACHY is partly compensated by changes in the retrieval setup and calibration software. The updated SCIAMACHY HDO/H2O product shows somewhat steeper latitudinal and temporal gradients and a steeper Rayleigh distillation curve, strengthening previous conclusions that current isotope-enabled general circulation models underestimate the variability in the near-surface HDO/H2O ratio.
Abstract:
Background: Young children are known to be the most frequent hospital users compared with older children and young adults. They are therefore an important population from the economic and policy perspectives of health care delivery. In Switzerland, complete hospitalization discharge records for children [<5 years] from four consecutive years [2002–2005] were evaluated in order to analyze variation in patterns of hospital use. Methods: Stationary (inpatient) and outpatient hospitalization rates at the aggregated ZIP-code level were calculated based on census data provided by the Swiss federal statistical office (BfS). Thirty-seven hospital service areas for children [HSAP] were created with the method of "small area analysis", reflecting user-based health markets. Descriptive statistics and general linear models were applied to analyze the data. Results: The mean stationary hospitalization rate over the four years was 66.1 discharges per 1000 children. Hospitalizations for respiratory problems are the most frequent in young children (25.9%), and the highest hospitalization rates are associated with geographical factors of urban areas and specific language regions. Statistical models yielded significant effect estimates for these factors and a significant association between ambulatory/outpatient and stationary hospitalization rates. Conclusion: The utilization-based approach, using HSAP as a spatial representation of user-based health markets, is a valid instrument and allows assessing the supply of and demand for children's health care services. The study provides, for the first time, estimates for several factors associated with the large variation in the utilization and provision of paediatric health care resources in Switzerland.
Abstract:
OBJECTIVE: To assess whether stress further increases hypercoagulation in older individuals. We investigated whether acute stress-induced changes in coagulation parameters differ with age. It is known that hypercoagulation occurs in response to acute stress and that hemostasis shifts toward a hypercoagulable state with age. However, it is not yet known whether acute stress further increases hypercoagulation in older individuals and thus may increase their risk for cardiovascular disease (CVD). METHODS: A total of 63 medication-free, nonsmoking men, aged between 20 and 65 years (mean ± standard error of the mean = 36.7 ± 1.7 years), underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma clotting factor VII activity (FVII:C), fibrinogen, and D-dimer at rest, immediately after stress, and 20 minutes after stress. RESULTS: Increased age predicted greater increases in fibrinogen (β = 0.26, p = .041; ΔR² = .05), FVII:C (β = 0.40, p = .006; ΔR² = .11), and D-dimer (β = 0.51, p < .001; ΔR² = .18) from rest to 20 minutes after stress, independent of body mass index and mean arterial blood pressure. General linear models revealed significant effects of age and stress on fibrinogen, FVII:C, and D-dimer (main effects: p < .04), and greater D-dimer stress reactivity with older age (age × stress interaction: F(1.5, 90.4) = 4.36, p = .024; f = 0.33). CONCLUSIONS: Our results suggest that acute stress might increase vulnerability in the elderly for hypercoagulability and subsequent hemostasis-associated diseases such as CVD.
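The ΔR² statistics above come from hierarchical regression: the gain in explained variance when age is added to a model that already contains the covariates. A minimal sketch of that computation on synthetic data (the variable names and effect sizes here are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hierarchical regression: Delta-R^2 is R^2 of the full model (covariates
# plus age) minus R^2 of the reduced model (covariates only).

rng = np.random.default_rng(1)
n = 63
bmi = rng.normal(25.0, 3.0, n)           # body mass index (covariate)
map_ = rng.normal(90.0, 8.0, n)          # mean arterial pressure (covariate)
age = rng.normal(37.0, 10.0, n)          # predictor of interest
reactivity = 0.04 * age + 0.02 * bmi + rng.normal(0.0, 1.0, n)  # outcome

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept; predictors is a list of arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / ss_tot

delta_r2 = r_squared([bmi, map_, age], reactivity) - r_squared([bmi, map_], reactivity)
```

Since OLS can only improve the fit when a predictor is added, ΔR² is non-negative and quantifies age's unique contribution over the covariates.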
Abstract:
OBJECTIVE: To investigate the relationship between social support and coagulation parameter reactivity to mental stress in men, and to determine whether norepinephrine is involved. Lower social support is associated with higher basal coagulation activity and greater norepinephrine stress reactivity, which in turn is linked with hypercoagulability. However, it is not known whether low social support interacts with stress to further increase coagulation reactivity, or whether norepinephrine affects this association. These findings may be important for determining whether low social support influences thrombosis and possible acute coronary events in response to acute stress. METHODS: We measured perceived social support in 63 medication-free, nonsmoking men (age (mean ± standard error of the mean) = 36.7 ± 1.7 years) who underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma D-dimer, fibrinogen, clotting factor VII activity (FVII:C), and plasma norepinephrine at rest as well as immediately and 20 minutes after stress. RESULTS: Independent of body mass index, mean arterial pressure, and age, lower social support was associated with higher D-dimer and fibrinogen levels at baseline (p < .012) and with greater increases in fibrinogen (β = −0.36, p = .001; ΔR² = .12) and D-dimer (β = −0.21, p = .017; ΔR² = .04), but not in FVII:C (p = .83), from baseline to 20 minutes after stress. General linear models revealed significant main effects of social support and stress on fibrinogen, D-dimer, and norepinephrine (p < .035). Controlling for norepinephrine did not change the significance of the reported associations between social support and the coagulation measures D-dimer and fibrinogen.
CONCLUSIONS: Our results suggest that lower social support is associated with greater coagulation activity before and after acute stress, which was unrelated to norepinephrine reactivity.