975 results for "Role of aggregate uncertainty"


Relevance: 100.00%

Publisher:

Abstract:

Several studies using ocean–atmosphere general circulation models (GCMs) suggest that the atmospheric component plays a dominant role in the modelled El Niño–Southern Oscillation (ENSO). To help elucidate these findings, the two main atmospheric feedbacks relevant to ENSO, the Bjerknes positive feedback (μ) and the heat flux negative feedback (α), are analysed here in nine AMIP runs of the CMIP3 multimodel dataset. We find that these models generally have improved feedbacks compared to the coupled runs that were analysed in Part I of this study. The Bjerknes feedback, μ, is increased in most AMIP runs compared to their coupled counterparts, and exhibits both positive and negative biases with respect to ERA40. As in the coupled runs, the shortwave and latent heat flux feedbacks are the two dominant components of α in the AMIP runs. We investigate the mechanisms behind these two important feedbacks, focusing in particular on the strong 1997–1998 El Niño. Biases in the shortwave flux feedback, αSW, are the main source of model uncertainty in α. Most models do not successfully represent the negative αSW in the East Pacific, primarily due to an overly strong low-cloud positive feedback in the far eastern Pacific. Biases in the cloud response to dynamical changes dominate the modelled αSW biases, though errors in the large-scale circulation response to sea surface temperature (SST) forcing also play a role. Analysis of the cloud radiative forcing in the East Pacific reveals model biases in low cloud amount and optical thickness which may affect αSW. We further show that the negative latent heat flux feedback, αLH, exhibits less diversity than αSW and is primarily driven by variations in the near-surface specific humidity difference. However, biases in both the near-surface wind speed and the humidity response to SST forcing can explain the inter-model αLH differences.
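Feedbacks of this kind are commonly estimated as linear regression slopes of an anomaly time series on an SST anomaly index. Below is a minimal sketch with synthetic data; the region choices (Niño-3 SST, Niño-4 wind stress), variable names and numbers are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def feedback_slope(sst_anom, response_anom):
    """Least-squares slope of a response anomaly regressed on an SST anomaly."""
    slope, _intercept = np.polyfit(sst_anom, response_anom, 1)
    return slope

# Synthetic monthly anomaly series, for illustration only.
rng = np.random.default_rng(0)
sst = rng.normal(0.0, 0.8, 360)                  # Nino-3 SST anomaly (K)
taux = 0.012 * sst + rng.normal(0, 0.005, 360)   # Nino-4 zonal wind stress (N m^-2)
qnet = -15.0 * sst + rng.normal(0, 5.0, 360)     # net surface heat flux (W m^-2)

mu = feedback_slope(sst, taux)     # Bjerknes feedback: positive slope
alpha = feedback_slope(sst, qnet)  # heat flux feedback: negative slope
print(f"mu = {mu:.4f} N m^-2 K^-1, alpha = {alpha:.1f} W m^-2 K^-1")
```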

Relevance: 100.00%

Publisher:

Abstract:

Many climate models have problems simulating Indian summer monsoon rainfall and its variability, resulting in considerable uncertainty in future projections. Problems may relate to many factors, such as local effects of the formulation of physical parametrisation schemes, while common model biases that develop elsewhere within the climate system may also be important. Here we examine the extent and impact of cold sea surface temperature (SST) biases developing in the northern Arabian Sea in the CMIP5 multi-model ensemble, where such SST biases are shown to be common. Such biases have previously been shown to reduce monsoon rainfall in the Met Office Unified Model (MetUM) by weakening moisture fluxes incident upon India. The Arabian Sea SST biases in CMIP5 models consistently develop in winter, via strengthening of the winter monsoon circulation, and persist into spring and summer. A clear relationship exists between Arabian Sea cold SST bias and weak monsoon rainfall in CMIP5 models, similar to effects in the MetUM. Part of this effect may also relate to other factors, such as forcing of the early monsoon by spring-time excessive equatorial precipitation. Atmosphere-only future time-slice experiments show that Arabian Sea cold SST biases have potential to weaken future monsoon rainfall increases by limiting moisture flux acceleration through non-linearity of the Clausius-Clapeyron relationship. Analysis of CMIP5 model future scenario simulations suggests that, while such effects are likely small compared to other sources of uncertainty, models with large Arabian Sea cold SST biases suppress the range of potential outcomes for changes to future early monsoon rainfall.
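The Clausius-Clapeyron mechanism invoked here is easy to make concrete: because saturation vapour pressure grows roughly exponentially with temperature, the same warming applied to a cold-biased SST yields a smaller absolute increase in available moisture. A minimal sketch using Bolton's (1980) approximation, an assumed stand-in; the paper's own moisture-flux calculation may differ:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Bolton (1980) approximation to saturation vapour pressure (hPa)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

warming = 2.0  # hypothetical future SST increase (K)
for sst in (26.0, 24.0):  # unbiased vs. cold-biased SST (deg C); illustrative values
    delta_e = saturation_vapor_pressure(sst + warming) - saturation_vapor_pressure(sst)
    print(f"SST {sst:.0f} C: saturation vapour pressure rises by {delta_e:.2f} hPa")
```

The cold-biased case gains less moisture for the same warming, which is the sense in which the bias limits future moisture flux acceleration.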

Relevance: 100.00%

Publisher:

Abstract:

The effects of different water application rates (3, 10, 15 and 30 mm/h) and of topsoil removal on the rate of downward water movement through the cryoturbated chalk zone in southern England were investigated in situ. During and after each application of water, changes in the water content and matric potential of the profile were monitored and percolate was collected in troughs. The measured water breakthrough times showed that water moved to 1.2 m depth quickly (in 8.2 h) even with an application rate as low as 3 mm/h, and that the time was only 3 h when water was applied at a rate of 15 mm/h. These breakthrough times were about 150- and 422-fold shorter, respectively, than those expected if the water had been conducted by the matrix alone. Percolate was collected in troughs within 3.5 h at 1.2 m depth when water was applied at 30 mm/h, and the quantity collected indicated that a significant amount of the surface-applied water moved downward through inter-aggregate pores. The small increase in volumetric water content (about 3%) in excess of the matrix water content resulted in a large increase in pore water velocities, from 0.20 to 5.3 m/d. The presence of a soil layer affected the time taken for water to travel through the cryoturbated chalk layer; within the soil layer itself, water took about 1-2 h to pass through, depending on the application intensity.
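The headline figures follow from simple arithmetic on the reported breakthrough times. The sketch below computes the effective wetting-front velocity (depth divided by breakthrough time); note this is a naive diagnostic for orientation, not necessarily the definition behind the paper's 0.20-5.3 m/d pore water velocities.

```python
depth_m = 1.2                       # depth of the collection troughs (m)
breakthrough_h = {3: 8.2, 15: 3.0}  # application rate (mm/h) -> breakthrough time (h)

for rate, hours in breakthrough_h.items():
    velocity_m_per_day = depth_m / (hours / 24.0)
    print(f"{rate} mm/h: breakthrough at {depth_m} m after {hours} h "
          f"-> {velocity_m_per_day:.1f} m/d effective wetting-front velocity")
```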

Relevance: 100.00%

Publisher:

Abstract:

The occurrence of destructive mesoscale ‘polar low’ cyclones in the subpolar North Atlantic is projected to decline under anthropogenic change, due to an increase in atmospheric static stability. This letter reports on the role of changes in ocean circulation in shaping the atmospheric stability. In particular, the Atlantic Meridional Overturning Circulation (AMOC) is projected to weaken in response to anthropogenic forcing, leading to a local minimum in warming in this region. The reduced warming is restricted to the lower troposphere, hence contributing to the increase in static stability. Linear correlation analysis of the CMIP3 climate model ensemble suggests that around half of the model uncertainty in the projected stability response arises from the varied response of the AMOC between models.
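The "around half" attribution is the kind of statement that follows from the squared correlation (r²) in a linear regression across ensemble members. A minimal sketch with synthetic stand-ins for the per-model diagnostics:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-model diagnostics standing in for the CMIP3 ensemble.
amoc_change = rng.normal(-5.0, 2.0, 20)                        # AMOC weakening (Sv)
stability_change = 0.1 * amoc_change + rng.normal(0, 0.2, 20)  # stability response

r = np.corrcoef(amoc_change, stability_change)[0, 1]
print(f"r = {r:.2f}; fraction of inter-model variance explained: r^2 = {r**2:.2f}")
```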

Relevance: 100.00%

Publisher:

Abstract:

Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻². Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m⁻² with 90% uncertainty bounds of −1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently at different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
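The bounded estimates quoted above lend themselves to Monte Carlo propagation: sample each forcing term from a distribution matching its best estimate and 90% bounds, then sum. The sketch below is a toy version; the `indirect` term is a made-up placeholder chosen only so the total lands near the quoted +1.1 W m⁻², and the assessment's actual uncertainty propagation is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

def sample(best, lo, hi):
    """Crude split-normal sampler matching a best estimate and 90% bounds."""
    z = rng.normal(size=N)
    sigma = np.where(z < 0, (best - lo) / 1.645, (hi - best) / 1.645)
    return best + sigma * z

direct = sample(0.71, 0.08, 1.27)   # direct BC forcing from the abstract (W m^-2)
indirect = sample(0.39, -0.3, 1.1)  # hypothetical cloud + cryosphere terms (W m^-2)
total = direct + indirect

print(f"median {np.median(total):+.2f} W m^-2, 90% bounds "
      f"({np.percentile(total, 5):+.2f}, {np.percentile(total, 95):+.2f})")
```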

Relevance: 100.00%

Publisher:

Abstract:

The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy because it increases with the number of elements. The vector average over local vectors that vary in direction always underestimates the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC solution for arrays with a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In such cases perceived velocity generally defaults to the HVA.
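One way to make the HVA concrete, consistent with the description above: invert each local velocity (reciprocal speed, same direction), take the ordinary mean, and invert the result back. Treating 2D velocities as complex numbers keeps this to a few lines; the construction of the local normal velocities below assumes a rigidly translating contour, and all numbers are illustrative.

```python
import numpy as np

def harmonic_vector_average(velocities):
    """Invert each vector's length, average, then invert the mean's length."""
    v = np.asarray(velocities, dtype=complex)
    inverted = v / np.abs(v) ** 2          # reciprocal speed, same direction
    m = inverted.mean()
    return m / np.abs(m) ** 2

# Local normal velocities of a contour translating rigidly with velocity V.
V = 2.0 + 0.0j                              # global velocity: 2 units/s rightward
angles = np.deg2rad([-60, -30, 0, 30, 60])  # normals relative to motion direction
local_v = np.abs(V) * np.cos(angles) * np.exp(1j * angles)  # cosine speed pattern

print(harmonic_vector_average(local_v))  # ~ (2+0j): recovers the global velocity
print(local_v.mean())                    # vector average: only 1.2 units/s
```

For this unbiased sample the HVA returns the true global velocity, while the plain vector average returns a speed of 1.2 units/s, illustrating the underestimate noted above.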

Relevance: 100.00%

Publisher:

Abstract:

We use both Granger-causality and instrumental variables (IV) methods to examine the impact of index fund positions on price returns for the main US grains and oilseed futures markets. Our analysis supports earlier conclusions that Granger-causal impacts are generally not discernible. However, market microstructure theory suggests trading impacts should be instantaneous. IV-based tests for contemporaneous causality provide stronger evidence of price impact. We find even stronger evidence that changes in index positions can help predict future changes in aggregate commodity price indices. This result suggests that changes in index investment are in part driven by information which predicts commodity price changes over the coming months.
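A minimal sketch of the Granger step on synthetic series (the IV stage is omitted); `statsmodels` provides the standard test, and the column names and data here are placeholders:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 300
returns = rng.normal(size=n)      # synthetic futures price returns
d_positions = rng.normal(size=n)  # synthetic changes in index fund positions

# Does the second column help predict the first? Null: no predictive content.
data = pd.DataFrame({"returns": returns, "d_positions": d_positions})
grangercausalitytests(data[["returns", "d_positions"]], maxlag=4)
```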

Relevance: 100.00%

Publisher:

Abstract:

The sensitivity of solar irradiance at the surface to variability in aerosol intensive optical properties is investigated for a site (Alta Floresta) in the southern portion of the Amazon basin, using detailed comparisons between measured and modeled irradiances. Apart from the aerosol intensive optical properties, specifically the single scattering albedo (ω₀λ) and asymmetry parameter (gλ), which were assumed constant, all other relevant model inputs were prescribed from observations. For clean conditions, the differences between observed and modeled irradiances were consistent with instrumental uncertainty. For polluted conditions, the agreement was significantly worse, with a root-mean-square difference three times larger (23.5 W m⁻²). Analysis revealed a noteworthy correlation between the irradiance differences (observed minus modeled) and the column water vapor (CWV) under polluted conditions. Positive differences occurred mostly in wet conditions, while the differences became more negative as the atmosphere dried. To explore the hypothesis that the irradiance differences might be linked to the modulation of ω₀λ and gλ by humidity, AERONET retrievals of aerosol properties and CWV over the same site were analyzed. The results highlight the potential role of humidity in modifying ω₀λ and gλ and suggest that, to explain the observed relationship between irradiance differences and aerosol properties, the focus has to be on humidity-dependent processes that affect the particles' chemical composition. Undoubtedly, there is a need to better understand the role of humidity in modifying the properties of smoke aerosols in the southern portion of the Amazon basin.
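The 23.5 W m⁻² figure is a root-mean-square difference, and the CWV relationship is a simple correlation of the irradiance residuals with column water vapour. A sketch of both diagnostics on synthetic numbers:

```python
import numpy as np

def rmsd(observed, modeled):
    """Root-mean-square difference between observed and modeled irradiance."""
    d = np.asarray(observed) - np.asarray(modeled)
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(4)
obs = 600 + rng.normal(0, 30, 200)  # synthetic surface irradiance (W m^-2)
mod = obs - rng.normal(8, 20, 200)  # synthetic model values with bias and scatter
cwv = rng.normal(4.0, 0.8, 200)     # synthetic column water vapor (cm)

print(f"RMSD = {rmsd(obs, mod):.1f} W m^-2")
print(f"corr(obs - mod, CWV) = {np.corrcoef(obs - mod, cwv)[0, 1]:+.2f}")
```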

Relevance: 100.00%

Publisher:

Abstract:

During the period of 1990-2002 US households experienced a dramatic wealth cycle, induced by a 369% appreciation in the value of real per capita liquid stock market assets followed by a 55% decline. However, consumer spending in real terms continued to rise throughout this period. Using data from 1990-2005, traditional life-cycle approaches to estimating macroeconomic wealth effects confront two puzzles: (i) econometric evidence of a stable cointegrating relationship among consumption, income, and wealth is weak at best; and (ii) life-cycle models that rely on aggregate measures of wealth cannot explain why consumption did not collapse when the value of stock market assets declined so dramatically. We address both puzzles by decomposing wealth according to the liquidity of household assets. We find that the significant appreciation in the value of real estate assets that occurred after the peak of the wealth cycle helped sustain consumer spending from 2001 to 2005.
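The "weak at best" cointegration evidence corresponds to residual-based tests such as Engle-Granger, for which `statsmodels` has a direct implementation. A sketch on synthetic integrated series (the coefficients are arbitrary):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(5)
n = 240                               # e.g. 20 years of monthly observations
income = rng.normal(size=n).cumsum()  # synthetic I(1) series
wealth = rng.normal(size=n).cumsum()
consumption = 0.6 * income + 0.1 * wealth + rng.normal(0, 0.5, n)

# Engle-Granger test; the null hypothesis is *no* cointegration.
tstat, pvalue, _crit = coint(consumption, np.column_stack([income, wealth]))
print(f"t = {tstat:.2f}, p = {pvalue:.3f}")
```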

Relevance: 100.00%

Publisher:

Abstract:

Climate change has resulted in substantial variations in annual extreme rainfall quantiles across different durations and return periods. Predicting future changes in extreme rainfall quantiles is essential for various water resources design, assessment, and decision-making purposes. Current predictions of future rainfall extremes, however, exhibit large uncertainties. According to extreme value theory, rainfall extremes are random variables, with distributions that differ across return periods; there is therefore uncertainty even under current climate conditions. Regarding future conditions, our large-scale knowledge is obtained from global climate models forced with particular emission scenarios. There are widely known deficiencies in climate models, particularly with respect to precipitation projections, and recognized limitations of emission scenarios in representing future global change. Apart from these large-scale uncertainties, downscaling methods add further uncertainty to estimates of future extreme rainfall when they convert the larger-scale projections to the local scale. The aim of this research is to address these uncertainties in future projections of extreme rainfall of different durations and return periods. We combined three emission scenarios with two global climate models and used LARS-WG, a well-known weather generator, to stochastically downscale the daily climate model projections for the city of Saskatoon, Canada, by 2100. The downscaled projections were further disaggregated to hourly resolution using our new stochastic, non-parametric rainfall disaggregator. Extreme rainfall quantiles can then be identified for different durations (1-hour, 2-hour, 4-hour, 6-hour, 12-hour, 18-hour and 24-hour) and return periods (2-year, 10-year, 25-year, 50-year, 100-year) using the Generalized Extreme Value (GEV) distribution. By providing multiple realizations of future rainfall, we attempt to measure the extent of the total predictive uncertainty contributed by climate models, emission scenarios, and the downscaling/disaggregation procedures. The results show different proportions of these contributors across durations and return periods.
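The quantile step at the end is a standard GEV fit to annual maxima followed by an inverse-CDF evaluation at each return period. A minimal sketch with `scipy` on synthetic annual maxima (all values are placeholders, not the Saskatoon results):

```python
from scipy.stats import genextreme

# Synthetic annual-maximum 1-hour rainfall (mm) for one downscaled realization.
annual_max = genextreme.rvs(c=-0.1, loc=18.0, scale=5.0, size=50, random_state=7)

shape, loc, scale = genextreme.fit(annual_max)
for T in (2, 10, 25, 50, 100):  # return periods (years)
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year 1-hour rainfall: {q:.1f} mm")
```

Repeating this over many realizations (scenarios x models x stochastic downscaling runs) yields the spread from which the contributions to total predictive uncertainty can be apportioned.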

Relevance: 100.00%

Publisher:

Abstract:

In a market where past sales embed information about consumers’ tastes (quality), we analyze the seller’s incentives to invest in a costly advertising campaign to report them, under two informational assumptions. In the first scenario, a pooling equilibrium with past-sales advertising is derived. Information revelation only occurs when the seller benefits from the herding behaviour that the advertising campaign induces on the part of consumers. In the second informational regime, a separating equilibrium with past-sales advertising is computed. Information revelation always happens, either through prices or through costly advertisements.

Relevance: 100.00%

Publisher:

Abstract:

This paper aims to contribute to the research agenda on the sources of price stickiness, showing that the adoption of nominal price rigidity may be an optimal reaction by firms to consumers’ behavior, even if firms face no adjustment costs. Under standard, broadly accepted assumptions about the behavior of economic agents, we show that competition between firms can lead to the adoption of sticky prices as a (sub-game perfect) equilibrium strategy. We introduce the concept of a consumption-centers model economy in which there are several complete markets. Moreover, we weaken some traditional assumptions used in standard monetary policy models by assuming that households have imperfect information about the inefficient time-varying cost shocks faced by the firms, e.g. those determining the inefficient equilibrium output levels under flexible prices. The timing of events is assumed to be such that, in every period, consumers gain access to the actual prices prevailing in the market only after choosing a particular consumption center. Since such choices under uncertainty may decrease the expected utilities of risk-averse consumers, competitive firms adopt some degree of price stickiness in order to minimize price uncertainty and attract more customers.

Relevance: 100.00%

Publisher:

Abstract:

Sectoral policies make explicit and implicit assumptions about the behaviour and capabilities of the agents (such as dynamic responses to market signals, demand-led assistance, collaborative efforts, participation in financing), which we consider to be rather unrealistic. Because of this lack of realism, policies that aim to be neutral often turn out to be highly exclusive. They fail to give sufficient importance to the special features of the sector (with its high climatic, biological and commercial risks and its slow adaptation) or to the fact that those who take decisions in agriculture are now mostly in an inferior position because of incomes below the poverty line, inadequate training, traditions based on centuries of living in precarious conditions, and geographical location in marginal areas, far from infrastructure and with only a minimum of services and sources of information. These people have only scanty and imperfect access to the markets which, according to the prevailing model, should govern decisions and the (re)distribution of the factors of production. In our opinion, this explains the patchy and lower-than-expected growth registered by the sector after the reforms to promote the liberalization of markets and external openness in the region. In view of the results of the application of the new model, it may be wondered whether Latin America can afford a form of development which excludes over half of its agricultural producers; what the alternatives are; and what costs and benefits each of them offers in production, monetary, social, spatial and other terms. The article outlines the changes in policies and their results at the aggregate level, summarizes the arguments usually put forward to explain agricultural performance in the region, and proposes a second set of explanations based on a description of the agents and the responses that may be expected from them, contrasting the latter with the supposedly neutral nature of the policies.

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

Discussion about the optimal treatment for different subsets of patients suffering from coronary artery disease has recently re-emerged, mainly because of uncertainty among doctors and patients regarding the phenomenon of unpredictable early and late stent thrombosis. Surgical revascularization using multiple arterial bypass grafts has repeatedly proven its superiority over percutaneous intervention techniques, especially in patients suffering from left main stem disease and coronary three-vessel disease. Several prospective randomized multicenter studies comparing early and mid-term results following PCI and CABG have been highly restrictive with respect to patient enrollment, with less than 5% of all patients treated during the same period actually enrolled. Coronary artery bypass grafting allows the most complete revascularization in one session, because all target coronary vessels larger than 1 mm can be bypassed in their distal segments. Once the patient has been referred for surgery, surgeons have to consider the most complete arterial revascularization in order to decrease the long-term necessity for re-revascularization; for instance, the patency rate of the left internal thoracic artery grafted to the distal left anterior descending artery may be as high as 90-95% after 10 to 15 years. Early mortality following isolated CABG has been as low as 0.6 to 1% in the most recent period (reports from the University Hospital Berne and the University Hospital of Zurich). Besides these excellent results, the CABG option seems to be less expensive than PCI over time, since the necessity for additional PCI is rather high following initial PCI and the price of stent devices remains very high, particularly in Switzerland. Patients, insurers and health care experts should be better and more honestly informed about the risks and costs of PCI and CABG procedures, as well as about the much higher rate of subsequent interventions following PCI. A team approach for all patients in whom both options could be offered seems mandatory to avoid unbalanced information being given to patients. In view of recent developments in transcatheter valve treatments, the revival of joint cardiology-cardiac surgery conferences seems to be a good option to optimize cooperation between the two medical specialties: cardiology and cardiac surgery.