974 results for Mismatched uncertainties
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately addresses this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk of water resource stress will increase with climate change.
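The throughput gain quoted above is simple arithmetic; a back-of-envelope sketch using only the figures from the abstract (189 members, 750 h serial, 9 h on the Campus Grid):

```python
# Back-of-envelope throughput arithmetic for the 189-member ensemble,
# using the run times quoted in the abstract.
members = 189
serial_hours = 750.0   # one member after another on a single-processor PC
grid_hours = 9.0       # high-throughput run on the Campus Grid

hours_per_member = serial_hours / members   # roughly 4 h per simulation
speedup = serial_hours / grid_hours         # roughly 83-fold
```

The members are mutually independent, which is why the problem suits high-throughput (embarrassingly parallel) computing so well.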
Abstract:
At present, there is much anxiety regarding the security of energy supplies; for example, the UK and other European states are set to become increasingly dependent upon imports of natural gas from states with which political relations are often strained. These uncertainties are felt acutely by the electricity generating sector, which is facing major challenges regarding the choice of fuel mix in the years ahead. Nuclear energy may provide an alternative; however, in the UK, progress in replacing the first generation reactors is exceedingly slow. A number of operators are looking to coal as a means of plugging the energy gap. However, in the light of ever more stringent legal controls on emissions, this step cannot be taken without the adoption of sophisticated pollution abatement technology. This article examines the role which legal concepts such as Best Available Techniques (BAT) must play in bringing about these changes.
Abstract:
The International System of Units (SI) is founded on seven base units: the metre, kilogram, second, ampere, kelvin, mole and candela, corresponding to the seven base quantities of length, mass, time, electric current, thermodynamic temperature, amount of substance and luminous intensity. At its 94th meeting in October 2005, the International Committee for Weights and Measures (CIPM) adopted a recommendation on preparative steps towards redefining the kilogram, ampere, kelvin and mole so that these units are linked to exactly known values of fundamental constants. We propose here that these four base units should be given new definitions linking them to exactly defined values of the Planck constant h, elementary charge e, Boltzmann constant k and Avogadro constant NA, respectively. This would mean that six of the seven base units of the SI would be defined in terms of true invariants of nature. In addition, not only would these four fundamental constants have exactly defined values but also the uncertainties of many of the other fundamental constants of physics would be either eliminated or appreciably reduced. In this paper we present the background and discuss the merits of these proposed changes, and we also present possible wordings for the four new definitions. We also suggest a novel way to define the entire SI explicitly using such definitions without making any distinction between base units and derived units. We list a number of key points that should be addressed when the new definitions are adopted by the General Conference on Weights and Measures (CGPM), possibly by the 24th CGPM in 2011, and we discuss the implications of these changes for other aspects of metrology.
Abstract:
The kilogram, the base unit of mass in the International System of Units (SI), is defined as the mass m(K) of the international prototype of the kilogram. Clearly, this definition has the effect of fixing the value of m(K) to be one kilogram exactly. In this paper, we review the benefits that would accrue if the kilogram were redefined so as to fix the value of either the Planck constant h or the Avogadro constant NA instead of m(K), without waiting for the experiments to determine h or NA currently underway to reach their desired relative standard uncertainty of about 10−8. A significant reduction in the uncertainties of the SI values of many other fundamental constants would result from either of these new definitions, at the expense of making the mass m(K) of the international prototype a quantity whose value would have to be determined by experiment. However, by assigning a conventional value to m(K), the present highly precise worldwide uniformity of mass standards could still be retained. The advantages of redefining the kilogram immediately outweigh any apparent disadvantages, and we review the alternative forms that a new definition might take.
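As an illustration of what fixing h buys, a short sketch of the idea. The numerical values used are the exact ones eventually adopted in the 2019 SI revision, shown purely for illustration; they are not taken from this paper:

```python
# Sketch of the proposed redefinition: fix the Planck constant h exactly,
# so the kilogram is tied to invariants of nature via E = m c^2 = h * nu.
# Values below are the exact ones adopted in the 2019 SI revision.
h = 6.62607015e-34    # J s, exact by definition since 2019
c = 299_792_458.0     # m/s, exact by definition since 1983

# Compton frequency equivalent of one kilogram: nu = m c^2 / h
nu = 1.0 * c**2 / h   # about 1.356e50 Hz, exactly known once h is fixed
```

With h fixed, this frequency has zero uncertainty, and it is the mass of the prototype m(K) that instead becomes an experimentally determined quantity.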
Abstract:
There has been considerable discussion about the merits of redefining four of the base units of the SI, including the mole. In this paper, the options for implementing a new definition for the mole based on a fixed value for the Avogadro constant are discussed. They are placed in the context of the macroscopic nature of the quantity amount of substance and the opportunity to introduce a system for molar and atomic masses with unchanged values and consistent relative uncertainties.
Abstract:
We report here top-down emissions estimates for an African megacity. A boundary layer circumnavigation of Lagos, Nigeria was completed using the FAAM BAe146 aircraft as part of the AMMA project. These observations together with an inferred boundary layer height allow the flux of pollutants to be calculated. Extrapolation gives annual emissions for CO, NOx, and VOCs of 1.44 Tg yr−1, 0.03 Tg yr−1 and 0.37 Tg yr−1 respectively with uncertainties of +250/−60%. These inferred emissions are consistent with bottom-up estimates for other developing megacities and are attributed to the evaporation of fuels, mobile combustion and natural gas emissions.
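The annual figures are extrapolations from a snapshot flux; the unit conversion behind them can be sketched as follows (the mean-flux value in kg/s is derived here for illustration, not quoted in the abstract):

```python
# Unit conversion behind the annual extrapolation: a steady city-wide
# flux in kg/s maps to an annual total in Tg/yr.
SECONDS_PER_YEAR = 365.25 * 24 * 3600.0

def annual_Tg(flux_kg_per_s):
    """Extrapolate a steady flux (kg/s) to an annual total in Tg/yr."""
    return flux_kg_per_s * SECONDS_PER_YEAR / 1e9

# The reported 1.44 Tg/yr of CO corresponds to a mean flux of ~45.6 kg/s.
co_flux_kg_per_s = 1.44e9 / SECONDS_PER_YEAR
```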
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and -12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model at 4/3° resolution. Systematic differences can be seen between the observed CFC concentrations and prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ₀ < 27.82 kg m⁻³; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was near 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
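The inversion described above is linear in the surface-flux adjustments; a toy numpy sketch of the Green's-function idea, with hypothetical dimensions and a diagonal observation-error covariance:

```python
import numpy as np

# Toy illustration of a Green's-function inversion: the simulated CFC
# field is written as a linear combination G @ alpha of model responses
# to perturbations in air-sea gas transfer, and alpha is chosen to
# minimise the error-weighted misfit to observations.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 3))              # responses at 50 obs points to 3 flux patterns
alpha_true = np.array([1.0, 0.5, -0.2])   # "true" flux adjustments (synthetic)
obs = G @ alpha_true + rng.normal(scale=0.01, size=50)  # noisy observations
W = np.eye(50) / 0.01**2                  # inverse observation-error covariance

# Weighted least squares: alpha = (G^T W G)^-1 G^T W y
alpha = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)
```

In the real problem each column of G comes from a model run, and the residual obs − G @ alpha is what is inspected for the systematic model/data differences discussed above.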
Abstract:
The year 2000 radiative forcing (RF) due to changes in O3 and CH4 (and the CH4-induced stratospheric water vapour) as a result of emissions of short-lived gases (oxides of nitrogen (NOx), carbon monoxide and non-methane hydrocarbons) from three transport sectors (ROAD, maritime SHIPping and AIRcraft) is calculated using results from five global atmospheric chemistry models. Using results from these models plus other published data, we quantify the uncertainties. The RF due to short-term O3 changes (i.e. as an immediate response to the emissions without allowing for the long-term CH4 changes) is positive and highest for ROAD transport (31 mW m⁻²) compared to the SHIP (24 mW m⁻²) and AIR (17 mW m⁻²) sectors in four of the models. All five models calculate negative RF from the CH4 perturbations, with a larger impact from the SHIP sector than from ROAD and AIR. The net RF of O3 and CH4 combined (i.e. including the impact of CH4 on ozone and stratospheric water vapour) is positive for the ROAD (+16 (±13, one standard deviation) mW m⁻²) and AIR (+6 (±5) mW m⁻²) traffic sectors and negative for the SHIP (−18 (±10) mW m⁻²) sector in all five models. Global Warming Potentials (GWP) and Global Temperature change Potentials (GTP) are presented for AIR NOx emissions; there is a wide spread in the results from the five chemistry models, and it is shown that differences in the methane response relative to the O3 response drive much of the spread.
Abstract:
Process-based integrated modelling of weather and crop yield over large areas is becoming an important research topic. The production of the DEMETER ensemble hindcasts of weather allows this work to be carried out in a probabilistic framework. In this study, ensembles of crop yield (groundnut, Arachis hypogaea L.) were produced for ten 2.5° × 2.5° grid cells in western India using the DEMETER ensembles and the general large-area model (GLAM) for annual crops. Four key issues are addressed by this study. First, crop model calibration methods for use with weather ensemble data are assessed. Calibration using yield ensembles was more successful than calibration using reanalysis data (the European Centre for Medium-Range Weather Forecasts 40-yr reanalysis, ERA40). Second, the potential for probabilistic forecasting of crop failure is examined. The hindcasts show skill in the prediction of crop failure, with more severe failures being more predictable. Third, the use of yield ensemble means to predict interannual variability in crop yield is examined and their skill assessed relative to baseline simulations using ERA40. The accuracy of multi-model yield ensemble means is equal to or greater than the accuracy using ERA40. Fourth, the impact of two key uncertainties, sowing window and spatial scale, is briefly examined. The impact of uncertainty in the sowing window is greater with ERA40 than with the multi-model yield ensemble mean. Subgrid heterogeneity affects model accuracy: where correlations are low on the grid scale, they may be significantly positive on the subgrid scale. The implications of the results of this study for yield forecasting on seasonal time-scales are as follows. (i) There is the potential for probabilistic forecasting of crop failure (defined by a threshold yield value); forecasting of yield terciles shows less potential. (ii) Any improvement in the skill of climate models has the potential to translate into improved deterministic yield prediction. (iii) Whilst model input uncertainties are important, uncertainty in the sowing window may not require specific modelling. The implications of the results of this study for yield forecasting on multidecadal (climate change) time-scales are as follows. (i) The skill in the ensemble mean suggests that the perturbation, within uncertainty bounds, of crop and climate parameters could potentially average out some of the errors associated with mean yield prediction. (ii) For a given technology trend, decadal fluctuations in the yield-gap parameter used by GLAM may be relatively small, implying some predictability on those time-scales.
Abstract:
Reanalysis data provide an excellent test bed for impacts prediction systems, because they represent an upper limit on the skill of climate models. Indian groundnut (Arachis hypogaea L.) yields have been simulated using the General Large-Area Model (GLAM) for annual crops and the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA-40). The ability of ERA-40 to represent the Indian summer monsoon has been examined. The ability of GLAM, when driven with daily ERA-40 data, to model both observed yields and observed relationships between subseasonal weather and yield has been assessed. Mean yields were simulated well across much of India. Correlations between observed and modeled yields, where these are significant, are comparable to correlations between observed yields and ERA-40 rainfall. Uncertainties due to the input planting window, crop duration, and weather data have been examined. A reduction in the root-mean-square error of simulated yields was achieved by applying bias correction techniques to the precipitation. The stability of the relationship between weather and yield over time has been examined. Weather-yield correlations vary on decadal time scales, and this has direct implications for the accuracy of yield simulations. Analysis of the skewness of both detrended yields and precipitation suggests that nonclimatic factors are partly responsible for this nonstationarity. Evidence from other studies, including data on cereal and pulse yields, indicates that this result is not particular to groundnut yield. The detection and modeling of nonstationary weather-yield relationships emerges from this study as an important part of the process of understanding and predicting the impacts of climate variability and change on crop yields.
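The abstract does not specify which bias correction technique was applied to the precipitation; a minimal sketch of one common choice, multiplicative mean scaling:

```python
import numpy as np

# Minimal multiplicative bias correction: scale model precipitation so
# its mean matches observations over a calibration period. Illustrative
# only; the study's exact correction method is not given in the abstract.
def bias_correct(model_precip, obs_precip):
    scale = np.mean(obs_precip) / np.mean(model_precip)
    return model_precip * scale

obs = np.array([2.0, 0.0, 5.0, 1.0])   # observed daily rainfall, mm (hypothetical)
mod = np.array([1.0, 0.5, 2.5, 1.0])   # reanalysis rainfall, too dry here
corrected = bias_correct(mod, obs)     # mean now matches the observations
```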
Abstract:
Bovine tuberculosis (TB) is an important economic problem. The incidence of TB in cattle herds has steadily risen in the UK, and badgers are strongly implicated in spreading disease. Since the mid-1970s the UK government has adopted a number of badger culling strategies to attempt to reduce infection in cattle. In this report, an established model has been used to simulate TB in badgers, transmission to cattle, and control by badger culling. Costs were supplied by the UK Government's Department for Environment, Food and Rural Affairs (Defra) for badger trapping and gassing. Regardless of culling intensity or area simulated, an overall reduction in the herd breakdown rate was seen. With a high culling efficacy and no social perturbation, the mean Net Present Value of a few simulated culling strategies in an "ideal world" was positive, meaning the economic benefits outweighed the costs. Further work is required before these results could be considered definitive, as it is necessary to evaluate uncertainties and simulate less-than-perfect conditions.
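Net Present Value here is the standard discounted sum of costs and benefits; a sketch with hypothetical cash flows and discount rate (the study's actual Defra figures are not given in the abstract):

```python
# Standard Net Present Value: discounted sum of yearly net cash flows.
# Cash flows and discount rate below are hypothetical illustrations.
def npv(cashflows, rate=0.035):
    """cashflows[t] is the net benefit in year t (costs negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: up-front culling cost; years 1-5: net benefit from fewer
# herd breakdowns. A positive NPV means benefits outweigh costs.
flows = [-500_000.0] + [150_000.0] * 5
```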
Abstract:
The primary purpose of this study was to model the partitioning of evapotranspiration in a maize-sunflower intercrop at various canopy covers. The Shuttleworth-Wallace (SW) model was extended for intercropping systems to include both crop transpiration and soil evaporation and to allow interaction between the two. To test the accuracy of the extended SW model, two field experiments on a maize-sunflower intercrop were conducted in 1998 and 1999. Plant transpiration and soil evaporation were measured using sap flow gauges and lysimeters, respectively. The mean prediction error (simulated minus measured values) for transpiration was zero (indicating no overall bias in estimation error), and its accuracy was not affected by plant growth stage, but simulated transpiration during periods of high measured transpiration rates tended to be slightly underestimated. Overall, the predictions for daily soil evaporation were also accurate. Model estimation errors were probably due to the simplified modelling of soil water content, stomatal resistances and soil heat flux, as well as to uncertainties in characterising the micrometeorological conditions. The SW model's prediction of transpiration was most sensitive to parameters most directly related to canopy characteristics, such as the partitioning of captured solar radiation, canopy resistance, and bulk boundary layer resistance.
Abstract:
Gas-phase electron diffraction (GED) data together with results from ab initio molecular orbital calculations (HF and MP2/6-311+G(d,p)) have been used to determine the structure of hexamethyldigermane ((CH3)3Ge–Ge(CH3)3). The equilibrium symmetry is D3d, but the molecule has a very low-frequency, large-amplitude torsional mode (φCGeGeC) that lowers the thermal average symmetry. The effect of this large-amplitude mode on the interatomic distances was described by a dynamic model which consisted of a set of pseudoconformers spaced at even intervals. The amount of each pseudoconformer was obtained from the ab initio calculations (HF/6-311+G(d,p)). The results for the principal distances (ra) and angles (∠h1) obtained from the combined GED/ab initio analysis (with estimated 1σ uncertainties) are r(Ge–Ge) = 2.417(2) Å, r(Ge–C) = 1.956(1) Å, r(C–H) = 1.097(5) Å, ∠GeGeC = 110.5(2)°, and ∠GeCH = 108.8(6)°. Theoretical calculations were also performed for the related molecules (CH3)3Si–Si(CH3)3 and (CH3)3C–C(CH3)3.
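The pseudoconformer amounts were taken from ab initio energies; assuming they enter as Boltzmann factors at the experimental temperature (a common treatment, not confirmed by the abstract), the weighting can be sketched with a hypothetical cosine torsional potential:

```python
import math

# Sketch of a dynamic (pseudoconformer) torsional model: sample the
# torsional potential at evenly spaced angles and weight each
# pseudoconformer by its Boltzmann factor. The cosine potential and
# barrier height are hypothetical stand-ins for the ab initio curve.
R = 8.314462618e-3   # gas constant, kJ mol^-1 K^-1
T = 298.15           # K

def weights(barrier_kJ=1.0, n=12):
    # phi from a staggered minimum (0 deg) to the adjacent eclipsed
    # maximum (60 deg) of a threefold rotor.
    phis = [i * 60.0 / (n - 1) for i in range(n)]
    V = [0.5 * barrier_kJ * (1.0 - math.cos(math.radians(3.0 * p))) for p in phis]
    boltz = [math.exp(-v / (R * T)) for v in V]
    z = sum(boltz)
    return [b / z for b in boltz]
```

The GED refinement then fits a thermally averaged radial distribution built from these weighted pseudoconformers.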
Abstract:
Objective: To determine the risk of lung cancer associated with exposure at home to the radioactive disintegration products of naturally occurring radon gas. Design: Collaborative analysis of individual data from 13 case-control studies of residential radon and lung cancer. Setting: Nine European countries. Subjects: 7148 cases of lung cancer and 14 208 controls. Main outcome measures: Relative risks of lung cancer and radon gas concentrations in homes inhabited during the previous 5-34 years, measured in becquerels (radon disintegrations per second) per cubic metre (Bq/m³) of household air. Results: The mean measured radon concentration in homes of people in the control group was 97 Bq/m³, with 11% measuring > 200 and 4% measuring > 400 Bq/m³. For cases of lung cancer the mean concentration was 104 Bq/m³. The risk of lung cancer increased by 8.4% (95% confidence interval 3.0% to 15.8%) per 100 Bq/m³ increase in measured radon (P = 0.0007). This corresponds to an increase of 16% (5% to 31%) per 100 Bq/m³ increase in usual radon; that is, after correction for the dilution caused by random uncertainties in measuring radon concentrations. The dose-response relation seemed to be linear with no threshold and remained significant (P = 0.04) in analyses limited to individuals from homes with measured radon < 200 Bq/m³. The proportionate excess risk did not differ significantly with study, age, sex, or smoking. In the absence of other causes of death, the absolute risks of lung cancer by age 75 years at usual radon concentrations of 0, 100, and 400 Bq/m³ would be about 0.4%, 0.5%, and 0.7%, respectively, for lifelong non-smokers, and about 25 times greater (10%, 12%, and 16%) for cigarette smokers. Conclusions: Collectively, though not separately, these studies show appreciable hazards from residential radon, particularly for smokers and recent ex-smokers, and indicate that it is responsible for about 2% of all deaths from cancer in Europe.
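The reported absolute risks follow from the linear no-threshold relation; a sketch reproducing them from the 16%-per-100-Bq/m³ usual-radon slope and the 0.4% non-smoker baseline quoted above:

```python
# Linear no-threshold dose-response from the pooled analysis: +16% lung
# cancer risk per 100 Bq/m^3 of usual radon (8.4% per 100 Bq/m^3 of
# measured radon, before correcting for measurement dilution).
def relative_risk(usual_radon_bq_m3, slope_per_100=0.16):
    return 1.0 + slope_per_100 * usual_radon_bq_m3 / 100.0

# Absolute risk by age 75 for lifelong non-smokers, anchored to the
# reported 0.4% at zero radon:
baseline = 0.004
risks = {c: baseline * relative_risk(c) for c in (0, 100, 400)}
# rounds to roughly 0.4%, 0.5% and 0.7%, as reported
```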