173 results for Load-increment sensitivity
Abstract:
The Atlantic meridional overturning circulation (AMOC) is an important component of the climate system. Models indicate that the AMOC can be perturbed by freshwater forcing in the North Atlantic. Using an ocean-atmosphere general circulation model, we investigate the dependence of such a perturbation of the AMOC, and the consequent climate change, on the region of freshwater forcing. A wide range of changes in AMOC strength is found after 100 years of freshwater forcing. The largest changes in AMOC strength occur when the regions of deepwater formation in the model are forced directly, although reductions in deepwater formation in one area may be compensated by enhanced formation elsewhere. North Atlantic average surface air temperatures correlate linearly with the AMOC decline, but warming may occur in localised regions, notably over Greenland and where deepwater formation is enhanced. This brings into question the representativeness of temperature changes inferred from Greenland ice-core records.
Abstract:
A high-resolution regional atmosphere model is used to investigate the sensitivity of the North Atlantic storm track to the spatial and temporal resolution of the sea surface temperature (SST) data used as a lower boundary condition. The model is run over an unusually large domain covering all of the North Atlantic and Europe, and is shown to produce a very good simulation of the observed storm track structure. The model is forced at the lateral boundaries with 15–20 years of data from the ERA-40 reanalysis, and at the lower boundary by SST data of differing resolution. The impacts of increasing spatial and temporal resolution are assessed separately, and in both cases increasing the resolution leads to subtle but significant changes in the storm track. In some, but not all, cases these changes act to reduce the small storm track biases seen in the model when it is forced with low-resolution SSTs. In addition there are several clear mesoscale responses to increased spatial SST resolution, with surface heat fluxes and convective precipitation increasing by 10–20% along the Gulf Stream SST gradient.
Abstract:
The study presents findings relating to the commercial growing of genetically modified Bt cotton in South Africa by a large sample of smallholder farmers over three seasons (1998/99, 1999/2000, 2000/01) following adoption. The analysis constructs and compares groupwise differences for key variables of Bt v. non-Bt technology and uses regressions to further analyse the production and profit impacts of Bt adoption. An analysis of the distribution of benefits between farmers due to the technology is also presented. In parallel with these socio-economic measures, the toxic loads presented to the environment following the introduction of Bt cotton are monitored in terms of insecticide active ingredient (ai) and the Biocide Index; the latter adjusts ai to allow for the differing persistence and toxicity of insecticides. Results show substantial and significant financial benefits to smallholder cotton growers from adopting Bt cotton over the three seasons, in terms of increased yields, lower insecticide spray costs and higher gross margins, including in one particularly wet, poor growing season. In addition, those with smaller holdings appeared to benefit proportionately more from the technology (in terms of higher gross margins) than those with larger holdings. Analysis using the Gini coefficient suggests that the Bt technology has helped to reduce inequality amongst smallholder cotton growers in Makhathini compared with what the position might have been had they grown conventional cotton. However, while Bt growers applied lower amounts of insecticide and had lower Biocide Indices (per ha) than growers of non-Bt cotton, some of this advantage was due to a reduction in non-bollworm insecticide; indeed, the Biocide Index for all farmers in the population actually increased with the introduction of Bt cotton. The results indicate the complexity of such studies on the socio-economic and environmental impacts of GM varieties in the developing world.
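The inequality analysis above rests on the Gini coefficient, a standard measure computable directly from the growers' gross margins. A minimal sketch of such a computation follows; the function uses the usual closed form over sorted values, and the sample margins are purely illustrative, not data from the study:

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values:
    0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Closed form: G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Hypothetical gross margins (per ha) for two grower groups:
bt_margins = [900, 1100, 1250, 1300, 1500]
non_bt_margins = [200, 400, 900, 1600, 2400]
print(gini(bt_margins), gini(non_bt_margins))  # lower value = more equal
```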
Abstract:
The aim of this work is to study the hydrochemical variations during flood events in the Rio Tinto, SW Spain. Three separate rainfall/flood events were monitored in October 2004 following the dry season. In general, concentrations markedly increased following the first event (Fe from 99 to 1130 mg/L; Qmax = 0.78 m³/s), while dissolved loads peaked in the second event (Fe = 7.5 kg/s, Cu = 0.83 kg/s, Zn = 0.82 kg/s; Qmax = 77 m³/s) and discharge in the third event (Qmax = 127 m³/s). This pattern reflects a progressive depletion of metals and sulphate stored during the dry summer as soluble evaporitic salt minerals and concentrated pore fluids, with dilution by freshwater becoming increasingly dominant as the month progressed. Variations in relative concentrations were attributed to oxyhydroxysulphate Fe precipitation, to relative changes in the sources of acid mine drainage (e.g. salt minerals, mine tunnels, spoil heaps) and to differences in the rainfall distribution along the catchment. The contaminant load carried by the river during October 2004 was enormous, totalling some 770 t of Fe, 420 t of Al, 100 t of Cu, 100 t of Zn and 71 t of Mn; this represents the largest recorded example of this flush-out process in an acid mine drainage setting. Approximately 1000 times more water and 140–8200 times more dissolved elements were carried by the river during October 2004 than during the dry, low-flow conditions of September 2004, highlighting the key role of flood events in the annual pollutant transport budget of semi-arid and arid systems, and the need to monitor these events in detail in order to quantify pollutant transport accurately.
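The dissolved loads quoted above follow from the product of concentration and discharge: since 1 mg/L equals 1 g/m³, a concentration in mg/L multiplied by a discharge in m³/s gives a load in g/s. A minimal sketch of that arithmetic (the Fe concentration used below is an illustrative back-calculation, not a value reported in the abstract):

```python
def dissolved_load_kg_per_s(conc_mg_per_L, discharge_m3_per_s):
    """Instantaneous dissolved load: 1 mg/L == 1 g/m3, so
    (mg/L) * (m3/s) = g/s; divide by 1000 to get kg/s."""
    return conc_mg_per_L * discharge_m3_per_s / 1000.0

# Example: an Fe concentration of ~97 mg/L at the second event's peak
# discharge (Qmax = 77 m3/s) reproduces the reported ~7.5 kg/s.
print(dissolved_load_kg_per_s(97.0, 77.0))  # ~7.5 kg/s
```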
Abstract:
Synthetic aperture radar (SAR) data have proved useful in remote sensing studies of deserts, enabling different surfaces to be discriminated by differences in roughness properties. Roughness is characterized in SAR backscatter models using the standard deviation of surface heights (σ), correlation length (L) and autocorrelation function (ρ(ξ)). Previous research has suggested that these parameters are of limited use for characterizing surface roughness, and are often unreliable due to the collection of too few roughness profiles, or under-sampling in terms of resolution or profile length (Lp). This paper reports on work aimed at establishing the effects of Lp and sampling resolution on SAR backscatter estimations and site discrimination. Results indicate significant relationships between the average roughness parameters and Lp, but large variability in roughness parameters prevents any clear understanding of these relationships. Integral equation model simulations demonstrate limited change with Lp and under-estimate backscatter relative to SAR observations. However, modelled and observed backscatter conform in pattern and magnitude for C-band systems but not for L-band data. Variation in surface roughness alone does not explain variability in site discrimination. Other factors (possibly sub-surface scattering) appear to play a significant role in controlling backscatter characteristics at lower frequencies.
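The three roughness parameters are estimated from digitised height profiles. A minimal numpy sketch under common conventions (σ as the standard deviation of detrended heights, L as the first lag at which ρ falls below 1/e); exact definitions vary between studies, so treat this as one plausible reading rather than the paper's method:

```python
import numpy as np

def roughness_parameters(z, dx):
    """Estimate sigma, correlation length L and the autocorrelation
    function rho from a surface-height profile z sampled at spacing dx."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                     # detrend (remove mean height)
    sigma = z.std()
    n = z.size
    # Normalised autocorrelation for lags 0..n-1 (rho[0] == 1)
    rho = np.correlate(z, z, mode="full")[n - 1:] / (z @ z)
    # Correlation length: first lag where rho drops below 1/e
    below = np.where(rho < 1.0 / np.e)[0]
    corr_length = below[0] * dx if below.size else np.nan
    return sigma, corr_length, rho
```

Because the estimates depend on profile length and sampling interval, rerunning this on truncated or resampled profiles is a direct way to reproduce the Lp-sensitivity question the abstract raises.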
Abstract:
A new dynamic model of water quality, Q2, has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q2 for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, it is more important to obtain high-quality forcing functions than to obtain improved parameter estimates once approximate values have been estimated. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus on DO alone increases predictive uncertainty in the DO simulations. The Q2 model has been applied here to the River Thames, but it has broad utility for evaluating other systems in Europe and around the world.
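Generalized sensitivity analysis of this kind classifies Monte Carlo parameter sets as behavioural or non-behavioural against the observations and then compares, parameter by parameter, the two marginal distributions: a parameter is influential when the distributions differ. A minimal sketch of that workflow; the model function, parameter bounds and acceptance rule below are placeholders, not Q2 itself:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def run_model(theta):
    """Placeholder for a water quality simulation returning an
    error statistic for, e.g., the DO series (smaller = better)."""
    return abs(theta[0] - 0.3) + 0.1 * rng.random()

n_samples, n_params = 5000, 4
lower, upper = np.zeros(n_params), np.ones(n_params)  # illustrative bounds
thetas = rng.uniform(lower, upper, size=(n_samples, n_params))
errors = np.array([run_model(t) for t in thetas])

# Illustrative acceptance rule: best 20% of runs are "behavioural"
behavioural = errors < np.quantile(errors, 0.2)
for j in range(n_params):
    d, p = ks_2samp(thetas[behavioural, j], thetas[~behavioural, j])
    print(f"parameter {j}: KS distance = {d:.3f} (large => influential)")
```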
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes and rivers. A new deterministic mathematical model was developed which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A technique called generalised sensitivity analysis was applied to the model to identify the critical parameter uncertainties and to investigate the interactions between the chosen parameters. The results of the analysis suggested that 8 of the 12 parameters were significant in obtaining the observed cyanobacterial behaviour in a simulation. It was also found that there was a high degree of correlation between the half-saturation rate constants used in the model.
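The half-saturation constants mentioned above suggest Monod-type limitation terms; in many phytoplankton models the specific growth rate is a maximum rate scaled by multiplicative light, nutrient and temperature factors. A minimal sketch of that common form, with all parameter values illustrative rather than taken from the paper:

```python
def growth_rate(mu_max, I, P, T, K_I, K_P, theta=1.08, T_ref=20.0):
    """Multiplicative growth limitation, a common form in
    phytoplankton models: Monod terms for light (I) and a
    nutrient (P), with an Arrhenius-style temperature factor."""
    f_light = I / (K_I + I)        # Monod light limitation
    f_nutrient = P / (K_P + P)     # Monod nutrient limitation
    f_temp = theta ** (T - T_ref)  # temperature correction
    return mu_max * f_light * f_nutrient * f_temp

# Illustrative values: mu_max = 1.2/day, I = 150 W/m2, P = 0.02 mg/L
print(growth_rate(1.2, I=150, P=0.02, T=24, K_I=80, K_P=0.03))
```

The correlation between half-saturation constants reported in the abstract is intuitive in this form: K_I and K_P enter the growth rate through interchangeable saturating factors, so different combinations can yield near-identical simulated behaviour.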
Abstract:
Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and on individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments; further analysis of sub-daily records is needed to investigate this fully. Recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and on the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies.
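The decimation experiment can be pictured as: treat the daily concentration-flow record as truth, subsample it at the chosen frequency, form an annual load estimate from the subsample, and score the error. A minimal sketch using a flow-weighted ratio estimator, which is one of several common choices and not necessarily the estimator used in the paper; the synthetic data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)
days = 365
flow = rng.lognormal(mean=1.0, sigma=0.8, size=days)   # m3/s, synthetic
conc = 0.05 + 0.02 * rng.random(days) * flow**0.3      # mg P/L, synthetic

# mg/L * m3/s = g/s; * 86400 s/day / 1000 g/kg => * 86.4 gives kg/day
daily_load = conc * flow * 86.4
true_annual = daily_load.sum()                         # "truth", kg/yr

def ratio_estimate(idx):
    """Flow-weighted ratio estimator: mean sampled load, scaled by
    the ratio of mean continuous flow to mean sampled flow."""
    sampled = conc[idx] * flow[idx] * 86.4
    return sampled.mean() * days * flow.mean() / flow[idx].mean()

for name, idx in [("monthly", np.arange(15, days, 30)),
                  ("weekly", np.arange(3, days, 7))]:
    est = ratio_estimate(idx)
    print(f"{name}: {est:.0f} kg vs true {true_annual:.0f} kg")
```

Repeating this over many random sampling offsets gives the RMSE per sampling frequency, which is the quantity the abstract compares across catchment types.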
Abstract:
There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent in this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow, and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters (as opposed to calibrating all parameters together), was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. They also demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so.
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined.
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe. The implementation of these policies involves large expenditures, and it is reasonable for policymakers to ask what degree of certainty can be attached to the underlying critical load and exceedance estimates. This paper is a literature review of studies which attempt to estimate the uncertainty attached to critical loads. Critical load models and uncertainty analysis are briefly outlined. Most studies have used Monte Carlo analysis of some form to investigate the propagation of uncertainties in the definition of the input parameters through to uncertainties in critical loads. Though the input parameters are often poorly known, the critical load uncertainties are typically surprisingly small because of a "compensation of errors" mechanism. These results depend on the quality of the uncertainty estimates of the input parameters, and a "pedigree" classification for these is proposed. Sensitivity analysis shows that some input parameters are more important in influencing critical load uncertainty than others, but there have not been enough studies to form a general picture. Methods used for dealing with spatial variation are briefly discussed. Application of alternative models to the same site or modifications of existing models can lead to widely differing critical loads, indicating that research into the underlying science needs to continue.
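One plausible reading of the "compensation of errors" mechanism is that terms entering the critical load with opposite signs, but derived from shared site attributes and hence correlated, partly cancel each other's errors under Monte Carlo propagation. A minimal sketch with a generic mass-balance form; the formula, distributions and correlation structure are illustrative assumptions, not taken from any reviewed study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Generic illustrative mass balance: CL = weathering - critical leaching
bc_w = rng.normal(800, 200, n)                 # base-cation weathering

# Case 1: leaching term independent of weathering
anc_ind = rng.normal(300, 150, n)
cl_ind = bc_w - anc_ind

# Case 2: leaching term positively correlated with weathering
# (as when both are estimated from the same soil/site attributes)
anc_cor = 0.375 * bc_w + rng.normal(0, 50, n)  # same mean of ~300
cl_cor = bc_w - anc_cor

for name, cl in [("independent", cl_ind), ("correlated", cl_cor)]:
    print(f"{name}: mean = {cl.mean():.0f}, sd = {cl.std():.0f}")
# The correlated case shows a markedly smaller sd: errors in the two
# terms partly cancel -- a "compensation of errors" effect.
```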
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
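Dynamic partitioning of an irregular search tree is typically realised by letting a busy worker split off unexplored branches of its DFS frontier as self-contained jobs for idle peers. A minimal single-process sketch of that splitting logic; the class and heuristic below are illustrative, not the authors' implementation:

```python
from collections import deque

class Worker:
    """Sketch: DFS over a subgraph-extension search tree whose
    unexplored branches can be donated to an idle peer."""

    def __init__(self, root_jobs):
        self.stack = deque(root_jobs)   # each entry = an unexplored subtree

    def split_work(self):
        """Donate roughly half of the oldest entries. Entries near the
        front sit closest to the tree root, so they tend to carry the
        most remaining work -- a common splitting heuristic."""
        return [self.stack.popleft()
                for _ in range(len(self.stack) // 2)]

    def step(self, expand):
        """Expand one node; its children become new unexplored branches."""
        if self.stack:
            node = self.stack.pop()
            self.stack.extend(expand(node))
```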
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, for which no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware and is suitable for large-scale, multi-domain, heterogeneous environments, such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages by adopting distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
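Receiver-initiated balancing with randomized polling can be pictured as: a worker whose local job pool empties picks peers uniformly at random and asks each for work until one donates. A minimal sketch of that polling loop; the peer interface (request_work) is a hypothetical placeholder, not the paper's framework API:

```python
import random

def acquire_work(peers, max_polls=10):
    """Receiver-initiated randomized polling: an idle worker asks
    randomly chosen peers for work until one donates or the poll
    budget is exhausted. Each peer is assumed to expose
    request_work(), returning a (possibly empty) list of jobs."""
    for _ in range(max_polls):
        peer = random.choice(peers)
        jobs = peer.request_work()   # hypothetical peer interface
        if jobs:
            return jobs
    return []  # give up for now; retry later or run a termination check
```

Randomizing the polling target avoids the hot spots and global synchronization that centralized, statistics-based policies incur, which is the scalability trade-off the abstract describes.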