37 results for Trees loads
in CentAUR: Central Archive University of Reading - UK
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe. The implementation of these policies involves large expenditures, and it is reasonable for policymakers to ask what degree of certainty can be attached to the underlying critical load and exceedance estimates. This paper is a literature review of studies which attempt to estimate the uncertainty attached to critical loads. Critical load models and uncertainty analysis are briefly outlined. Most studies have used Monte Carlo analysis of some form to investigate the propagation of uncertainties in the definition of the input parameters through to uncertainties in critical loads. Though the input parameters are often poorly known, the critical load uncertainties are typically surprisingly small because of a "compensation of errors" mechanism. These results depend on the quality of the uncertainty estimates of the input parameters, and a "pedigree" classification for these is proposed. Sensitivity analysis shows that some input parameters are more important in influencing critical load uncertainty than others, but there have not been enough studies to form a general picture. Methods used for dealing with spatial variation are briefly discussed. Application of alternative models to the same site or modifications of existing models can lead to widely differing critical loads, indicating that research into the underlying science needs to continue.
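The "compensation of errors" mechanism can be illustrated with a minimal numerical sketch (hypothetical values, not taken from any of the reviewed studies): when a quantity is built from several independent uncertain inputs, their errors partly cancel, so the relative uncertainty of the result can be smaller than that of the inputs.

```python
import numpy as np

# Minimal illustration with hypothetical inputs: independent errors partly
# cancel when uncertain quantities are combined, so the coefficient of
# variation (CV) of the result is narrower than that of either input.
rng = np.random.default_rng(0)
n = 100_000

a = rng.normal(loc=100.0, scale=30.0, size=n)   # CV = 0.30
b = rng.normal(loc=150.0, scale=45.0, size=n)   # CV = 0.30
total = a + b                                   # e.g. a load built from two source terms

def cv(x):
    """Coefficient of variation: standard deviation / mean."""
    return x.std() / x.mean()

print(f"CV of a:     {cv(a):.2f}")
print(f"CV of b:     {cv(b):.2f}")
print(f"CV of a + b: {cv(total):.2f}")  # about 0.22, narrower than either input
```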
Abstract:
This paper reports an uncertainty analysis of critical loads for acid deposition for a site in southern England, using the Steady State Mass Balance Model. The uncertainty bounds, distribution type and correlation structure for each of the 18 input parameters were considered explicitly, and overall uncertainty was estimated by Monte Carlo methods. Estimates of deposition uncertainty were made from measured data and an atmospheric dispersion model, and hence the uncertainty in exceedance could also be calculated. The uncertainties of the calculated critical loads were generally much lower than those of the input parameters due to a "compensation of errors" mechanism - coefficients of variation ranged from 13% for CLmaxN to 37% for CL(A). With 1990 deposition, the probability that the critical load was exceeded was > 0.99; to reduce this probability to 0.50, a 63% reduction in deposition would be required; to reduce it to 0.05, an 82% reduction. With 1997 deposition, which was lower than that in 1990, exceedance probabilities declined and uncertainties in exceedance narrowed as deposition uncertainty had less effect. The parameters contributing most to the uncertainty in critical loads were weathering rates, base cation uptake rates, and the choice of critical chemical value, indicating possible research priorities. However, the different critical load parameters were to some extent sensitive to different input parameters. The application of such probabilistic results to environmental regulation is discussed.
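As a rough sketch of the Monte Carlo approach described above (the actual Steady State Mass Balance Model uses 18 input parameters; the expression, parameter names and values below are illustrative assumptions only), uncertain inputs are sampled, a critical load is computed for each sample, and an exceedance probability is obtained by comparing the sampled critical loads against an uncertain deposition estimate:

```python
import numpy as np

# Illustrative Monte Carlo propagation through a simplified mass-balance
# expression. The model form, parameter names, distributions and values are
# assumptions for demonstration, not those of the paper.
rng = np.random.default_rng(42)
n = 10_000

weathering   = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n)  # base cation weathering
bc_uptake    = rng.normal(loc=0.2, scale=0.05, size=n)             # base cation uptake
anc_leaching = rng.normal(loc=0.3, scale=0.10, size=n)             # critical ANC leaching

# Simplified critical load of acidity (illustrative form only).
cl_acidity = weathering - bc_uptake - anc_leaching

# Uncertain acid deposition estimate (illustrative).
deposition = rng.normal(loc=1.0, scale=0.2, size=n)

cv = cl_acidity.std() / cl_acidity.mean()
p_exceed = np.mean(deposition > cl_acidity)

print(f"CV of CL(A):           {cv:.2f}")
print(f"P(deposition > CL(A)): {p_exceed:.2f}")
```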
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters, and have to be applied on a spatially-distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions. Output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is, however, rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development. (c) 2007 Elsevier B.V. All rights reserved.
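One simple way to include the correlations between input variables mentioned above is to draw the Monte Carlo samples from a multivariate distribution whose covariance encodes the assumed correlation structure; the sketch below uses hypothetical means, spreads and a 0.6 correlation, not values from the paper:

```python
import numpy as np

# Correlated Monte Carlo sampling via a multivariate normal. Means, standard
# deviations and the correlation coefficient are hypothetical.
rng = np.random.default_rng(1)
n = 5_000  # the study used 5000 runs per site

means = np.array([0.8, 0.2])   # e.g. weathering rate, base cation uptake
sds   = np.array([0.24, 0.06])
corr  = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
cov = np.outer(sds, sds) * corr

samples = rng.multivariate_normal(means, cov, size=n)
weathering, uptake = samples[:, 0], samples[:, 1]

print("sample correlation:", round(np.corrcoef(weathering, uptake)[0, 1], 2))
```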
Abstract:
Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of both its size and its dimensionality, efficient clustering methods are needed. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
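A minimal sequential sketch of the kind of KD-Tree acceleration referred to above (not the paper's parallel formulation; here the tree is built over the centroids, and SciPy's cKDTree is assumed to be available):

```python
import numpy as np
from scipy.spatial import cKDTree

# Sequential sketch of KD-Tree-assisted K-Means: a tree over the current
# centroids answers nearest-centroid queries for all points, replacing the
# full point-by-centroid distance computation in the assignment step.
def kmeans_kdtree(points, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid for every point via the KD-Tree.
        _, labels = cKDTree(centroids).query(points)
        # Update step: each centroid moves to the mean of its assigned points.
        for j in range(k):
            members = points[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=(1000, 3))
    centres, labels = kmeans_kdtree(data, k=5)
    print(centres.shape, np.bincount(labels))
```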
Abstract:
The k-Means algorithm for cluster analysis is among the most influential and popular data mining methods. Techniques for improving the efficiency of k-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested. Two approaches are based on a static partitioning of the data set and a third solution incorporates a dynamic load balancing policy.
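The idea behind a static partitioning of the data set can be sketched as follows (plain NumPy standing in for a real distributed-memory runtime such as MPI, and not the authors' implementation): each worker holds one block of points, computes partial centroid sums and counts for its block, and the partial results are reduced into a global centroid update. Load imbalance arises when the per-block work differs between workers.

```python
import numpy as np

# Data-parallel k-Means step under a static partitioning of the data set.
# Each "worker" owns one block; in a real system the loop over blocks would
# run concurrently on separate nodes and the sums/counts would be combined
# with a collective reduction.
def parallel_kmeans_step(blocks, centroids):
    k, dim = centroids.shape
    sums = np.zeros((k, dim))
    counts = np.zeros(k, dtype=int)
    for block in blocks:
        dists = np.linalg.norm(block[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = block[labels == j]
            sums[j] += members.sum(axis=0)
            counts[j] += len(members)
    nonempty = counts > 0
    new_centroids = centroids.copy()
    new_centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return new_centroids

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(4000, 2))
    blocks = np.array_split(data, 4)  # static, equal-sized partitions
    centroids = data[rng.choice(len(data), 3, replace=False)]
    for _ in range(10):
        centroids = parallel_kmeans_step(blocks, centroids)
    print(centroids.round(2))
```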
Abstract:
In this paper we describe a lightweight Web portal developed for running computational jobs on an IBM JS21 BladeCenter cluster, ThamesBlue, for inferring and analyzing evolutionary histories. We first discuss the need for leveraging HPC as an enabler for molecular phylogenetics research. We go on to describe how the portal is designed to interface with existing open-source software that is typical of an HPC resource configuration, and how, by design, the portal is generic enough to be portable to other similarly configured compute clusters and to other applications.
Abstract:
Investigations were undertaken on the use of somatic embryogenesis to generate cocoa swollen shoot virus (CSSV) disease-free clonal propagules from infected trees. Polymerase chain reaction (PCR) capillary electrophoresis revealed the presence of CSSV in all the callus tissues induced from the CSSV-infected Amelonado cocoa trees (T1, T2 and T4). The virus was transmitted to primary somatic embryos induced from the infected callus tissues at the rate of 10 (19%), 18 (14%) and 16 (15%) for T1, T2 and T4, respectively. Virus-free primary somatic embryos from the infected callus tissues were converted into plantlets, which tested CSSV-negative by PCR/capillary electrophoresis 2 years after weaning. Secondary somatic embryos induced from the CSSV-infected primary somatic embryos revealed the presence of viral fragments at the rate of 4 (4%) and 9 (9%) for T2 and T4, respectively. Real-time PCR revealed that 23 of the 24 secondary somatic embryos contained no detectable virus. Based on these findings, it is proposed that progressive elimination of CSSV in infected cocoa trees occurred from primary embryogenesis to secondary embryogenesis. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Shoot dieback is a problem in frequently trimmed Leyland hedges and is increasingly affecting gardeners' choice of hedge trees, having a negative effect on the conifer nursery industry. Some damage can be attributed to feeding by aphids, but it is unclear whether there are also underlying physiological causes. In this study, we tested the hypothesis that shoot-clipping of conifer trees during adverse growing conditions (i.e. high air temperature and low soil moisture) could be leading to shoot 'dieback'. Three-year-old Golden Leyland Cypress (x Cupressocyparis leylandii 'Excalibur Gold') plants were subjected to either a well-watered or a droughted irrigation regime and placed in either a 'hot' (average day temperature = 40°C) or a 'cool' (average day temperature = 27°C) glasshouse compartment. Half of the plants from each glasshouse were clipped on Day 14 and again on Day 50. Measurements of soil moisture content (SMC), net CO2 assimilation rate (A), stomatal conductance (gs), branchlet xylem water potential (XWP), plant height and foliage colour were made. Within the clipped and unclipped treatments of both glasshouse compartments, plants from the droughted regime had significantly lower values for A, gs and XWP than those from the well-watered regime. However, there was no difference in these parameters between the hot and cool glasshouse compartments. The trends seen for A, gs and XWP of all treatments generally mirrored changes in SMC, indicating a direct effect of water supply on these parameters. By the end of the experiment the overall foliage colour of plants from the hot glasshouse was darker than that of plants from the cool glasshouse, and the overall foliage colour was also darker following shoot clipping. In general, shoot clipping led to increases in A, gs, XWP and SMC. This may be due to the reduction in total leaf area leading to a greater supply of water for the remaining leaves. No shoot 'dieback' was observed in any treatment in response to drought stress or shoot-clipping.
Abstract:
Cocoa farms that had been treated and replanted in Ghana during the most recent phase of the cocoa swollen shoot virus (CSSV) eradication campaign were surveyed. Replanting close to adjoining old cocoa farms, or the retention of old trees, was common, occurring in 38 of the 41 cocoa farms surveyed. CSSV infections were apparent in 20 (53%) of these 38 farms, posing a serious risk of early infection of the replanted farms. Control strategies that isolate the newly planted farms by a boundary of immune crops, as barriers to reduce CSSV re-infection, are discussed. (c) 2005 Elsevier Ltd. All rights reserved.