82 results for sequential niche technique
in CentAUR: Central Archive University of Reading - UK
Abstract:
We present some additions to a fuzzy variable radius niche technique called Dynamic Niche Clustering (DNC) (Gan and Warwick, 1999; 2000; 2001) that enable the identification and creation of niches of arbitrary shape through a mechanism called Niche Linkage. We show that by using this mechanism it is possible to attain better feature extraction from the underlying population.
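The abstract does not spell out how Niche Linkage works; as a rough sketch of the general idea (the overlap criterion, names and data structures below are assumptions, not the published DNC algorithm), overlapping niches can be joined into connected components, so that a chain of linked niches covers a peak of arbitrary shape:

```python
import itertools
import math

def link_niches(niches):
    """Group niches into connected components ("linked" niches).

    niches: list of (centre, radius) tuples in a 1-D decoded parameter space.
    Two niches are linked here when their radii overlap; chains of such links
    form a component that can trace a peak of arbitrary shape.
    (Illustrative criterion only -- not the published DNC rule.)
    """
    parent = list(range(len(niches)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for (i, (c1, r1)), (j, (c2, r2)) in itertools.combinations(enumerate(niches), 2):
        if math.dist([c1], [c2]) < r1 + r2:      # niches overlap -> link them
            parent[find(i)] = find(j)

    groups = {}
    for i in range(len(niches)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(link_niches([(0.0, 0.3), (0.5, 0.3), (2.0, 0.2)]))  # -> [[0, 1], [2]]
```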
Abstract:
In this paper, a continuation of the variable radius niche technique called Dynamic Niche Clustering, developed by Gan and Warwick (1999), is presented. The technique employs a separate dynamic population of overlapping niches that coexists alongside the normal population. An empirical analysis of the updated methodology on a large group of standard optimisation test-bed functions is also given. The technique is shown to perform almost as well as standard fitness sharing with regard to stability and the accuracy of peak identification, but to outperform standard fitness sharing with regard to time complexity. It is also shown that the technique is capable of forming niches of varying size depending on the characteristics of the underlying peak that the niche is populating.
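For context, here is a minimal sketch of the standard fitness-sharing baseline that DNC is compared against (not DNC itself); the sharing radius sigma_share, the power alpha and the 1-D distance are generic parameters of the classic sharing scheme, chosen here purely for illustration:

```python
def shared_fitness(raw_fitness, positions, sigma_share=0.1, alpha=1.0):
    """Standard fitness sharing (the O(n^2) baseline DNC is compared against).

    Each individual's raw fitness is divided by its niche count: the sum of a
    triangular sharing kernel over all individuals within sigma_share of it.
    """
    shared = []
    for i, fi in enumerate(raw_fitness):
        niche_count = 0.0
        for j, _ in enumerate(raw_fitness):
            d = abs(positions[i] - positions[j])          # 1-D genotype distance
            if d < sigma_share:
                niche_count += 1.0 - (d / sigma_share) ** alpha
        shared.append(fi / niche_count)                   # niche_count >= 1 (self term)
    return shared

# Two individuals crowded on one peak are down-weighted relative to a loner.
print(shared_fitness([1.0, 1.0, 1.0], [0.00, 0.02, 0.90]))
```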
Abstract:
This paper describes the recent developments and improvements made to the variable radius niching technique called Dynamic Niche Clustering (DNC). DNC is a fitness-sharing-based technique that employs a separate population of overlapping fuzzy niches with independent radii, which operate in the decoded parameter space and are maintained alongside the normal GA population. We describe a speedup process that can be applied to the initial generation, which greatly reduces the complexity of the initial stages. A split operator is also introduced that is designed to counteract the excessive growth of niches, and it is shown that this improves the overall robustness of the technique. Finally, the effect of local elitism is documented and compared to the performance of the basic DNC technique on a selection of 2D test functions. The paper concludes with a view to future work to be undertaken on the technique.
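The split criterion is not detailed in this abstract; the following is a hedged illustration of the kind of operation a split operator performs, assuming a niche is split about its centre once its radius exceeds a threshold (the threshold, the member representation and the helper maybe_split are hypothetical, not the published operator):

```python
import statistics

def maybe_split(niche, members, max_radius=0.25):
    """Illustrative niche-split step (assumed criterion, not the published operator).

    niche:   (centre, radius) in decoded parameter space
    members: 1-D positions of individuals currently covered by the niche
    If the niche has grown beyond max_radius, replace it with two smaller
    niches centred on the members either side of the old centre.
    """
    centre, radius = niche
    if radius <= max_radius or len(members) < 2:
        return [niche]

    left = [m for m in members if m <= centre]
    right = [m for m in members if m > centre]
    new = []
    for part in (left, right):
        if part:
            c = statistics.mean(part)
            r = max(abs(m - c) for m in part) or radius / 2
            new.append((c, r))
    return new

print(maybe_split((0.5, 0.6), [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]))
```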
Abstract:
A significant challenge in the prediction of climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify the uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After covarying out other factors, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and of nitrogen and sulphur deposition, and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m2 unit area) but also highly uncertain. The peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.
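As a rough illustration of fitting the two kinds of niche model (a sketch only: the data are synthetic, the variable names are hypothetical, and the code simplifies the GLMM to a Gaussian mixed model via statsmodels and the GAMM to a pygam additive model without random effects):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from pygam import LinearGAM, s

# Hypothetical data: Sphagnum cover with climate/deposition covariates by site.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cover": rng.uniform(0, 1, 200),
    "temperature": rng.normal(8, 2, 200),
    "n_deposition": rng.normal(15, 5, 200),
    "site": rng.integers(0, 10, 200),
})

# Mixed-effects linear model with a random intercept per site
# (a Gaussian simplification of the paper's GLMM).
glmm_like = smf.mixedlm("cover ~ temperature + n_deposition",
                        df, groups=df["site"]).fit()
print(glmm_like.params)

# Additive model with smooth terms (a simplification of the GAMM:
# smooths included, random effects omitted).
X = df[["temperature", "n_deposition"]].to_numpy()
gam = LinearGAM(s(0) + s(1)).fit(X, df["cover"].to_numpy())
gam.summary()
```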
Abstract:
This paper describes a method for dynamic data reconciliation of nonlinear systems that are simulated using the sequential modular approach, and where individual modules are represented by a class of differential algebraic equations. The estimation technique consists of a bank of extended Kalman filters that are integrated with the modules. The paper reports a study based on experimental data obtained from a pilot scale mixing process.
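As a hedged sketch of one element of such a filter bank, the following shows a single extended-Kalman-filter predict/update cycle for one module; the toy mixing-tank model, function names and noise covariances are assumptions, not the paper's formulation:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended-Kalman-filter predict/update cycle for a single module.

    x, P : state estimate and covariance from the previous step
    u, z : module input and noisy measurement at this step
    f, h : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    (Illustrative only; the paper runs a bank of such filters, one per module.)
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy mixing-tank module: concentration relaxing towards the inlet value.
dt = 0.1
f = lambda x, u: x + dt * 0.5 * (u - x)
h = lambda x: x                                   # concentration measured directly
F_jac = lambda x, u: np.array([[1 - 0.5 * dt]])
H_jac = lambda x: np.array([[1.0]])

x, P = np.array([0.0]), np.eye(1)
for z in [0.12, 0.25, 0.33]:                      # synthetic measurements
    x, P = ekf_step(x, P, u=1.0, z=np.array([z]), f=f, h=h,
                    F_jac=F_jac, H_jac=H_jac, Q=0.01 * np.eye(1), R=0.05 * np.eye(1))
print(x)
```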
Abstract:
Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate from a conventional nutrient–phytoplankton–zooplankton (NPZ) model the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementation of a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
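A minimal sketch of the idea of estimating a time-varying specific growth rate sequentially (the NPZ parameterisation and the simple nudging update below are illustrative assumptions, not the assimilation scheme used in the paper):

```python
def npz_step(N, P, Z, mu, dt=1.0, m=0.1, g=0.4, k=0.5):
    """One Euler step of a minimal NPZ model (illustrative parameterisation)."""
    uptake = mu * N / (k + N) * P            # nutrient-limited phytoplankton growth
    grazing = g * P * Z
    dN = -uptake + m * P + 0.3 * grazing
    dP = uptake - m * P - grazing
    dZ = 0.7 * grazing - 0.05 * Z
    return N + dt * dN, P + dt * dP, Z + dt * dZ

def assimilate(P_obs_series, N=4.0, P=0.5, Z=0.2, mu=0.8, gain=0.2):
    """Sequentially adjust the specific growth rate mu after each observation.

    A simple nudging update (not the paper's scheme): mu is increased when the
    model underestimates observed phytoplankton and decreased when it overshoots.
    """
    mu_history = []
    for P_obs in P_obs_series:
        N, P, Z = npz_step(N, P, Z, mu)
        mu = max(0.05, mu + gain * (P_obs - P))   # time-varying parameter estimate
        mu_history.append(mu)
    return mu_history

print(assimilate([0.55, 0.60, 0.62, 0.58, 0.50]))
```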
Abstract:
Recruitment of patients to a clinical trial usually occurs over a period of time, resulting in the steady accumulation of data throughout the trial's duration. Yet, according to traditional statistical methods, the sample size of the trial should be determined in advance, and data collected on all subjects before analysis proceeds. For ethical and economic reasons, the technique of sequential testing has been developed to enable the examination of data at a series of interim analyses. The aim is to stop recruitment to the study as soon as there is sufficient evidence to reach a firm conclusion. In this paper we present the advantages and disadvantages of conducting interim analyses in phase III clinical trials, together with the key steps to enable the successful implementation of sequential methods in this setting. Examples are given of completed trials, which have been carried out sequentially, and references to relevant literature and software are provided.
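A minimal sketch of how an interim-analysis stopping rule can be checked, assuming a constant Pocock-style boundary for three equally spaced looks (a real sequential trial would derive its boundaries from an alpha-spending function using dedicated software):

```python
from statistics import NormalDist

def interim_decisions(z_statistics, pocock_critical=2.289):
    """Check each interim analysis against a constant (Pocock-style) boundary.

    z_statistics:     treatment-effect z-statistic at each planned look
    pocock_critical:  tabulated two-sided value for 3 equally spaced looks at an
                      overall alpha of 0.05; in practice boundaries come from an
                      alpha-spending function with dedicated software.
    Returns the look at which recruitment stops for efficacy, or None.
    """
    for look, z in enumerate(z_statistics, start=1):
        nominal_p = 2 * (1 - NormalDist().cdf(abs(z)))
        print(f"look {look}: z = {z:+.2f}, nominal p = {nominal_p:.4f}")
        if abs(z) > pocock_critical:
            return look                      # sufficient evidence to stop early
    return None                              # continue to the final analysis

print(interim_decisions([1.1, 1.9, 2.6]))    # stops at the third look
```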
Abstract:
This investigation examines metal release from freshwater sediment using sequential extraction and single-step cold-acid leaching. The concentrations of Cd, Cr, Cu, Fe, Ni, Pb and Zn released using a standard 3-step sequential extraction (Rauret et al., 1999) are compared to those released using a 0.5 M HCl leach. The results show that the three sediments behave in very different ways when subjected to the same leaching experiments: the cold-acid extraction appears to remove higher relative concentrations of metals from the iron-rich sediment than from the other two sediments. Cold-acid extraction also appears to be more effective at removing metals from sediments with crystalline iron oxides than the "reducible" step of the sequential extraction. The results show that a single-step acid leach can be just as effective as sequential extractions at removing metals from sediment and is a great deal less time-consuming.
Abstract:
Sorghum (Sorghum bicolor) was grown for 40 days in a rhizocylinder (a growth container which permitted access to rhizosphere and non-rhizosphere soil) in two soils of low P status. Soils were fertilized with different rates of ammonium and nitrate, supplemented with 40 mg phosphorus (P) kg(-1), and inoculated with either Glomus mosseae (Nicol. and Gerd.) or non-mycorrhizal root inoculum. N-Serve (2 mg kg(-1)) was added to prevent nitrification. At harvest, soil from around the roots was collected at distances of 0-5, 5-10, and 10-20 mm from the root core, which was 35 mm in diameter. Sorghum plants, with and without mycorrhiza, grew larger with NH4+ than with NO3- application. After measuring soil pH, suspensions of the same sample were titrated against 0.01 M HCl or 0.01 M NaOH until soil pH reached the non-planted pH level. The acid or base requirement for each sample was calculated as mmol H+ or OH- kg(-1) soil. The magnitude of liberated acid or base depended on the form and rate of nitrogen and on soil type. Whether the plant root was uninfected or infected with mycorrhiza, soil pH changes extended up to 5 mm from the root core surface. In both soils, ammonium as an N source resulted in lower soil pH than nitrate. Mycorrhizal (VAM) inoculation did not enhance this difference. In mycorrhizal-inoculated soil, P depletion extended up to 20 mm from the root surface. In non-VAM-inoculated soil, P depletion extended up to 10 mm from the root surface and remained unchanged at greater distances. In the mycorrhizal-inoculated soils, the contribution of the 0-5 mm soil zone to P uptake was greater than that of the core soil, which reflects the hyphal contribution to P supply. Nitrogen (N) applications that caused acidification increased P uptake because of increased demand; there is no direct evidence that the increased uptake was due to acidity increasing the solubility of P, although this may have been a minor effect.
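For the acid or base requirement reported above, a small worked conversion from a titration result to mmol H+ kg(-1) soil (the titrant volume and soil mass in the example are hypothetical):

```python
def acid_requirement_mmol_per_kg(titrant_volume_ml, titrant_molarity, soil_mass_g):
    """Acid (or base) requirement of a soil suspension, in mmol H+ (or OH-) per kg soil.

    mmol of titrant = volume (L) * molarity (mol/L) * 1000,
    normalised to the dry soil mass used in the suspension.
    """
    mmol_titrant = (titrant_volume_ml / 1000.0) * titrant_molarity * 1000.0
    return mmol_titrant / (soil_mass_g / 1000.0)

# Hypothetical example: 6.5 mL of 0.01 M HCl needed for 10 g of soil.
print(acid_requirement_mmol_per_kg(6.5, 0.01, 10.0))   # -> 6.5 mmol H+ kg(-1) soil
```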
Abstract:
A range of archaeological samples have been examined using FT-IR spectroscopy. These include suspected coprolite samples from the Neolithic site of Catalhoyuk in Turkey, pottery samples from the Roman site of Silchester, UK and the Bronze Age site of Gatas, Spain, and unidentified black residues on pottery sherds from the Roman sites of Springhead and Cambourne, UK. For coprolite samples the aim of FT-IR analysis is identification. Identification of coprolites in the field is based on their distinct orange colour; however, such visual identifications can often be misleading due to their similarity with deposits such as ochre and clay. For pottery the aim is to screen those samples that might contain high levels of organic residues which would be suitable for GC-MS analysis. The experiments have shown coprolites to have distinctive spectra, containing strong peaks from calcite, phosphate and quartz; the presence of phosphorus may be confirmed by SEM-EDX analysis. Pottery containing organic residues of plant and animal origin has also been shown to generally display strong phosphate peaks. FT-IR has distinguished between organic resin and non-organic compositions for the black residues, with differences also being seen between organic samples that have the same physical appearance. Further analysis by GC-MS has confirmed the identification of the coprolites through the presence of coprostanol and bile acids, and shows that the majority of organic pottery residues are either fatty acids or mono- or di-acylglycerols from foodstuffs, or triterpenoid resin compounds exposed to high temperatures. One suspected resin sample was shown to contain no organic residues, and it is seen that resin samples with similar physical appearances have different chemical compositions. FT-IR is proposed as a quick and cheap method of screening archaeological samples before subjecting them to the more expensive and time-consuming method of GC-MS. This will eliminate inorganic samples such as clays and ochre from GC-MS analysis, and will screen those samples which are most likely to have a high concentration of preserved organic residues.
Abstract:
A simple and practical technique is described for assessing the risks, that is, the potential for error and consequent loss, in software system development, as identified during a requirements engineering phase. The technique uses a goal-based requirements analysis as a framework to identify and rate a set of key issues in order to arrive at estimates of the feasibility and adequacy of the requirements. The technique is illustrated, and we show how it has been applied to a real systems development project and how problems in this project could have been identified earlier, thereby avoiding costly additional work and unhappy users.
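The paper's rating scheme is not given in this summary; the following hedged sketch shows one way key issues per goal could be rated and rolled up into feasibility estimates (the issue names, scale and aggregation are illustrative assumptions, not the published scheme):

```python
def assess_requirements(goals):
    """Roll issue ratings up into crude feasibility estimates per goal.

    goals: {goal_name: {issue_name: rating}} with ratings on a 0 (no concern)
           to 4 (severe concern) scale.  The issue list, scale and averaging
           below are illustrative assumptions only.
    """
    report = {}
    for goal, issues in goals.items():
        severity = sum(issues.values()) / (4.0 * len(issues))   # 0 = fine, 1 = worst
        report[goal] = {
            "feasibility": round(1.0 - severity, 2),
            "worst_issues": sorted(issues, key=issues.get, reverse=True)[:2],
        }
    return report

goals = {
    "capture meter readings remotely": {"stakeholder agreement": 1,
                                        "technical novelty": 3,
                                        "requirement stability": 2},
    "produce monthly billing report":  {"stakeholder agreement": 0,
                                        "technical novelty": 1,
                                        "requirement stability": 1},
}
for goal, result in assess_requirements(goals).items():
    print(goal, result)
```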