23 results for Pull-out test
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
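The construction described above couples orthogonal forward regression with an incrementally minimised leave-one-out (PRESS) score. The sketch below is an illustrative simplification, not the authors' algorithm: it uses a plain least-squares fit and the exact hat-matrix PRESS identity instead of the orthogonal recursions and local regularisation, and names such as `gaussian_design`, `press_score` and `forward_select` are ours.

```python
import numpy as np

def gaussian_design(X, centres, width):
    """Kernel design matrix: one Gaussian column per candidate centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def press_score(Phi_S, y):
    """Exact leave-one-out mean-square error of a least-squares fit."""
    H = Phi_S @ np.linalg.pinv(Phi_S)                    # hat matrix
    resid = y - H @ y
    loo = resid / (1.0 - np.clip(np.diag(H), None, 1.0 - 1e-12))
    return np.mean(loo ** 2)

def forward_select(Phi, y, max_terms=20):
    """Greedy forward selection of kernel columns by the leave-one-out score;
    terminates automatically once adding a term no longer reduces the score."""
    selected, remaining, best = [], list(range(Phi.shape[1])), np.inf
    for _ in range(max_terms):
        score, j = min((press_score(Phi[:, selected + [j]], y), j) for j in remaining)
        if score >= best:                                # score stopped improving
            break
        selected.append(j); remaining.remove(j); best = score
    weights, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, weights
```

For density estimation the target y would be a Parzen window estimate evaluated at the training points, as in the regression formulation the abstract describes.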
Abstract:
A unified approach is proposed for sparse kernel data modelling that includes regression and classification as well as probability density function estimation. The orthogonal-least-squares forward selection method based on the leave-one-out test criterion is presented within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic sparse kernel data modelling approach.
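To illustrate the "unified" framing, the sketch below (ours, not the paper's code) fits the same Gaussian kernel expansion by least squares and only swaps the target: observed responses for regression, ±1 labels for classification, and a Parzen-window estimate for density estimation. The forward selection and leave-one-out machinery described above is omitted.

```python
import numpy as np

def gaussian_design(X, centres, width):
    """Gaussian kernel design matrix (one column per candidate centre)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_kernel_model(X, target, width):
    """One least-squares routine serves all three tasks; only the target changes."""
    Phi = gaussian_design(X, X, width)
    weights, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return weights

rng = np.random.default_rng(0)
X, h = rng.normal(size=(100, 2)), 0.5
y_regression = X[:, 0] + 0.1 * rng.normal(size=100)          # observed responses
y_classification = np.sign(X[:, 0])                          # +/-1 class labels
y_density = gaussian_design(X, X, h).mean(axis=1) / (2 * np.pi * h ** 2)  # Parzen target (2-D data)
```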
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
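A common form of the multiplicative nonnegative quadratic programming (MNQP) update mentioned above, for minimising 0.5·wᵀBw − cᵀw subject to nonnegative kernel weights that sum to one, is sketched below. This is an illustrative sketch under those standard constraints, not the authors' exact recursion; the small constants and clipping are safeguards we add.

```python
import numpy as np

def mnqp_weights(Phi, target, n_iter=200):
    """Multiplicative updates for min 0.5*w'Bw - c'w  with w >= 0 and sum(w) = 1.
    B = Phi'Phi and c = Phi'target are elementwise nonnegative in the
    Gaussian-kernel / Parzen-target setting assumed here."""
    B, c = Phi.T @ Phi, Phi.T @ target
    w = np.full(Phi.shape[1], 1.0 / Phi.shape[1])    # start from uniform weights
    for _ in range(n_iter):
        t = w / (B @ w + 1e-12)
        h = (1.0 - t @ c) / (t.sum() + 1e-12)        # enforces the sum-to-one constraint
        w = np.maximum(t * (c + h), 0.0)             # keep weights nonnegative
        w /= w.sum()                                  # re-impose the unity constraint
    return w                                          # near-zero weights can be pruned
```

Because the updates drive many weights towards zero, pruning the near-zero entries is what gives the further reduction in model size noted in the abstract.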
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample Parzen window density estimate.
Abstract:
Using the classical Parzen window (PW) estimate as the desired response, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegative and unity constraints for the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct a SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
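The PRESS statistic referred to above, the sum of squared delete-1 residuals, does not require refitting the model once per sample; the sketch below shows the standard hat-matrix identity e_i / (1 − h_ii) next to the naive refitting definition for comparison. It is a generic illustration of the statistic (function names are ours), not the paper's cheaper orthogonal-decomposition recursion.

```python
import numpy as np

def press(Phi, y):
    """PRESS = sum of squared leave-one-out residuals, via e_i / (1 - h_ii)."""
    H = Phi @ np.linalg.pinv(Phi)                    # hat (projection) matrix
    e = y - H @ y
    return np.sum((e / (1.0 - np.diag(H))) ** 2)

def press_naive(Phi, y):
    """Same quantity obtained by explicitly refitting with each sample deleted."""
    total = 0.0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(Phi[keep], y[keep], rcond=None)
        total += (y[i] - Phi[i] @ w) ** 2
    return total
```

For a well-conditioned design matrix the two functions agree to numerical precision, which is what makes the leave-one-out criterion usable inside an incremental model-construction loop.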
Abstract:
A unified approach is proposed for data modelling that includes supervised regression and classification applications as well as unsupervised probability density function estimation. The orthogonal-least-squares regression based on the leave-one-out test criterion is formulated within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic data-modelling approach for constructing parsimonious kernel models with excellent generalisation capability. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact and accurate density estimate.
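Each "tunable kernel" above carries its own centre and diagonal covariance, in contrast to the fixed-kernel model with a single common width. The sketch below only shows how such a sparse mixture is evaluated (illustrative code, ours); the orthogonal forward selection of the kernel parameters and the MNQP weight update are not reproduced.

```python
import numpy as np

def diag_gaussian(X, centre, diag_var):
    """Gaussian density with a per-kernel diagonal covariance matrix."""
    norm = np.prod(2.0 * np.pi * diag_var) ** -0.5
    z = ((X - centre) ** 2 / diag_var).sum(axis=1)
    return norm * np.exp(-0.5 * z)

def mixture_density(X, centres, diag_vars, weights):
    """Sparse density estimate: weighted sum of tunable Gaussian kernels.
    The weights are assumed nonnegative and summing to one."""
    cols = np.stack([diag_gaussian(X, m, v) for m, v in zip(centres, diag_vars)], axis=1)
    return cols @ weights
```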
Abstract:
Purpose: Malawi's current extension policy supports pluralism and advocates responsiveness to farmer demand. We investigate whether smallholder farmers' experience supports the assumption that access to multiple service providers leads to extension and advisory services that respond to the needs of farmers. Design/methodology/approach: Within a case study approach, two villages were purposively selected for in-depth qualitative analysis of available services and farmers' experiences. Focus group discussions were held separately with male and female farmers in each village, followed by semi-structured interviews with 12 key informants selected through snowball sampling. Transcripts were analysed by themes and summaries of themes were made from cross-case analysis. Findings: Farmers appreciate having access to a variety of sources of technical advice and enterprise-specific technology. However, most service providers continue to dominate and dictate what they will offer. Market access remains a challenge, as providers still emphasize pushing a particular technology to increase farm productivity rather than addressing farmers' expressed needs. Although farmers work in groups, providers do not seek to strengthen these to enable active interaction and to link them to input and produce markets. This limits farmers' capacity to continue with innovations after service providers pull out. Poor coordination between providers limits exploitation of potential synergies amongst actors. Practical implications: Service providers can adapt their approach to engage farmers in discussion of their needs and work collaboratively to address them. At a system level, institutions that have a coordination function can play a more dynamic role in brokering interaction between providers and farmers to ensure coverage and responsiveness. Originality/value: The study provides a new farmer perspective on the implementation of extension reforms.
Abstract:
Internationally agreed standard protocols for assessing chemical toxicity of contaminants in soil to worms assume that the test soil does not need to equilibrate with the chemical to be tested prior to the addition of the test organisms and that the chemical will exert any toxic effect upon the test organism within 28 days. Three experiments were carried out to investigate these assumptions. The first experiment was a standard toxicity test where lead nitrate was added to a soil in solution to give a range of concentrations. The mortality of the worms and the concentration of lead in the survivors were determined. The LC50s for 14 and 28 days were 5311 and 5395 µg Pb g⁻¹ soil, respectively. The second experiment was a timed lead accumulation study with worms cultivated in soil containing either 3000 or 5000 µg Pb g⁻¹ soil. The concentration of lead in the worms was determined at various sampling times. Uptake at both concentrations was linear with time. Worms in the 5000 µg g⁻¹ soil accumulated lead at a faster rate (3.16 µg Pb g⁻¹ tissue day⁻¹) than those in the 3000 µg g⁻¹ soil (2.21 µg Pb g⁻¹ tissue day⁻¹). The third experiment was a timed experiment with worms cultivated in soil containing 7000 µg Pb g⁻¹ soil. Soil and lead nitrate solution were mixed and stored at 20 °C. Worms were added at various times over a 35-day period. The time to death increased from 23 h, when worms were added directly after the lead was added to the soil, to 67 h when worms were added after the soil had equilibrated with the lead for 35 days. In artificially Pb-amended soils the worms accumulate Pb over the duration of their exposure to the Pb. Thus time-limited toxicity tests may be terminated before the worm body load has reached a toxic level. This could result in under-estimates of the toxicity of Pb to worms. As the equilibration time of artificially amended Pb-bearing soils increases, the bioavailability of Pb decreases. Thus addition of worms shortly after addition of Pb to soils may result in the over-estimate of Pb toxicity to worms. The current OECD acute worm toxicity test fails to take these two phenomena into account, thereby reducing the environmental relevance of the contaminant toxicities it is used to calculate. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Ecological risk assessments must increasingly consider the effects of chemical mixtures on the environment as anthropogenic pollution continues to grow in complexity. Yet testing every possible mixture combination is impractical and unfeasible; thus, there is an urgent need for models that can accurately predict mixture toxicity from single-compound data. Currently, two models are frequently used to predict mixture toxicity from single-compound data: concentration addition and independent action (IA). The accuracy of the predictions generated by these models is currently debated and needs to be resolved before their use in risk assessments can be fully justified. The present study addresses this issue by determining whether the IA model adequately described the toxicity of binary mixtures of five pesticides and other environmental contaminants (cadmium, chlorpyrifos, diuron, nickel, and prochloraz), each with dissimilar modes of action, on the reproduction of the nematode Caenorhabditis elegans. In three out of 10 cases, the IA model failed to describe mixture toxicity adequately, with significant synergism or antagonism being observed. In a further three cases, there was an indication of synergy, antagonism, and effect-level-dependent deviations, respectively, but these were not statistically significant. The extent of the significant deviations that were found varied, but all were such that the predicted percentage effect seen on reproductive output would have been wrong by 18 to 35% (i.e., the effect concentration expected to cause a 50% effect led to an 85% effect). The presence of such a high number and variety of deviations has important implications for the use of existing mixture toxicity models for risk assessments, especially where all or part of the deviation is synergistic.
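The independent action (IA) model referred to above predicts the joint effect of a mixture from single-compound effects by multiplying the probabilities of non-response. The sketch below is that standard formulation only; the study's dose-response fitting and statistical deviation testing are not reproduced.

```python
def independent_action(single_effects):
    """IA prediction: combined effect = 1 - product of (1 - individual effects).
    Each entry is the fractional effect (0..1) a compound causes on its own at
    its concentration in the mixture, e.g. fractional reduction in reproduction."""
    unaffected = 1.0
    for effect in single_effects:
        unaffected *= (1.0 - effect)
    return 1.0 - unaffected

# Two compounds each causing a 30% effect alone:
# independent_action([0.3, 0.3]) -> 0.51 predicted combined effect
```

Observed effects well above this prediction indicate synergism, and effects well below it indicate antagonism, which is the kind of deviation reported in the abstract.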
Abstract:
Anomalous heavy snow during winter or spring has long been regarded as a possible precursor of deficient Indian monsoon rainfall during the subsequent summer. However, previous work in this field is inconclusive, in terms of the mechanism that communicates snow anomalies to the monsoon summer, and even the region from which snow has the most impact. In this study we explore these issues in coupled and atmosphere-only versions of the Hadley Centre model. A 1050-year control integration of the HadCM3 coupled model, which represents the seasonal cycle of snow cover over the Eurasian continent well, is analysed and shows evidence for weakened monsoons being preceded by strong snow forcing (in the absence of ENSO) over either the Himalaya/Tibetan Plateau or north/west Eurasia regions. However, empirical orthogonal function (EOF) analysis of springtime interannual variability in snow depth shows the leading mode to have opposite signs between these two regions, suggesting that competing mechanisms may be possible. To determine the dominant region, ensemble integrations are carried out using HadAM3, the atmospheric component of HadCM3, and a variety of anomalous snow forcing initial conditions obtained from the control integration of the coupled model. Forcings are applied during spring in separate experiments over the Himalaya/Tibetan Plateau and north/west Eurasia regions, in conjunction with climatological SSTs in order to avoid the direct effects of ENSO. With the aid of idealized forcing conditions in sensitivity tests, we demonstrate that forcing from the Himalaya region is dominant in this model via a Blanford-type mechanism involving reduced surface sensible heat and longwave fluxes, reduced heating of the troposphere over the Tibetan Plateau and consequently a reduced meridional tropospheric temperature gradient which weakens the monsoon during early summer. Snow albedo is shown to be key to the mechanism, explaining around 50% of the perturbation in sensible heating over the Tibetan Plateau, and accounting for the majority of cooling through the troposphere.
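The EOF analysis of springtime snow-depth variability mentioned above amounts to an eigen-decomposition of the anomaly field, conveniently done via an SVD. The sketch below is a minimal, unweighted illustration (function name ours); a study of this kind would normally apply appropriate area weighting to the gridded field.

```python
import numpy as np

def leading_eof(field):
    """Leading EOF of an (n_years, n_gridpoints) field via SVD of its anomalies.
    Returns the spatial pattern, its principal-component time series and the
    fraction of interannual variance it explains."""
    anomalies = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    pattern = Vt[0]                        # leading spatial mode
    pc_series = U[:, 0] * s[0]             # associated time series
    variance_fraction = s[0] ** 2 / np.sum(s ** 2)
    return pattern, pc_series, variance_fraction
```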
A refined LEED analysis of water on Ru{0001}: an experimental test of the partial dissociation model
Abstract:
Despite a number of earlier studies which seemed to confirm molecular adsorption of water on close-packed surfaces of late transition metals, new controversy has arisen over a recent theoretical work by Feibelman, according to which partial dissociation occurs on the Ru{0001} surface, leading to a mixed (H2O + OH + H) superstructure. Here, we present a refined LEED-IV analysis of the (√3 × √3)R30°-D2O-Ru{0001} structure, explicitly testing this new model by Feibelman. Our results favour the model proposed earlier by Held and Menzel, which assumes intact water molecules with almost coplanar oxygen atoms and out-of-plane hydrogen atoms atop the slightly higher oxygen atoms. The partially dissociated model with an almost identical arrangement of oxygen atoms cannot, however, be unambiguously excluded, especially when the single hydrogen atoms are not present in the surface unit cell. In contrast to the earlier LEED-IV analysis, we can clearly exclude a buckled geometry of oxygen atoms.
Abstract:
The spatial distribution of CO2 levels in a classroom, examined in previous fieldwork, demonstrated some evidence of variation in CO2 concentration across the classroom space. Significant fluctuations in CO2 concentration were found at different sampling points depending on the ventilation strategies and environmental conditions prevailing in individual classrooms. However, how these variations are affected by the emitting sources and the room air movement remains unknown. Hence, it was concluded that detailed investigation of the CO2 distribution needed to be performed on a smaller scale. As a result, it was decided to use an environmental chamber with various methods and rates of ventilation, for the same internal temperature and heat loads, to study the effect of ventilation strategy and air movement on the distribution of CO2 concentration in a room. The role of human exhalation and its interaction with the plume induced by the body's convective flow and the room air movement due to different ventilation strategies were studied in a chamber at the University of Reading. These phenomena are considered to be important in understanding and predicting the flow patterns in a space and how they impact on the distribution of contaminants. This paper attempts to study the CO2 dispersion and distribution in the exhalation zone of two people sitting in a chamber as well as throughout the occupied zone of the chamber. The horizontal and vertical distributions of CO2 were sampled at locations where CO2 variation was expected to be high. Although the room size, source location, ventilation rate and the location of air supply and extract devices can all influence the CO2 distribution, this article gives general guidelines on the optimum positioning of a CO2 sensor in a room.