65 results for method of separating variables
in CentAUR: Central Archive University of Reading - UK
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage, sampling with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and the analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
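As a minimal sketch of the accumulation step, the following simulates a balanced three-stage nested design with assumed variance components (2.0, 1.0 and 0.5, top stage to bottom), estimates the components by hierarchical analysis of variance, and accumulates them from the shortest lag upward; the paper's own analyses use a Fortran program and the listed REML code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced 3-stage nested design: a classes, b subclasses per class,
# n samples per subclass.  True variance components (assumed here
# purely for illustration), from the top stage down:
a, b, n = 20, 4, 5
s2_top, s2_mid, s2_bot = 2.0, 1.0, 0.5

y = (rng.normal(0, np.sqrt(s2_top), (a, 1, 1))
     + rng.normal(0, np.sqrt(s2_mid), (a, b, 1))
     + rng.normal(0, np.sqrt(s2_bot), (a, b, n)))

# Hierarchical analysis of variance: mean squares for each stage.
grand = y.mean()
m_ab = y.mean(axis=2)                      # subclass means
m_a = y.mean(axis=(1, 2))                  # class means
ms_a = b * n * np.sum((m_a - grand) ** 2) / (a - 1)
ms_b = n * np.sum((m_ab - m_a[:, None]) ** 2) / (a * (b - 1))
ms_e = np.sum((y - m_ab[:, :, None]) ** 2) / (a * b * (n - 1))

# Components of variance from the expected mean squares.
c_bot = ms_e
c_mid = (ms_b - ms_e) / n
c_top = (ms_a - ms_b) / (b * n)

# Accumulate from the shortest lag upward: a rough variogram.
gamma = np.cumsum([c_bot, c_mid, c_top])
print(gamma)
```

With a reasonably large design the accumulated values approach the running sums of the true components (0.5, 1.5, 3.5).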
Abstract:
Background: The electroencephalogram (EEG) may be described by a large number of different feature types, and automated feature selection methods are needed in order to reliably identify features which correlate with continuous independent variables. New method: A method is presented for the automated identification of features that differentiate two or more groups in neurological datasets based upon a spectral decomposition of the feature set. Furthermore, the method is able to identify features that relate to continuous independent variables. Results: The proposed method is first evaluated on synthetic EEG datasets and observed to reliably identify the correct features. The method is then applied to EEG recorded during a music listening task and is observed to automatically identify neural correlates of music tempo changes similar to neural correlates identified in a previous study. Finally, the method is applied to identify neural correlates of music-induced affective states. The identified neural correlates reside primarily over the frontal cortex and are consistent with widely reported neural correlates of emotions. Comparison with existing methods: The proposed method is compared to the state-of-the-art methods of canonical correlation analysis and common spatial patterns in order to identify features differentiating synthetic event-related potentials of different amplitudes, and is observed to exhibit greater performance as the number of unique groups in the dataset increases. Conclusions: The proposed method is able to identify neural correlates of continuous variables in EEG datasets and is shown to outperform canonical correlation analysis and common spatial patterns.
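The paper's spectral decomposition is not reproduced here, but the underlying goal — identifying features that correlate with a continuous independent variable — can be illustrated with a small synthetic sketch in which one of ten simulated EEG features is constructed to track music tempo (all names and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "EEG" feature matrix: 200 trials x 10 features, with
# feature 3 constructed to track the continuous variable (tempo).
n_trials, n_features = 200, 10
tempo = rng.uniform(60, 180, n_trials)           # beats per minute
X = rng.normal(size=(n_trials, n_features))
X[:, 3] += 0.05 * (tempo - tempo.mean())

# Rank features by absolute correlation with the continuous variable.
r = np.array([np.corrcoef(X[:, j], tempo)[0, 1] for j in range(n_features)])
best = int(np.argmax(np.abs(r)))
print(best, round(abs(r[best]), 2))
```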
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter out trend, so that only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
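A minimal sketch of the method-of-moments (Matheron) estimator on a regular one-dimensional transect, using a simulated surrogate for spatially correlated clay content (the smoothing window and transect length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated transect of a soil property (a moving-average surrogate
# for spatially correlated clay content) on a regular 1-unit grid.
z = np.convolve(rng.normal(size=260), np.ones(61) / 61, mode="valid")

def mom_variogram(z, max_lag):
    """Matheron's method-of-moments estimator on a regular transect."""
    gam = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]                 # all pairs at lag h
        gam.append(0.5 * np.mean(d ** 2))  # semivariance
    return np.array(gam)

gam = mom_variogram(z, 20)
print(gam.round(4))
```

For a spatially correlated process the estimated semivariance rises with lag, as expected of a variogram.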
Abstract:
A vertical conduction current flows in the atmosphere as a result of the global atmospheric electric circuit. The current at the surface consists of the conduction current and a locally generated displacement current, which are often approximately equal in magnitude. A method of separating the two currents using two collectors of different geometry is investigated. The picoammeters connected to the collectors have an RC time constant of approximately 3 s, permitting the investigation of higher frequency air-earth current changes than previously achieved. The displacement current component of the air-earth current derived from the instrument agrees with calculations using simultaneous data from a co-located fast response electric field mill. The mean value of the nondisplacement current measured over 9 h was 1.76 ± 0.002 pA m⁻². (c) 2006 American Institute of Physics.
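The separation amounts to solving a two-by-two linear system: each collector measures a different mixture of the conduction and displacement current densities. The geometric coefficients below are illustrative assumptions, not the instrument's actual values:

```python
import numpy as np

# Each collector responds to a different mixture of the conduction
# current density j_c and the displacement current density j_d.
# The coupling coefficients (m^2) below are assumed for illustration.
G = np.array([[1.0, 1.0],      # collector 1: fully exposed
              [1.0, 0.2]])     # collector 2: partly shielded from field changes

j_c_true = 1.76e-12            # conduction current density, A m^-2
j_d_true = 5.00e-12            # displacement current density, A m^-2

# Currents the two picoammeters would record.
i = G @ np.array([j_c_true, j_d_true])

# Separating the densities is solving the 2x2 linear system.
j_c_hat, j_d_hat = np.linalg.solve(G, i)
print(j_c_hat, j_d_hat)
```

Any two geometries with distinct coupling ratios (a non-singular G) suffice; the closer the ratios, the more the noise is amplified in the solution.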
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
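A toy version of such an inversion, assuming a circular orbit so that the unknowns reduce to an initial phase theta0 and mean motion n, recovered by least squares from noisy position observations (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed true parameters: initial phase (rad) and mean motion (rad per unit time).
theta0_true, n_true = 0.7, 0.3
t = np.linspace(0.0, 20.0, 50)
x = np.cos(theta0_true + n_true * t) + rng.normal(0, 0.01, t.size)
y = np.sin(theta0_true + n_true * t) + rng.normal(0, 0.01, t.size)

# Inversion: recover the angle from the noisy observations, unwrap it,
# and solve the linear least-squares problem theta(t) = theta0 + n*t.
theta = np.unwrap(np.arctan2(y, x))
A = np.c_[np.ones_like(t), t]
theta0_hat, n_hat = np.linalg.lstsq(A, theta, rcond=None)[0]
print(theta0_hat, n_hat)
```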
Abstract:
The existence of inertial steady currents that separate from a coast and meander afterward is investigated. By integrating the zonal momentum equation over a suitable area, it is shown that retroflecting currents cannot be steady in a reduced gravity or in a barotropic model of the ocean. Even friction cannot negate this conclusion. Previous literature on this subject, notably the discrepancy between several articles by Nof and Pichevin on the unsteadiness of retroflecting currents and steady solutions presented in other papers, is critically discussed. For more general separating current systems, a local analysis of the zonal momentum balance shows that given a coastal current with a specific zonal momentum structure, an inertial, steady, separating current is unlikely, and the only analytical solution provided in the literature is shown to be inconsistent. In a basin-wide view of these separating current systems, a scaling analysis reveals that steady separation is impossible when the interior flow is nondissipative (e.g., linear Sverdrup-like). These findings point to the possibility that a large part of the variability in the world’s oceans is due to the separation process rather than to instability of a free jet.
Abstract:
Parameters to be determined in a least squares refinement calculation to fit a set of observed data may sometimes usefully be 'predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least squares refinement calculation is described, which leaves the operator in full control of the weight that they may wish to attach to the predicate values of the parameters.
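One common way to implement predicated parameters, sketched below under the assumption that the predicate enters as a weighted pseudo-observation appended to the design matrix (the algorithm in the paper may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ordinary data: y = b0 + b1*x with true slope 2.0 (values assumed).
x = np.linspace(0.0, 10.0, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, x.size)
A = np.c_[np.ones_like(x), x]

def fit_with_predicate(A, y, predicate, weight):
    """Least squares with the slope 'predicated' to an external value:
    append one pseudo-observation b1 = predicate carrying the chosen weight."""
    A_aug = np.vstack([A, [0.0, np.sqrt(weight)]])
    y_aug = np.append(y, np.sqrt(weight) * predicate)
    return np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]

loose = fit_with_predicate(A, y, predicate=1.5, weight=0.01)  # data dominate
tight = fit_with_predicate(A, y, predicate=1.5, weight=1e6)   # predicate dominates
print(loose[1], tight[1])
```

Varying the weight moves the fitted parameter continuously between the data-driven value and the predicate, which is exactly the control the abstract describes.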
Abstract:
A total of 133 samples (53 fermented unprocessed, 19 fermented processed, 62 urea-treated processed) of whole crop wheat (WCW) and 16 samples (five fermented unprocessed, six fermented processed, five urea-treated processed) of whole crop barley (WCB) were collected from commercial farms over two consecutive years (2003/2004 and 2004/2005). Disruption of the grains to increase starch availability was achieved at the point of harvest by processors fitted to the forage harvesters. All samples were subjected to laboratory analysis whilst 50 of the samples (24 from Year 1, 26 from Year 2; all WCW except four WCB in Year 2) were subjected to in vivo digestibility and energy value measurements using mature wether sheep. Urea-treated WCW had higher (P<0.05) pH, and dry matter (DM) and crude protein contents, and lower concentrations of fermentation products than fermented WCW. Starch was generally lower in fermented, unprocessed WCW and no effect of crop maturity at harvest (as indicated by DM content) on starch concentrations was seen. Urea-treated WCW had higher (P<0.05) in vivo digestible organic matter contents in the DM (DOMD) in Year 1 although this was not recorded in Year 2. There was a close relationship between the digestibility values of organic matter and gross energy, thus aiding the use of DOMD to predict metabolisable energy (ME) content. A wide range of ME values was observed (WCW 8.7-11.8 MJ/kg DM; WCB 7.9-11.2 MJ/kg DM) with the overall ME/DOMD ratio (ME = 0.0156 DOMD) in line with studies in other forages. There was no evidence that a separate ME/DOMD relationship was needed for WCB, which is helpful for practical application. This ratio and other parameters were affected by year of harvest (P<0.05), highlighting the influence of environmental and other undefined factors.
The variability in the composition and nutritive value of WCW and WCB highlights the need for reliable and accurate evaluation methods to be available to assess the value of these forages before they are included in diets for dairy cows. (C) 2008 Elsevier B.V. All rights reserved.
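The reported overall ratio gives a one-line predictor; assuming DOMD is expressed in g/kg DM, so that ME comes out in MJ/kg DM:

```python
# Reported overall ratio: ME = 0.0156 * DOMD, assuming DOMD in g/kg DM
# and ME in MJ/kg DM.
def me_from_domd(domd_g_per_kg):
    return 0.0156 * domd_g_per_kg

print(round(me_from_domd(700), 2))  # a DOMD of 700 g/kg DM gives about 10.92
```

This value sits within the reported WCW range of 8.7-11.8 MJ/kg DM.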
Abstract:
Four multiparous cows with cannulas in the rumen and proximal duodenum were used in early lactation in a 4 x 4 Latin square experiment to investigate the effect of method of application of a fibrolytic enzyme product on digestive processes and milk production. The cows were given ad libitum a total mixed ration (TMR) composed of 57% (dry matter basis) forage (3:1 corn silage:grass silage) and 43% concentrates. The TMR contained (g/kg dry matter): 274 neutral detergent fiber, 295 starch, 180 crude protein. Treatments were TMR alone or TMR with the enzyme product added (2 kg/1000 kg TMR dry matter) either sprayed on the TMR 1 h before the morning feed (TMR-E), sprayed only on the concentrate the day before feeding (Concs-E), or infused into the rumen for 14 h/d (Rumen-E). There was no significant effect on either feed intake or milk yield, but both were highest on TMR-E. Rumen digestibility of dry matter, organic matter, and starch was unaffected by the enzyme. Digestibility of NDF was lowest on TMR-E in the rumen but highest postruminally. Total tract digestibility was highest on TMR-E for dry matter, organic matter, and starch, but treatment differences were nonsignificant for neutral detergent fiber. Corn silage stover retention time in the rumen was reduced by all enzyme treatments but postruminal transit time was increased, so the decline in total tract retention time with enzymes was not significant. It is suggested that the tendency for enzymes to reduce particle retention time in the rumen may, by reducing the time available for fibrolysis to occur, at least partly explain the variability in the reported responses to enzyme treatment.
Abstract:
Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.
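A minimal sketch of the two-response setup: toxicity and efficacy are binary and each follows its own logistic dose-response curve. Parameter values, cohort size, and the simple safety constraint are illustrative assumptions, not the design evaluated in the article:

```python
import numpy as np

rng = np.random.default_rng(4)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Five doses on a log scale; each response has its own logistic curve.
doses = np.log([1.0, 2.0, 4.0, 8.0, 16.0])
p_tox = logistic(-3.0 + 1.2 * doses)   # undesirable events
p_eff = logistic(-1.0 + 1.5 * doses)   # therapeutic benefit

# Simulate one cohort of 6 patients per dose (binary responses).
tox = rng.binomial(6, p_tox)
eff = rng.binomial(6, p_eff)

# A simple safety constraint on the assumed curves: only doses with
# P(toxicity) below 0.3 are admissible for escalation.
admissible = doses[p_tox < 0.3]
print(tox, eff, admissible.size)
```

A simulation study of the kind described would repeat such draws many times, updating the logistic-model posterior between cohorts and recording how often the procedure escalates safely and accurately.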
Abstract:
This paper introduces a simple futility design that allows a comparative clinical trial to be stopped due to lack of effect at any of a series of planned interim analyses. Stopping due to apparent benefit is not permitted. The design is for use when any positive claim should be based on the maximum sample size, for example to allow subgroup analyses or the evaluation of safety or secondary efficacy responses. A final frequentist analysis can be performed that is valid for the type of design employed. Here the design is described and its properties are presented. Its advantages and disadvantages relative to the use of stochastic curtailment are discussed. Copyright (C) 2003 John Wiley & Sons, Ltd.