8 results for Normal value

in CentAUR: Central Archive University of Reading - UK


Relevance:

40.00%

Abstract:

This article illustrates that not all statistical software packages correctly calculate the p-value for the classical F test comparing two independent Normal variances. This is shown with a simple example, and the reasons are discussed. Eight different software packages are considered.
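For illustration, a minimal Python sketch (not code from the article; the data below are made up) of the correct two-sided p-value, obtained by doubling the smaller tail probability of the F distribution:

    import numpy as np
    from scipy import stats

    def f_test_p_value(x, y):
        """Two-sided p-value for H0: equal variances of two independent Normal samples."""
        f = np.var(x, ddof=1) / np.var(y, ddof=1)     # ratio of unbiased sample variances
        cdf = stats.f.cdf(f, len(x) - 1, len(y) - 1)  # lower-tail probability of the F statistic
        return 2.0 * min(cdf, 1.0 - cdf)              # double the smaller tail

    x = [4.2, 5.1, 3.8, 4.9, 5.5]
    y = [3.9, 4.0, 4.1, 4.2, 3.8, 4.0]
    print(f_test_p_value(x, y))

Doubling the smaller tail keeps the test two-sided regardless of which sample variance happens to be larger.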

Relevance:

30.00%

Abstract:

There is increasing concern about soil enrichment with K+ and subsequent potential losses following long-term application of poor-quality water to agricultural land. Different models are increasingly being used to predict or analyse water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to the potassium (K+) leaching data; the two models produced equivalent goodness of fit. Breakthrough curves simulated for a range of CaCl2 concentrations using parameters estimated at 15 mmol l^-1 CaCl2 showed an earlier peak position, associated with a higher K+ concentration, as the CaCl2 concentration used in the leaching experiments decreased. In a second method, the parameters estimated from the 15 mmol l^-1 CaCl2 solution were retained for all other CaCl2 concentrations and only the retardation factor (R) was optimised for each data set; this gave better predictions. With decreasing CaCl2 concentration, the optimised R had to exceed the measured value (except at 10 mmol l^-1 CaCl2) when the parameters estimated at 15 mmol l^-1 CaCl2 were used. Both models suffer from the fact that they must be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
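As an illustrative sketch of the second method (not the authors' code; the classical Ogata-Banks step-input solution of the CDE is assumed, and all parameter values and data are hypothetical), the retardation factor R can be optimised against a breakthrough curve with D and v held fixed:

    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    D, v, x = 2.0, 5.0, 30.0   # dispersion coeff. (cm^2/h), pore velocity (cm/h), depth (cm)

    def cde_btc(t, R):
        """Relative concentration C/C0 at depth x for retardation factor R (step input)."""
        s = 2.0 * np.sqrt(D * R * t)
        return 0.5 * (erfc((R * x - v * t) / s)
                      + np.exp(v * x / D) * erfc((R * x + v * t) / s))

    # Hypothetical breakthrough data generated at R = 2.5 with a little noise.
    t_obs = np.linspace(1.0, 40.0, 20)
    c_obs = cde_btc(t_obs, 2.5) + 0.01 * np.random.default_rng(0).normal(size=t_obs.size)

    (R_fit,), _ = curve_fit(cde_btc, t_obs, c_obs, p0=[1.0], bounds=(0.1, 10.0))
    print(f"optimised retardation factor R = {R_fit:.2f}")

With D and v fixed, the optimiser should recover an R close to the value used to generate the synthetic curve.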

Relevance:

30.00%

Abstract:

In vitro studies were conducted on five sorghum genotypes developed for the dry tropical highland climate of Kenya, which can be fed to ruminants fresh or as silage. The five genotypes comprised two normal white-midrib (WMR) genotypes, coded E1291 and E6518, and three brown-midrib (BMR) genotypes, coded Lan-5, Lan-6 and Lan-12. Whole mature plants (herbage plus grain) and silage made from E1291 were used in the study. An in vitro manual gas production technique was used to compare the nutritive characteristics of these genotypes for ruminants. The sorghums differed significantly in true organic matter degraded (OMDeg), which ranged from 520 to 678 g/kg after 24 h of incubation and from 706 to 805 g/kg after 72 h. All the BMR sorghums had a higher degradability than the WMR genotype E6518 and the silage, with Lan-5 having the highest degradability. Methane produced per g OMDeg ranged from 40.6 to 46.4 mL/g after 24 h of incubation and from 53.1 to 62.6 mL/g after 72 h; it was similar for all genotypes after 24 h, but Lan-12 had the highest methane production after 72 h. After 24 h and 72 h of incubation all the genotypes produced a similar total amount of gas per g OMDeg (293 to 309 and 357 to 385 mL/g, respectively), with similar total short-chain fatty acid concentrations in the liquid digesta (7.8 to 10.4 and 9.5 to 10.3 mmol, respectively) and acetate-to-propionate ratios of 2.16 to 2.49 and 2.35 to 2.87, respectively. The sorghums showed great potential as ruminant feed sources in the region.

Relevance:

30.00%

Abstract:

Five lactating dairy cows with a permanent cannula in the rumen were given (kg DM/d) a normal diet (7.8 concentrates, 5.1 hay) or a low-roughage (LR) diet (11.5 concentrates, 1.2 hay) in two meals daily in a two-period crossover design. Milk fat concentration (g/kg) was severely reduced on the LR diet. To measure production rates of the individual volatile fatty acids (VFA) in the rumen, 0.5 mCi of 1-C-14-acetic acid, 2-C-14-propionic acid or 1-C-14-n-butyric acid was infused into the rumen for 22 h at intervals of 2 to 6 d, and rumen samples were taken over the last 12 h. To measure rumen volume, Cr-EDTA was infused into the rumen continuously and polyethylene glycol was injected 2 h before the morning feed; these results were very variable, so volumes measured by rumen emptying were used instead. Net production of propionic acid more than doubled on LR, whereas acetate and butyrate production was only numerically lower. Net production rates pooled across both diets were significantly related to concentrations for each VFA. Molar proportions of net production were only slightly higher than the molar proportions of concentrations for acetate and propionate, but were lower for butyrate. The net energy value (MJ/d) of production of the three VFA increased from 89.5 on the normal diet to 109.1 on LR, equivalent to 55 and 64% of digestible energy, respectively. Fully interchanging three-pool models of the VFA carbon fluxes are presented. It is concluded that net production rates of VFA can be measured in non-steady states without the need to measure rumen volumes.
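A generic sketch of a fully interchanging three-pool model (illustrative only; the paper's pool structure, rate constants and parameter values are not reproduced here), with each VFA pool receiving a net-production inflow and exchanging carbon with the other two:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameter values, chosen only to make the sketch run.
    p   = np.array([60.0, 20.0, 10.0]) / 24.0   # net production entering each pool (mol C/h)
    k   = np.array([[0.0, 0.3, 0.2],            # k[i, j]: exchange rate, pool j -> pool i (per h)
                    [0.1, 0.0, 0.4],
                    [0.2, 0.1, 0.0]])
    out = np.array([2.0, 2.5, 3.0])             # irreversible outflow/absorption (per h)

    def dq_dt(t, q):
        exchange_in  = k @ q              # carbon arriving from the other pools
        exchange_out = k.sum(axis=0) * q  # carbon leaving for the other pools
        return p + exchange_in - exchange_out - out * q

    sol = solve_ivp(dq_dt, (0.0, 48.0), y0=[1.0, 0.5, 0.2])
    print("pool sizes (mol C) at 48 h:", sol.y[:, -1])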

Relevance:

30.00%

Abstract:

This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of its having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made among a number of experimental treatments that are compared with each other, either with or without an additional control treatment. Methods for obtaining approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and by Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and we propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
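A small Monte Carlo sketch (illustrative, not taken from the paper) makes the selection bias concrete: with two treatments of equal true mean, the observed mean of whichever treatment is selected as the larger systematically over-estimates the truth:

    import numpy as np

    rng = np.random.default_rng(1)
    sigma, n, trials = 1.0, 20, 100_000   # known standard deviation, per-arm sample size
    mu = np.array([0.0, 0.0])             # equal true means (hypothetical)

    # Observed means of the two treatments across many simulated trials.
    means = rng.normal(mu, sigma / np.sqrt(n), size=(trials, 2))
    selected = means.max(axis=1)          # mean of the treatment picked as best

    print("true mean of either treatment:        ", mu[0])
    print("average observed mean after selection:", selected.mean())  # clearly > 0

In this symmetric case the expected observed mean after selection is sigma/sqrt(pi*n) > 0, even though both true means are zero.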

Relevance:

30.00%

Abstract:

This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, in which tail events are modelled using the generalised Pareto distribution and normal market conditions are captured by the empirical distribution function. The VaR estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small-sample bias-corrected extreme value approach, and with those calculated by bootstrapping the unconditional density and by bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields results superior to the other methods, although the small-sample tail index technique is also accurate.
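A hedged sketch of the semi-nonparametric idea (synthetic data; not the paper's estimator, thresholds or contracts): losses beyond a high threshold are modelled with a generalised Pareto distribution, while quantiles below it come from the empirical distribution:

    import numpy as np
    from scipy.stats import genpareto, t as student_t

    rng = np.random.default_rng(0)
    losses = student_t.rvs(df=4, size=5000, random_state=rng)  # heavy-tailed synthetic losses

    u = np.quantile(losses, 0.95)                  # tail threshold (95th percentile)
    exceedances = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0.0)

    def var(alpha):
        """VaR at level alpha: empirical body below u, GPD tail beyond u."""
        if alpha <= 0.95:
            return np.quantile(losses, alpha)
        n, n_u = len(losses), len(exceedances)
        # Peaks-over-threshold tail quantile formula.
        return u + beta / xi * ((n / n_u * (1.0 - alpha)) ** (-xi) - 1.0)

    print("99% VaR:", var(0.99))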

Relevance:

30.00%

Abstract:

The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications to make the cumulative distribution function (CDF) of observed, simulated and forecast river discharge, water level or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geostatistics this transformation is better known as the normal-score transform. This paper discusses some of the problems caused by small sample sizes when applying the NQT in flood forecasting systems, and outlines a novel way to solve them by combining extreme value analysis and non-parametric regression methods. The method is illustrated with examples of hydrological stream-flow forecasts.
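A minimal sketch of the NQT itself (the Weibull plotting-position convention is one common choice, not necessarily the paper's): each observation is mapped to the standard-normal quantile of its rank-based plotting position, which is exactly where small samples leave the tails poorly defined:

    import numpy as np
    from scipy.stats import norm, rankdata

    def nqt(x):
        """Transform a sample to standard-normal scores via Weibull plotting positions."""
        ranks = rankdata(x)                    # ranks 1..n, ties averaged
        return norm.ppf(ranks / (len(x) + 1))  # avoids +/- infinity at the extremes

    discharge = np.array([12.0, 35.0, 8.0, 150.0, 22.0, 60.0, 15.0])  # hypothetical flows
    print(nqt(discharge))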

Relevance:

30.00%

Abstract:

Widespread commercial use of the internet has significantly increased the volume and scope of data being collected by organisations. ‘Big data’ has emerged as a term to encapsulate both the technical and commercial aspects of this growing data collection activity. To date, much of the discussion of big data has centred upon its transformational potential for innovation and efficiency, yet there has been less reflection on its wider implications beyond commercial value creation. This paper builds upon normal accident theory (NAT) to analyse the broader ethical implications of big data. It argues that the strategies behind big data require organisational systems that leave them vulnerable to normal accidents, that is to say, some form of accident or disaster that is both unanticipated and inevitable. Whilst NAT has previously focused on the consequences of physical accidents, this paper suggests a new form of system accident that we label data accidents. These have distinct, less tangible and more complex characteristics, and raise significant questions over the role of individual privacy in a ‘data society’. The paper concludes by considering the ways in which the risks of such data accidents might be managed or mitigated.