71 results for Statistical testing


Relevance:

40.00%

Publisher:

Abstract:

We have undertaken two-dimensional gel electrophoresis proteomic profiling on a series of cell lines with different recombinant antibody production rates. Due to the nature of gel-based experiments not all protein spots are detected across all samples in an experiment, and hence datasets are invariably incomplete. New approaches are therefore required for the analysis of such graduated datasets. We approached this problem in two ways. Firstly, we applied a missing value imputation technique to calculate missing data points. Secondly, we combined a singular value decomposition based hierarchical clustering with the expression variability test to identify protein spots whose expression correlates with increased antibody production. The results have shown that while imputation of missing data was a useful method to improve the statistical analysis of such data sets, this was of limited use in differentiating between the samples investigated, and highlighted a small number of candidate proteins for further investigation. (c) 2006 Elsevier B.V. All rights reserved.
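The two steps the abstract describes — fill in missing spot volumes, then cluster samples in an SVD-reduced space — can be sketched as follows. This is an illustrative stand-in, not the paper's pipeline: the data are random, the imputation here is simple row-mean filling rather than the technique the authors applied, and the clustering parameters are arbitrary.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical spot-volume matrix: rows = protein spots, cols = cell lines.
# NaN marks spots not detected on a given gel.
X = rng.lognormal(mean=2.0, sigma=0.5, size=(50, 6))
X[rng.random(X.shape) < 0.1] = np.nan

# Step 1: missing-value imputation (row means stand in for the more
# sophisticated imputation used in the study).
row_means = np.nanmean(X, axis=1, keepdims=True)
X_imp = np.where(np.isnan(X), row_means, X)

# Step 2: SVD-based reduction, then hierarchical clustering of the
# samples in the space of the top two components.
Xc = X_imp - X_imp.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
sample_coords = (np.diag(s) @ Vt).T[:, :2]   # one row per sample

Z = linkage(sample_coords, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Spots whose loadings drive the separation between the resulting clusters would then be the candidates whose expression tracks production rate.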

Relevance:

30.00%

Publisher:

Abstract:

This field study was a combined chemical and biological investigation of the relative effects of using dispersants to treat oil spills impacting mangrove habitats. The aim of the chemistry was to determine whether dispersant affected the short- or long-term composition of a medium range crude oil (Gippsland) stranded in a tropical mangrove environment in Queensland, Australia. Sediment cores from three replicate plots of each treatment (oil only and oil plus dispersant) were analyzed for total hydrocarbons and for individual molecular markers (alkanes, aromatics, triterpanes, and steranes). Sediments were collected at 2 days, then 1, 7, 13 and 22 months post-spill. Over this time, oil in the six treated plots decreased exponentially from 36.6 +/- 16.5 to 1.2 +/- 0.8 mg/g dry wt. There was no statistical difference in initial oil concentrations, penetration of oil to depth, or in the rates of oil dissipation between oiled and dispersed-oil plots. At 13 months, alkanes were >50% degraded and aromatics were approximately 30% degraded, based upon ratios of labile to resistant markers. However, there was no change in the triterpane or sterane biomarker signatures of the retained oil. This is of general forensic interest for pollution events. The predominant removal processes were evaporation (less than or equal to 27%) and dissolution (greater than or equal to 56%), with a lag-phase of 1 month before the start of significant microbial degradation (less than or equal to 7%). The most resistant fraction of the oil that remained after 7 months (the higher molecular weight hydrocarbons) correlated with the initial total organic carbon content of the soil. Removal rate in the Queensland mangroves was significantly faster than that observed in the Caribbean and was related to tidal flushing. (C) 1999 Elsevier Science Ltd. All rights reserved.
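The "decreased exponentially" claim corresponds to a first-order decay fit. A minimal sketch of such a fit via log-linear least squares is below; only the end-point concentrations (36.6 and 1.2 mg/g dry wt) come from the abstract, and the intermediate values are made up to show the method.

```python
import numpy as np

# Illustrative first-order decay fit for stranded oil concentration.
t = np.array([0.07, 1, 7, 13, 22])            # months post-spill
C = np.array([36.6, 30.0, 12.0, 4.0, 1.2])    # mg/g dry wt (middle values hypothetical)

# Log-linear least squares: ln C = ln C0 - k t
slope, lnC0 = np.polyfit(t, np.log(C), 1)
k = -slope                                    # decay rate, per month
half_life = np.log(2) / k                     # months; here roughly 4-5
print(f"k = {k:.3f} /month, half-life = {half_life:.1f} months")
```

A plot of ln C against time being near-linear is the usual visual check that a single exponential is an adequate description.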

Relevance:

30.00%

Publisher:

Abstract:

Observations of an insect's movement lead to theory on the insect's flight behaviour and the role of movement in the species' population dynamics. This theory leads to predictions of the way the population changes in time under different conditions. If a hypothesis on movement predicts a specific change in the population, then the hypothesis can be tested against observations of population change. Routine pest monitoring of agricultural crops provides a convenient source of data for studying movement into a region and among fields within a region. Examples of the use of statistical and computational methods for testing hypotheses with such data are presented. The types of questions that can be addressed with these methods and the limitations of pest monitoring data when used for this purpose are discussed. (C) 2002 Elsevier Science B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Landscape metrics are widely applied in landscape ecology to quantify landscape structure. However, many are poorly tested and require rigorous validation if they are to serve as reliable indicators of habitat loss and fragmentation, such as Montreal Process Indicator 1.1e. We apply a landscape ecology theory, supported by exploratory and confirmatory statistical techniques, to empirically test landscape metrics for reporting Montreal Process Indicator 1.1e in continuous dry eucalypt forests of sub-tropical Queensland, Australia. Target biota examined included: the Yellow-bellied Glider (Petaurus australis); the diversity of nectar and sap feeding glider species including P. australis, the Sugar Glider P. breviceps, the Squirrel Glider P. norfolcensis, and the Feathertail Glider Acrobates pygmaeus; six diurnal forest bird species; total diurnal bird species diversity; and the density of nectar-feeding diurnal bird species. Two scales of influence were considered: the stand-scale (2 ha), and a series of radial landscape extents (500 m - 2 km; 78 - 1250 ha) surrounding each fauna transect. For all biota, stand-scale structural and compositional attributes were found to be more influential than landscape metrics. For the Yellow-bellied Glider, the proportion of trace habitats with a residual element of old spotted-gum/ironbark eucalypt trees was a significant landscape metric at the 2 km landscape extent. This is a measure of habitat loss rather than habitat fragmentation. For the diversity of nectar and sap feeding glider species, the proportion of trace habitats with a high coefficient of variation in patch size at the 750 m extent was a significant landscape metric. None of the landscape metrics tested was important for diurnal forest birds. We conclude that no single landscape metric adequately captures the response of the region's forest biota per se. This poses a major challenge to regional reporting of Montreal Process Indicator 1.1e, fragmentation of forest types.

Relevance:

30.00%

Publisher:

Abstract:

Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
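The ZNZ-patterned VECM machinery is beyond a short example, but the Granger-causality idea underlying it can be sketched with a plain bivariate regression: compare the residual sum of squares of an autoregression of y with and without lags of x. Everything below (the simulated system, one lag, the F statistic) is an illustrative simplification, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate system in which x Granger-causes y but not vice versa.
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def granger_F(y, x):
    """One-lag Granger test: does x[t-1] improve an AR(1) fit of y[t]?"""
    Y = y[1:]
    X_r = np.column_stack([np.ones(len(Y)), y[:-1]])          # restricted model
    X_u = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])  # with x lag added
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df = len(Y) - X_u.shape[1]
    return (rss_r - rss_u) / (rss_u / df)   # F statistic, (1, df) d.o.f.

F_xy = granger_F(y, x)   # should be large: x helps predict y
F_yx = granger_F(x, y)   # should be small: y does not help predict x
print(F_xy, F_yx)
```

In VECM terms, a zero entry in the relevant coefficient matrix corresponds to the restricted model here; imposing such zeros where they belong is what the ZNZ patterning formalises.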

Relevance:

30.00%

Publisher:

Abstract:

Objective: This paper compares four techniques used to assess change in neuropsychological test scores before and after coronary artery bypass graft surgery (CABG), and includes a rationale for the classification of a patient as overall impaired. Methods: A total of 55 patients were tested before and after surgery on the MicroCog neuropsychological test battery. A matched control group underwent the same testing regime to generate test–retest reliabilities and practice effects. Two techniques designed to assess statistical change were used: the Reliable Change Index (RCI), modified for practice, and the Standardised Regression-based (SRB) technique. These were compared against two fixed cutoff techniques (standard deviation and 20% change methods). Results: The incidence of decline across test scores varied markedly depending on which technique was used to describe change. The SRB method identified more patients as declined on most measures. In comparison, the two fixed cutoff techniques displayed relatively reduced sensitivity in the detection of change. Conclusions: Overall change in an individual can be described provided the investigators choose a rational cutoff based on the likely spread of scores due to chance. A cutoff of decline on ≥20% of the test scores used provided an acceptable probability given the number of tests commonly encountered. Investigators must also choose a test battery that minimises shared variance among test scores.
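The practice-adjusted RCI the abstract refers to has a standard Jacobson–Truax-style form: the observed change, minus the control group's practice effect, scaled by the standard error of the difference. A minimal sketch with illustrative (not study) numbers:

```python
import math

def rci_practice(pre, post, practice_effect, sd_baseline, test_retest_r):
    """Reliable Change Index adjusted for practice effects."""
    sem = sd_baseline * math.sqrt(1 - test_retest_r)   # standard error of measurement
    se_diff = math.sqrt(2) * sem                       # SE of a difference score
    return (post - pre - practice_effect) / se_diff

# Hypothetical patient: 100 pre-op, 92 post-op, on a subtest where controls
# show a +3 point practice effect, baseline SD 10, test-retest r = 0.80.
z = rci_practice(pre=100, post=92, practice_effect=3.0,
                 sd_baseline=10.0, test_retest_r=0.80)
print(round(z, 2))   # -1.74: below -1.645, so a reliable decline at one-tailed .05
```

The SRB alternative instead regresses retest scores on baseline scores in the control group and standardises the patient's residual, which also absorbs regression to the mean.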

Relevance:

30.00%

Publisher:

Abstract:

Testing for simultaneous vicariance across comparative phylogeographic data sets is a notoriously difficult problem hindered by mutational variance, the coalescent variance, and variability across pairs of sister taxa in parameters that affect genetic divergence. We simulate vicariance to characterize the behaviour of several commonly used summary statistics across a range of divergence times, and to characterize this behaviour in comparative phylogeographic datasets having multiple taxon-pairs. We found Tajima's D to be relatively uncorrelated with other summary statistics across divergence times, and using simple hypothesis testing of simultaneous vicariance given variable population sizes, we counter-intuitively found that the variance across taxon pairs in Nei and Li's net nucleotide divergence (pi(net)), a common measure of population divergence, is often inferior to using the variance in Tajima's D across taxon pairs as a test statistic to distinguish ancient simultaneous vicariance from variable vicariance histories. The opposite and more intuitive pattern is found for testing more recent simultaneous vicariance, and overall we found that depending on the timing of vicariance, one of these two test statistics can achieve high statistical power for rejecting simultaneous vicariance, given a reasonable number of intron loci (> 5 loci, 400 bp) and a range of conditions. These results suggest that components of these two composite summary statistics should be used in future simulation-based methods which can simultaneously use a pool of summary statistics to test the comparative phylogeographic hypotheses we consider here.
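For readers unfamiliar with the statistic at the centre of this result, Tajima's D can be computed from first principles (the standard Tajima 1989 constants). In the comparative setting above, D would be computed per taxon pair and the variance of D across pairs used as the test statistic; the toy alignment below is purely illustrative.

```python
import math

def tajimas_d(seqs):
    """Tajima's D for a list of equal-length aligned sequences."""
    n, L = len(seqs), len(seqs[0])
    S = sum(1 for j in range(L) if len({s[j] for s in seqs}) > 1)  # segregating sites
    diffs = sum(sum(a != b for a, b in zip(seqs[i], seqs[k]))
                for i in range(n) for k in range(i + 1, n))
    pi = diffs / (n * (n - 1) / 2)                # mean pairwise differences
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n * n + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Toy 4-sequence alignment; real applications would use the intron loci
# described in the abstract.
D = tajimas_d(["AAAAAAAA", "AAAAAAAT", "AAAAAATT", "AAAAATTT"])
print(round(D, 3))
```

Because D contrasts pairwise diversity with the number of segregating sites, it carries information about the shape of the genealogy that pi(net) alone does not, which is what makes its cross-pair variance useful here.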

Relevance:

20.00%

Publisher:

Abstract:

Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates delays in parameter setting when critical input is sparse, and gradual setting of parameters. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, children sometimes will mis-set parameters. However, when that happens, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We will point to the kind of child language data that is needed in order to adjudicate among these competing models.
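The gradualness the Variational model predicts falls out of its reward-based learning scheme. The sketch below is a toy version of that idea for a single binary parameter: the learner keeps a weight on the target value and rewards whichever grammar parses the current input. The learning rate, evidence rate, and parse behaviour are illustrative assumptions, not taken from the cited works.

```python
import random

random.seed(0)

gamma = 0.02          # learning rate (illustrative)
evidence_rate = 0.2   # proportion of input that only the target value V1 can parse

p = 0.5               # weight on V1; starts uncommitted
for _ in range(5000):
    use_v1 = random.random() < p              # probabilistically pick a grammar
    critical = random.random() < evidence_rate
    if use_v1 or not critical:                # chosen grammar parses the input
        if use_v1:
            p += gamma * (1 - p)              # reward V1
        else:
            p *= 1 - gamma                    # ambiguous input rewarded V2
    # on a parse failure (V2 chosen on critical input) p is left unchanged
print(round(p, 3))
```

Because only critical input tips the balance, sparser critical input (a lower `evidence_rate`) slows convergence, which is exactly the delay the model anticipates.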

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

This paper reports the application of linearly increasing stress testing (LIST) to the study of stress corrosion cracking (SCC) of carbon steel in 4 N NaNO3 and in Bayer liquor. LIST is similar to the constant extension-rate testing (CERT) methodology with the essential difference that the LIST is load controlled whereas the CERT is displacement controlled. The main conclusion is that LIST is suitable for the study of the SCC of carbon steels in 4 N NaNO3 and in Bayer liquor. The low crack velocity in Bayer liquor and a measured maximum stress close to that of the reference specimen in air both indicate that a low applied stress rate is required to study SCC in this system. (C) 1998 Chapman & Hall.

Relevance:

20.00%

Publisher:

Abstract:

To simulate cropping systems, crop models must not only give reliable predictions of yield across a wide range of environmental conditions, they must also quantify water and nutrient use well, so that the status of the soil at maturity is a good representation of the starting conditions for the next cropping sequence. To assess their suitability for this task, a range of crop models currently used in Australia was tested. The models differed in their design objectives, complexity and structure. They were (i) tested on diverse, independent data sets from a wide range of environments, and (ii) their components were further evaluated with one detailed data set from a semi-arid environment. All models were coded into the cropping systems shell APSIM, which provides a common soil water and nitrogen balance. Crop development was input, thus differences between simulations were caused entirely by differences in simulating crop growth. Under nitrogen non-limiting conditions between 73 and 85% of the observed kernel yield variation across environments was explained by the models. This ranged from 51 to 77% under varying nitrogen supply. Water and nitrogen effects on leaf area index were predicted poorly by all models, resulting in erroneous predictions of dry matter accumulation and water use. When measured light interception was used as input, most models improved in their prediction of dry matter and yield. This test highlighted a range of compensating errors in all modelling approaches. Time course and final amount of water extraction were simulated well by two models, while others left up to 25% of potentially available soil water in the profile. Kernel nitrogen percentage was predicted poorly by all models due to its sensitivity to small dry matter changes. Yield and dry matter could be estimated adequately for a range of environmental conditions using the general concepts of radiation use efficiency and transpiration efficiency. However, leaf area and kernel nitrogen dynamics need to be improved to achieve better estimates of water and nitrogen use if such models are to be used to evaluate cropping systems. (C) 1998 Elsevier Science B.V.
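The radiation-use-efficiency concept the abstract credits is simple to state: daily dry matter gain equals RUE times intercepted radiation, with interception following Beer's law on leaf area index. A minimal sketch with illustrative parameter values (none of these numbers are from the study, and the fixed leaf-area expansion is a deliberate crudity):

```python
import math

RUE = 1.4        # g dry matter per MJ intercepted (illustrative)
k = 0.45         # canopy extinction coefficient (illustrative)
solar = 20.0     # incident radiation, MJ/m2/day (held constant for simplicity)

biomass = 0.0    # g/m2
lai = 0.5        # leaf area index at start
for day in range(100):
    fint = 1 - math.exp(-k * lai)       # fraction of radiation intercepted
    biomass += RUE * fint * solar       # daily dry matter gain
    lai = min(lai + 0.05, 5.0)          # crude fixed leaf-area expansion
print(round(biomass, 1), "g/m2")
```

The abstract's central criticism maps directly onto this sketch: if the model gets `lai` wrong, every downstream quantity (interception, biomass, water use) inherits the error, which is why supplying measured light interception improved most models.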

Relevance:

20.00%

Publisher:

Abstract:

Over half a million heroin misusers receive oral methadone maintenance treatment worldwide [1], but the maintenance prescription of injectable opioid drugs, like heroin, remains controversial. In 1992 Switzerland began a large scale evaluation of heroin and other injectable opiate prescribing that eventually involved 1035 misusers [2, 3]. The results of the evaluation have recently been reported [4]. These show that it was feasible to provide heroin by intravenous injection at a clinic, up to three times a day, for seven days a week. This was done while maintaining good drug control, good order, client safety, and staff morale. Patients were stabilised on 500 to 600 mg heroin daily without evidence of increasing tolerance. Retention in treatment was 89% at six months and 69% at 18 months [4]. The self-reported use of non-prescribed heroin fell significantly, but other drug use was minimally affected. The death rate was 1% per year, and there were no deaths from overdose among participants . . .

Relevance:

20.00%

Publisher:

Abstract:

Previous work on generating state machines for the purpose of class testing has not been formally based. There has also been work on deriving state machines from formal specifications for testing non-object-oriented software. We build on this work by presenting a method for deriving a state machine for testing purposes from a formal specification of the class under test. We also show how the resulting state machine can be used as the basis for a test suite developed and executed using an existing framework for class testing. To derive the state machine, we identify the states and possible interactions of the operations of the class under test. The Test Template Framework is used to formally derive the states from the Object-Z specification of the class under test. The transitions of the finite state machine are calculated from the derived states and the class's operations. The formally derived finite state machine is transformed to a ClassBench testgraph, which is used as input to the ClassBench framework to test a C++ implementation of the class. The method is illustrated using a simple bounded queue example.
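The bounded-queue example the abstract closes with can be sketched concretely: abstract states (empty / partly full / full), a transition table for the model FSM, and a walk over that table checking the implementation at each step. This is an illustrative Python stand-in for the Object-Z/ClassBench tooling described, with the states and walk chosen by hand rather than derived via the Test Template Framework.

```python
class BoundedQueue:
    """Class under test: a FIFO queue with a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
    def enqueue(self, x):
        assert len(self.items) < self.capacity, "enqueue on full queue"
        self.items.append(x)
    def dequeue(self):
        assert self.items, "dequeue on empty queue"
        return self.items.pop(0)

def state_of(q):
    """Abstraction function: map the concrete queue to an FSM state."""
    if not q.items:
        return "EMPTY"
    return "FULL" if len(q.items) == q.capacity else "PARTIAL"

# Expected transitions of the model FSM for capacity 2.
# (state, operation) -> next state; missing keys are illegal operations.
fsm = {
    ("EMPTY", "enqueue"): "PARTIAL",
    ("PARTIAL", "enqueue"): "FULL",
    ("FULL", "dequeue"): "PARTIAL",
    ("PARTIAL", "dequeue"): "EMPTY",
}

# A testgraph-style walk: apply each operation and check that the
# implementation lands in the state the model predicts.
q = BoundedQueue(2)
visited = [state_of(q)]
for op in ["enqueue", "enqueue", "dequeue", "dequeue"]:
    expected = fsm[(state_of(q), op)]
    q.enqueue(1) if op == "enqueue" else q.dequeue()
    assert state_of(q) == expected, f"{op}: got {state_of(q)}, want {expected}"
    visited.append(state_of(q))
print(visited)
```

In the paper's method the states and transitions are calculated from the Object-Z specification rather than written by hand, which is what gives the resulting test suite its formal grounding.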