896 results for Software testing. Test generation. Grammars
Abstract:
An eddy current testing system consists of a multi-sensor probe, a computer, and a special expansion card with software for data collection and analysis. The probe incorporates an excitation coil and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method relies on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests can be classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of the Siemens benchmark programs.
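The abstract above ranks functions by how often they occur in failing versus successful executions. As a rough illustration only, the sketch below ranks functions with a Tarantula-style suspiciousness score over flat sets of executed functions — a simplification standing in for the frequent-subtree statistics the paper actually mines; all function names and data here are hypothetical.

```python
from collections import Counter

def rank_functions(passing_runs, failing_runs):
    """Rank functions by suspiciousness.

    passing_runs / failing_runs: lists of sets of function names seen
    in each recorded execution (a flat stand-in for call trees).
    """
    pass_freq, fail_freq = Counter(), Counter()
    for run in passing_runs:
        pass_freq.update(run)
    for run in failing_runs:
        fail_freq.update(run)

    def suspiciousness(func):
        fail_rate = fail_freq[func] / max(len(failing_runs), 1)
        pass_rate = pass_freq[func] / max(len(passing_runs), 1)
        total = fail_rate + pass_rate
        # Fraction of the evidence for this function that comes from failures.
        return fail_rate / total if total else 0.0

    funcs = set(pass_freq) | set(fail_freq)
    return sorted(funcs, key=suspiciousness, reverse=True)

passing = [{"main", "parse", "fmt"}, {"main", "fmt"}]
failing = [{"main", "parse", "eval"}, {"main", "eval"}]
print(rank_functions(passing, failing))  # "eval" (failing runs only) ranks first
```

The returned order is the order in which a developer would examine functions during fault analysis, as the abstract describes.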
Abstract:
While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools to detect the presence of heterogeneity have not been specifically developed for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for un-truncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted for the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
Abstract:
Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time consuming. A separate species occurrence data pre-processing phase enables the experimenter to control test AUC score variance due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal, and synonymity filters. After species data filtering, the final task of the pre-processing phase should be the automatic generation of species occurrence datasets that can then be directly 'plugged in' to the ENM. A software application capable of carrying out all these tasks would be a valuable time-saver, particularly for large-scale biodiversity studies.
Abstract:
The Organisation for Economic Co-operation and Development (OECD) Terrestrial plant test is often used for the ecological risk assessment of contaminated land. However, its origins in plant protection product testing mean that the species recommended in the OECD guidelines are unlikely to occur on contaminated land. Six alternative species were tested on contaminated soils from a former Zn smelter and a metal fragmentizer with elevated concentrations of Cd, Cu, Pb, and Zn. The response of the alternative species was compared to two species recommended by the OECD: Lolium perenne (perennial ryegrass) and Trifolium pratense (red clover). Urtica dioica (stinging nettle) and Poa annua (annual meadow-grass) had low emergence rates in the control soil, so may be considered unsuitable. Festuca rubra (Chewing's fescue), Holcus lanatus (Yorkshire fog), Senecio vulgaris (common groundsel), and Verbascum thapsus (great mullein) offer good alternatives to the OECD species. In particular, H. lanatus and S. vulgaris were more sensitive than the OECD species to the soils with moderate concentrations of Cd, Cu, Pb, and Zn.
Abstract:
In this paper we review the experimental development of agri-environment measures for use on grasslands. Sward structure has been shown to have a strong influence on birds' ability to forage in grasslands, but the effects of food abundance on foraging behaviour are poorly understood and this hinders development of grassland conservation measures. The experiments described have a dual purpose: to investigate the foraging ecology of birds on grasslands and to test candidate management measures. Most of the work featured focuses on increasing invertebrate food resources during the summer by increasing habitat heterogeneity. We also identify important gaps in the habitats provided by existing or experimental measures, where similar dual-purpose experiments are required.
Abstract:
Conventional seemingly unrelated estimation of the almost ideal demand system is shown to lead to small sample bias and distortions in the size of a Wald test for symmetry and homogeneity when the data are co-integrated. A fully modified estimator is developed in an attempt to remedy these problems. It is shown that this estimator reduces the small sample bias but fails to eliminate the size distortion. Bootstrapping is shown to be ineffective as a method of removing small sample bias in both the conventional and fully modified estimators. Bootstrapping is effective, however, as a method of removing size distortion and performs equally well in this respect with both estimators.
Abstract:
The conventional method for assessing acute oral toxicity (OECD Test Guideline 401) was designed to identify the median lethal dose (LD50), using the death of animals as an endpoint. Introduced as an alternative method (OECD Test Guideline 420), the Fixed Dose Procedure (FDP) relies on the observation of clear signs of toxicity, uses fewer animals and causes less suffering. More recently, the Acute Toxic Class method and the Up-and-Down Procedure have also been adopted as OECD test guidelines. Both of these methods also use fewer animals than the conventional method, although they still use death as an endpoint. Each of the three new methods incorporates a sequential dosing procedure, which results in increased efficiency. In 1999, with a view to replacing OECD Test Guideline 401, the OECD requested that the three new test guidelines be updated. This was to bring them in line with the regulatory needs of all OECD Member Countries, provide further reductions in the number of animals used, and introduce refinements to reduce the pain and distress experienced by the animals. This paper describes a statistical modelling approach for the evaluation of acute oral toxicity tests, by using the revised FDP for illustration. Opportunities for further design improvements are discussed.
Abstract:
The conventional method for the assessment of acute dermal toxicity (OECD Test Guideline 402, 1987) uses death of animals as an endpoint to identify the median lethal dose (LD50). A new OECD Testing Guideline called the dermal fixed dose procedure (dermal FDP) is being prepared to provide an alternative to Test Guideline 402. In contrast to Test Guideline 402, the dermal FDP does not provide a point estimate of the LD50, but aims to identify that dose of the substance under investigation that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonised System of Classification and Labelling scheme (GHS). The dermal FDP has been validated using statistical modelling rather than by in vivo testing. The statistical modelling approach enables calculation of the probability of each GHS classification and the expected numbers of deaths and animals used in the test for imaginary substances with a range of LD50 values and dose-response curve slopes. This paper describes the dermal FDP and reports the results from the statistical evaluation. It is shown that the procedure will be completed with considerably less death and suffering than guideline 402, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LD50 value.
Abstract:
This article shows that not all statistical software packages correctly calculate the p-value for the classical F test comparing two independent Normal variances. This is illustrated with a simple example, and the reasons are discussed. Eight different software packages are considered.
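As a concrete illustration of the pitfall the article investigates: a two-sided p-value for the variance-ratio F test must account for both tails of the F distribution, and reporting only one tail is a typical software error. A minimal sketch using SciPy's F distribution (the sample values are hypothetical):

```python
import statistics
from scipy import stats

def f_test_two_variances(x, y):
    """Two-sided F test for equality of two Normal variances.

    Returns (F, p). A common mistake is to report only a one-tailed
    probability; the two-sided p-value doubles the smaller tail
    (capped at 1).
    """
    f = statistics.variance(x) / statistics.variance(y)  # sample variances (ddof=1)
    df1, df2 = len(x) - 1, len(y) - 1
    one_tail = min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
    return f, min(2.0 * one_tail, 1.0)

x = [1.1, 2.3, 1.9, 2.8, 2.2, 1.5]
y = [1.00, 1.20, 0.90, 1.10, 1.05, 0.95]
f_stat, p_value = f_test_two_variances(x, y)
print(f"F = {f_stat:.3f}, two-sided p = {p_value:.4f}")
```

A handy sanity check: the two-sided p-value is invariant to which sample is placed in the numerator.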
Abstract:
Assaying a large number of genetic markers from patients in clinical trials is now possible in order to tailor drugs with respect to efficacy. The statistical methodology for analysing such massive data sets is challenging. The most popular type of statistical analysis is to use a univariate test for each genetic marker, once all the data from a clinical study have been collected. This paper presents a sequential method for conducting an omnibus test for detecting gene-drug interactions across the genome, thus allowing informed decisions at the earliest opportunity and overcoming the multiple testing problems from conducting many univariate tests. We first propose an omnibus test for a fixed sample size. This test is based on combining F-statistics that test for an interaction between treatment and the individual single nucleotide polymorphism (SNP). As SNPs tend to be correlated, we use permutations to calculate a global p-value. We extend our omnibus test to the sequential case. In order to control the type I error rate, we propose a sequential method that uses permutations to obtain the stopping boundaries. The results of a simulation study show that the sequential permutation method is more powerful than alternative sequential methods that control the type I error rate, such as the inverse-normal method. The proposed method is flexible as we do not need to assume a mode of inheritance and can also adjust for confounding factors. An application to real clinical data illustrates that the method is computationally feasible for a large number of SNPs.
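To make the permutation idea concrete: a single global p-value can be obtained from the maximum per-marker statistic, re-shuffling treatment labels so that the correlation between markers is preserved. The sketch below is a simplified stand-in, not the paper's method — a two-group mean-difference statistic replaces the interaction F-statistics, and all data are hypothetical.

```python
import random

def global_permutation_p(stat_fn, treatment, markers, n_perm=1000, seed=0):
    """Global p-value for the maximum statistic across correlated markers.

    stat_fn(labels, column) -> float; treatment is a list of 0/1 labels;
    markers is a list of per-marker columns aligned with treatment.
    Permuting labels preserves the correlation structure among markers,
    so no extra multiplicity correction is needed.
    """
    rng = random.Random(seed)
    observed = max(stat_fn(treatment, col) for col in markers)
    exceed = 0
    for _ in range(n_perm):
        perm = treatment[:]
        rng.shuffle(perm)
        if max(stat_fn(perm, col) for col in markers) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one correction keeps p > 0

def mean_diff(labels, col):
    """Absolute mean difference between the two label groups."""
    g1 = [v for l, v in zip(labels, col) if l == 1]
    g0 = [v for l, v in zip(labels, col) if l == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

treatment = [0] * 10 + [1] * 10
signal = [0.0] * 10 + [5.0] * 10   # marker strongly tied to treatment
noise = [0.0, 1.0] * 10            # marker unrelated to treatment
print(global_permutation_p(mean_diff, treatment, [signal, noise]))
```

With one strongly associated marker, the global p-value is small; with only the unrelated marker, it is not.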
Abstract:
Amid the flurry of grant writing and experimentation, statistical analysis sometimes gets less attention than it requires. Here, we describe in full the considerations that should go into the use of the statistical two-sample t test.
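A minimal illustration of one such consideration, the equal-variance assumption: SciPy's `ttest_ind` defaults to the pooled (Student) test, while `equal_var=False` gives Welch's test, which is the safer choice when the two group variances may differ. The sample values are hypothetical.

```python
from scipy import stats

# Two hypothetical samples; if the group variances may differ,
# Welch's version (equal_var=False) is the more robust choice.
a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
b = [4.5, 4.7, 4.4, 4.6, 4.8, 4.3]

t_pooled, p_pooled = stats.ttest_ind(a, b)                 # pooled-variance Student t
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)  # Welch's t test
print(f"pooled: t = {t_pooled:.3f}, p = {p_pooled:.4f}")
print(f"Welch:  t = {t_welch:.3f}, p = {p_welch:.4f}")
```

With equal sample sizes the two t statistics coincide; the tests differ in their degrees of freedom, and hence in the p-value, when the variances are unequal.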
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. 
However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
Abstract:
Groundwater is an important resource in the UK, with 45% of public water supplies in the Thames Water region derived from subterranean sources. In urban areas, groundwater has been affected by anthropogenic activities over a long period of time and from a multitude of sources. At present, groundwater quality is assessed using a range of chemical species to determine the extent of contamination. However, analysing a complex mixture of chemicals is time-consuming and expensive, whereas the use of an ecotoxicity test provides information on (a) the degree of pollution present in the groundwater and (b) the potential effect of that pollution. Microtox (TM), Eclox (TM) and Daphnia magna microtests were used in conjunction with standard chemical protocols to assess the contamination of groundwaters from sites throughout the London Borough of Hounslow and nearby Heathrow Airport. Because of their precision, range of responses and ease of use, the Daphnia magna and Microtox (TM) tests are the bioassays that appear to be most effective for assessing groundwater toxicity. However, neither test is ideal, because it is also essential to monitor water hardness. Eclox (TM) does not appear to be suitable for use in groundwater-quality assessment in this area, because it is adversely affected by high total dissolved solids and electrical conductivity.
Abstract:
This paper summarizes the design, manufacturing, testing, and finite element analysis (FEA) of glass-fibre-reinforced polyester leaf springs for rail freight vehicles. FEA predictions of load-deflection curves under static loading are presented, together with comparisons with test results. Bending stress distribution at typical load conditions is plotted for the springs. The springs have been mounted on a real wagon and drop tests at tare and full load have been carried out on a purpose-built shaker rig. The transient response of the springs from tests and FEA is presented and discussed.