27 results for Accuracy and precision
in University of Queensland eSpace - Australia
Abstract:
The results of two experiments are reported that examined how performance in a simple interceptive action (hitting a moving target) was influenced by the speed of the target, the size of the intercepting effector and the distance moved to make the interception. In Experiment 1, target speed and the width of the intercepting manipulandum (bat) were varied. The hypothesis that people make briefer movements when the temporal accuracy and precision demands of the task are high predicts that bat width and target speed will divisively interact in their effect on movement time (MT) and that shorter MTs will be associated with a smaller temporal variable error (VE). An alternative hypothesis, that people initiate movement when the rate of expansion (ROE) of the target's image reaches a specific, fixed criterion value, predicts that bat width will have no effect on MT. The results supported the first hypothesis: a statistically reliable interaction of the predicted form was obtained and the temporal VE was smaller for briefer movements. In Experiment 2, distance to move and target speed were varied. MT increased in direct proportion to distance and there was a divisive interaction between distance and speed; as in Experiment 1, temporal VE was smaller for briefer movements. The pattern of results could not be explained by the strategy of initiating movement at a fixed value of the ROE or at a fixed value of any other perceptual variable potentially available for initiating movement. It is argued that the results support pre-programming of MT, with movement initiated when the target's time to arrival at the interception location reaches a criterion value that is matched to the pre-programmed MT. The data supported completely open-loop control when MT was shorter than approximately 200-240 ms, with corrective sub-movements becoming increasingly frequent for movements of longer duration.
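The proposed initiation rule lends itself to a one-line calculation. The sketch below is not code from the paper; the function name and the constant-speed assumption are illustrative only. It shows when a movement of pre-programmed duration must be launched so that it ends exactly as the target arrives:

```python
# Minimal sketch of the proposed rule: initiate movement when the target's
# time-to-arrival equals the pre-programmed movement time (MT).
def movement_onset_time(target_distance_m, target_speed_mps, programmed_mt_s):
    """Time (s, from target appearance) at which a movement lasting
    programmed_mt_s must begin to finish exactly at target arrival.
    Assumes the target approaches the interception point at constant speed."""
    time_to_arrival = target_distance_m / target_speed_mps
    return time_to_arrival - programmed_mt_s

# Example: target 1.2 m away moving at 2 m/s, pre-programmed MT of 0.2 s:
# onset 0.4 s after appearance; a faster target moves the onset earlier.
print(movement_onset_time(1.2, 2.0, 0.2))  # 0.4
```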
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
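As a rough illustration of how Monte Carlo resampling yields a rejection threshold for assignment tests, the sketch below simulates multilocus genotypes from a population's allele frequencies and takes a lower quantile of their log-likelihoods as the critical value. This is a simplification assuming Hardy-Weinberg proportions and independent loci; the paper's novel resampler additionally preserves the linkage disequilibrium contributed by recent immigrants, which this sketch does not.

```python
# Simplified sketch of Monte Carlo thresholding for assignment tests.
import numpy as np

rng = np.random.default_rng(0)

def genotype_loglik(genotype, freqs):
    """Log-likelihood of a multilocus genotype given per-locus allele
    frequencies (genotype: list of (allele1, allele2) index pairs)."""
    ll = 0.0
    for (a1, a2), f in zip(genotype, freqs):
        p = f[a1] * f[a2]
        if a1 != a2:
            p *= 2.0  # heterozygote has two orderings
        ll += np.log(p)
    return ll

def mc_threshold(freqs, n_sim=10_000, alpha=0.01):
    """alpha-quantile of log-likelihoods of genotypes simulated from the
    resident allele frequencies; observed individuals scoring below this
    critical value are flagged as putative F0 immigrants."""
    sims = np.empty(n_sim)
    for i in range(n_sim):
        g = [tuple(rng.choice(len(f), size=2, p=f)) for f in freqs]
        sims[i] = genotype_loglik(g, freqs)
    return np.quantile(sims, alpha)

# e.g. two loci with two and three alleles:
freqs = [np.array([0.7, 0.3]), np.array([0.5, 0.3, 0.2])]
print(mc_threshold(freqs, n_sim=2000))
```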
Abstract:
In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year (1990-2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates, compared to either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other.
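A minimal sketch of the hybrid idea (the equal 50/50 weighting and the function names are assumptions for illustration, not the paper's specification): combine the two intrinsic value estimates and score each against the observed price by percentage pricing error.

```python
# Illustrative sketch: hybrid of direct-valuation and industry-multiplier
# estimates, scored by percentage pricing error against the observed price.
def pct_pricing_error(value_estimate, price):
    return abs(value_estimate - price) / price

def hybrid_value(direct_value, multiplier_value, w=0.5):
    """Simple convex combination of the two value estimates."""
    return w * direct_value + (1 - w) * multiplier_value

# e.g. direct model says 42, multiplier model says 50, market price is 45:
v = hybrid_value(42.0, 50.0)
print(v, pct_pricing_error(v, 45.0))  # 46.0, ~0.022
```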
Abstract:
An assay using high performance liquid chromatography (HPLC)-electrospray ionization-tandem mass spectrometry (ESI-MS-MS) was developed for simultaneously determining concentrations of morphine, oxycodone, morphine-3-glucuronide, and noroxycodone in 50 µL samples of rat serum. Deuterated (d3) analogues of each compound were used as internal standards. Samples were treated with acetonitrile to precipitate plasma proteins; acetonitrile was removed from the supernatant by centrifugal evaporation before analysis. Limits of quantitation (ng/ml) and their between-day accuracy and precision (%deviation and %CV) were: morphine, 3.8 (4.3% and 7.6%); morphine-3-glucuronide, 5.0 (4.5% and 2.9%); oxycodone, 4.5 (0.4% and 9.3%); noroxycodone, 5.0 (8.5% and 4.6%).
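The reported figures follow the conventional definitions of between-day accuracy and precision. A small sketch (the replicate values are invented for illustration):

```python
# Conventional accuracy (%deviation of the mean from nominal) and
# precision (%CV of replicates) at a quality-control concentration.
import statistics

def accuracy_precision(measured, nominal):
    mean = statistics.mean(measured)
    pct_dev = 100.0 * abs(mean - nominal) / nominal  # accuracy
    pct_cv = 100.0 * statistics.stdev(measured) / mean  # precision
    return pct_dev, pct_cv

# e.g. five between-day replicates at a nominal 3.8 ng/ml LOQ for morphine:
print(accuracy_precision([3.9, 4.1, 3.6, 4.2, 4.0], 3.8))
```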
Abstract:
QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining MLEs via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles and a common within-sire error variance is assumed. Compared to single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
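To make the EM machinery concrete, here is a heavily simplified sketch for a single half-sib family: progeny phenotypes are modelled as a two-component normal mixture (according to which paternal QTL allele was inherited) with a common error variance. The real analysis also incorporates marker genotypes and pools multiple traits and families; none of that is shown here.

```python
# EM for a two-component normal mixture with a shared variance; the E-step
# computes posterior allele probabilities, the M-step reweighted estimates.
import numpy as np

def em_two_normal(y, n_iter=200):
    """Returns (mu1, mu2, sigma2, mixing_proportion)."""
    y = np.asarray(y, dtype=float)
    mu1, mu2 = y.min(), y.max()  # crude but separating initialisation
    sigma2, pi = y.var(), 0.5
    for _ in range(n_iter):
        # E-step: posterior probability each record carries allele 1
        # (normalising constants cancel because the variance is shared)
        d1 = pi * np.exp(-(y - mu1) ** 2 / (2 * sigma2))
        d2 = (1 - pi) * np.exp(-(y - mu2) ** 2 / (2 * sigma2))
        w = d1 / (d1 + d2)
        # M-step: weighted means, pooled variance, mixing proportion
        mu1 = (w * y).sum() / w.sum()
        mu2 = ((1 - w) * y).sum() / (1 - w).sum()
        sigma2 = (w * (y - mu1) ** 2 + (1 - w) * (y - mu2) ** 2).mean()
        pi = w.mean()
    return mu1, mu2, sigma2, pi

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(10, 2, 40), rng.normal(14, 2, 40)])
print(em_two_normal(y))
```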
Abstract:
Many different methods of reporting animal diets have been used in ecological research. These vary greatly in level of accuracy and precision and therefore complicate attempts to measure and compare diets, and quantities of nutrients in those diets, across a wide range of taxa. For most birds, the carotenoid content of the diet has not been directly measured. Here, therefore, I use an avian example to show how different methods of measuring the quantities of various foods in the diet affect the relative rankings of higher taxa (families, subfamilies, and tribes), and species within these taxa, with regard to the carotenoid contents of their diets. This is a timely example, as much recent avian literature has focused on the way dietary carotenoids may be traded off among aspects of survival, fitness and signalling. I assessed the mean dietary carotenoid contents of representatives of thirty higher taxa of birds using four different carotenoid intake indices varying in precision, including trophic levels, a coarse-scale and a fine-scale categorical index, and quantitative estimates of dietary carotenoids. This last method was used as the benchmark. For comparisons among taxa, all but the trophic level index were significantly correlated with each other. However, for comparisons of species within taxa, the fine-scale index outperformed the coarse-scale index, which in turn outperformed the trophic level index. In addition, each method has advantages and disadvantages, as well as underlying assumptions that must be considered. Examining and comparing several possible methods of diet assessment highlights these issues so that the best possible index can be chosen given the available data, and it is recommended that such a step be taken prior to the inclusion of estimated nutrient intake in any statistical analysis. Although applied to avian carotenoids here, this method could readily be applied to other taxa and types of nutrients.
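The comparison step amounts to rank-correlating each cheaper index against the quantitative benchmark. A sketch (all values invented; scipy is assumed available):

```python
# Rank-correlate each categorical diet index against the quantitative
# benchmark; higher Spearman rho means the cheap index preserves rankings.
from scipy.stats import spearmanr

benchmark = [12.0, 3.5, 8.1, 0.9, 15.2]  # quantitative carotenoid estimates
fine_scale = [5, 2, 4, 1, 5]             # fine-scale categorical index
coarse_scale = [3, 1, 2, 1, 3]           # coarse-scale categorical index

for name, idx in [("fine", fine_scale), ("coarse", coarse_scale)]:
    rho, p = spearmanr(benchmark, idx)
    print(name, round(rho, 2), round(p, 3))
```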
Abstract:
A direct quadrupole ICP-MS technique has been developed for the analysis of the rare earth elements and yttrium in natural waters. The method has been validated by comparison of the results obtained for the river water reference material SLRS-4 with literature values. The detection limit of the technique was investigated by analysis of serial dilutions of SLRS-4 and revealed that single elements can be quantified at single-digit fg/g concentrations. A coherent normalised rare earth pattern was retained at concentrations two orders of magnitude below natural concentrations for SLRS-4, demonstrating the excellent inter-element accuracy and precision of the method. The technique was applied to the analysis of a diluted mid-salinity estuarine sample, which also displayed a coherent normalised rare earth element pattern, yielding the expected distinctive marine characteristics.
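A small sketch of the serial-dilution check (all numbers invented): if the dilution-corrected concentration stays essentially constant down the series, the element is still being quantified reliably at that dilution.

```python
# Back-calculate the undiluted concentration from each dilution; values
# that agree within a few percent indicate reliable quantification.
dilution_factors = [1, 10, 100, 1000]
measured_la = [287.0, 28.5, 2.91, 0.272]  # e.g. a REE in SLRS-4 dilutions

for d, m in zip(dilution_factors, measured_la):
    print(f"1:{d:<5} back-calculated concentration = {m * d:.0f}")
```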
Abstract:
Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, the information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores. Conclusions: Detailed documentation of all survey procedures with standardized protocols, training, and quality control are steps towards optimizing data quality. Quantifying data quality is a further step. Methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
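The two sensitivity checks can be sketched in a few lines (population names, scores and estimates are invented): exclude populations below a quality cut-off, or weight population estimates by their quality scores.

```python
# Two sensitivity analyses on population-level estimates: exclusion by a
# quality cut-off, and a quality-score-weighted mean across populations.
estimates = {"popA": 24.1, "popB": 19.7, "popC": 28.3}  # e.g. risk factor %
quality = {"popA": 0.9, "popB": 0.4, "popC": 0.8}       # scores in [0, 1]

# 1) exclusion: drop populations below the quality cut-off
kept = {p: e for p, e in estimates.items() if quality[p] >= 0.5}
print(sum(kept.values()) / len(kept))

# 2) weighting: quality-weighted mean across all populations
wsum = sum(quality[p] * estimates[p] for p in estimates)
print(wsum / sum(quality.values()))
```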