961 results for Precision-recall analysis
Abstract:
The duration of movements made to intercept moving targets decreases and movement speed increases when interception requires greater temporal precision. Changes in target size and target speed can have the same effect on required temporal precision, but the response to these changes differs: changes in target speed elicit larger changes in response speed. A possible explanation is that people attempt to strike the target in a central zone that does not vary much with variation in physical target size: the effective size of the target is relatively constant over changes in physical size. Three experiments are reported that test this idea. Participants performed two tasks: (1) strike a moving target with a bat moved perpendicular to the path of the target; (2) press on a force transducer when the target was in a location where it could be struck by the bat. Target speed was varied and target size held constant in experiment 1. Target speed and size were co-varied in experiment 2, keeping the required temporal precision constant. Target size was varied and target speed held constant in experiment 3 to give the same temporal precision as experiment 1. Duration of hitting movements decreased and maximum movement speed increased with increases in target speed and/or temporal precision requirements in all experiments. The effects were largest in experiment 1 and smallest in experiment 3. Analysis of a measure of effective target size (standard deviation of strike locations on the target) failed to support the hypothesis that performance differences could be explained in terms of effective size rather than actual physical size. In the pressing task, participants produced greater peak forces and shorter force pulses when the temporal precision required was greater, showing that the response to increasing temporal precision generalizes to different responses. It is concluded that target size and target speed have independent effects on performance.
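One way to make the "required temporal precision" concrete (an assumed formalisation, not stated in the abstract): the time window within which the bat can contact the target is roughly

```latex
\Delta t \;=\; \frac{w_{\text{target}} + w_{\text{bat}}}{v},
```

where $w_{\text{target}}$ and $w_{\text{bat}}$ are the extents of target and bat along the target's path and $v$ is the target speed; co-varying size and speed, as in experiment 2, holds $\Delta t$ constant while both factors change.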
Abstract:
QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles and a common within-sire error variance is assumed. Compared to single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
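The flavour of the EM machinery can be conveyed with a stripped-down sketch: within each half-sib family, progeny phenotypes form a two-component normal mixture (which sire gamete was inherited, Mendelian weight 1/2), with the error variance pooled across families. Marker data and multiple traits are omitted, so this illustrates the E- and M-steps only, not the paper's formulae.

```python
import numpy as np
from scipy.stats import norm

def em_halfsib(families, n_iter=200):
    """families: dict mapping sire id -> 1-D array of progeny phenotypes."""
    mu1 = {f: y.mean() - y.std() for f, y in families.items()}
    mu2 = {f: y.mean() + y.std() for f, y in families.items()}
    var = np.mean([y.var() for y in families.values()])
    for _ in range(n_iter):
        sse, n = 0.0, 0
        for f, y in families.items():
            sd = np.sqrt(var)
            # E-step: posterior probability each progeny got the "high" gamete
            p2 = norm.pdf(y, mu2[f], sd)
            p = p2 / (norm.pdf(y, mu1[f], sd) + p2)
            # M-step: family-specific component means; variance pooled below
            mu1[f] = np.sum((1 - p) * y) / np.sum(1 - p)
            mu2[f] = np.sum(p * y) / np.sum(p)
            sse += np.sum((1 - p) * (y - mu1[f]) ** 2 + p * (y - mu2[f]) ** 2)
            n += y.size
        var = sse / n                     # common within-sire error variance
    return {f: mu2[f] - mu1[f] for f in families}, var

rng = np.random.default_rng(3)
fams = {s: rng.normal(0.0, 1.0, 40) + 0.8 * rng.integers(0, 2, 40)
        for s in range(5)}
effects, var = em_halfsib(fams)
print(effects, var)   # rough estimates of the simulated gamete effect (0.8)
```

With only 40 progeny per family, the estimated effects scatter widely around the simulated value, which echoes the abstract's point about bias with small family sizes.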
Abstract:
Bodies of Ding kiln white porcelains and their imitations from the Guantai and Jiexiu kilns of the Chinese Song dynasty (960-1279 AD) were analysed for 40 trace elements by inductively coupled plasma mass spectrometry (ICP-MS). Numerous trace element compositions and ratios allow these visually similar products to be distinguished, and a Ding-style shard of uncertain origin is identified as a likely genuine Ding product. At the Jiexiu kiln, Ding-style products have trace element features distinct from those of inferior-quality blackwares intended for the lower end of the market. Based on the geochemical behaviour of these trace elements, we propose that geochemically distinctive raw materials were used for the higher-quality Ding-style products, and that these materials possibly also underwent purification by levigation prior to use. Capable of analysing over 40 elements with a typical long-term precision of < 2%, this high-precision ICP-MS method proves to be very powerful for grouping and characterising archaeological ceramics. Combined with geochemical interpretation, it can provide insights into the raw materials and techniques used by ancient potters.
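As an illustration of how such multi-element data supports grouping (a generic sketch, not the authors' statistical procedure; the concentrations are invented): standardise the element concentrations and project onto principal components, so that shards from different production groups separate in the component space.

```python
import numpy as np

# rows = shards, columns = trace element concentrations (ppm, assumed)
X = np.array([[120.0, 15.2, 3.1, 220.0],
              [118.0, 14.8, 3.0, 215.0],   # Ding-like group
              [ 95.0, 22.5, 5.9, 310.0],
              [ 97.0, 23.1, 6.2, 305.0]])  # imitation-like group

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each element
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S                             # principal component scores
print(scores[:, :2])                       # shards separate by group on PC1
```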
Bias, precision and heritability of self-reported and clinically measured height in Australian twins
Abstract:
Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than alternatives. However, their precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-reported height measurements are shown to be more variable than clinical measures, which has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated-measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated-measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures we also show that there was a genetic component to females systematically reporting their height incorrectly; this phenomenon appeared not to be present in males. The results from the measurement error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. Moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to an increase in effective sample size of 41%.
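A standard textbook view of why noisier self-report lowers heritability estimates, and why averaging repeats recovers them (attenuation formulas assumed here, not quoted from the paper):

```latex
h^2_{\text{obs}} = h^2 \, r_k,
\qquad
r_k = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E / k},
```

where $r_k$ is the reliability of the mean of $k$ self-reports, $\sigma^2_T$ is true-score variance and $\sigma^2_E$ is error variance; as $k$ grows, $r_k \to 1$ and the observed heritability approaches the clinical estimate, consistent with the 2-3 repeats reported above.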
Abstract:
Objective: Data collected from routine clinical care are typically sparse and unable to support the more complex pharmacokinetic (PK) models that may have been reported in previous rich-data studies. Informative priors may be a prerequisite for model development. The aim of this study was to estimate the population PK parameters of sirolimus using a fully Bayesian approach with informative priors. Methods: Informative priors, including the prior mean and the precision of the prior mean, were elicited from previously published studies using a meta-analytic technique. The precision of between-subject variability was determined by simulations from a Wishart distribution using MATLAB (version 6.5). Concentration-time data of sirolimus retrospectively collected from kidney transplant patients were analysed using WinBUGS (version 1.3). The candidate models were either one- or two-compartment with first-order absorption and first-order elimination. Model discrimination was based on computation of the posterior odds supporting the model. Results: A total of 315 concentration-time points were obtained from 25 patients. Most data were clustered at trough concentrations, with a range of 1.6 to 77 hours post-dose. Using informative priors, either a one- or two-compartment model could be used to describe the data. When a one-compartment model was applied, information was gained from the data for the value of apparent clearance (CL/F = 18.5 L/h) and apparent volume of distribution (V/F = 1406 L), but no information was gained about the absorption rate constant (ka). When a two-compartment model was fitted to the data, the data were informative about CL/F, apparent inter-compartmental clearance, and apparent volume of distribution of the peripheral compartment (13.2 L/h, 20.8 L/h, and 579 L, respectively). The posterior distributions of the volume of distribution of the central compartment and of ka were the same as the priors. The posterior odds for the two-compartment model were 8.1, indicating that the data supported the two-compartment model. Conclusion: The use of informative priors supported the choice of a more complex and informative model that would otherwise not have been supported by the sparse data.
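For reference, the one-compartment model with first-order absorption and elimination mentioned above predicts, after a single oral dose $D$ (standard formula, written with the study's apparent parameters $CL/F$ and $V/F$):

```latex
C(t) = \frac{D\,k_a}{(V/F)\,(k_a - k_e)} \left( e^{-k_e t} - e^{-k_a t} \right),
\qquad
k_e = \frac{CL/F}{V/F}.
```

With mostly trough samples, $C(t)$ is dominated by the $e^{-k_e t}$ term, which is consistent with the data informing $CL/F$ but leaving $k_a$ at its prior.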
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary-precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation of the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
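A minimal sketch of the underlying idea, Picard iteration with working precision grown across iterations (here via mpmath; the grid, iteration count and precision schedule are illustrative assumptions, not the paper's statically optimised policy):

```python
from mpmath import mp, mpf

def picard(f, y0, T, n_grid=64, n_iters=8, base_dps=15):
    ts = [mpf(T) * k / n_grid for k in range(n_grid + 1)]
    ys = [mpf(y0)] * (n_grid + 1)           # initial iterate: constant y0
    for it in range(n_iters):
        mp.dps = base_dps + 5 * it          # grow working precision
        new, acc = [mpf(y0)], mpf(0)
        for k in range(1, n_grid + 1):
            h = ts[k] - ts[k - 1]
            # trapezoidal approximation of the Picard integral
            acc += h * (f(ts[k - 1], ys[k - 1]) + f(ts[k], ys[k])) / 2
            new.append(mpf(y0) + acc)
        ys = new
    return ts, ys

# Example IVP: y' = y, y(0) = 1, exact solution exp(t).
ts, ys = picard(lambda t, y: y, 1, 1)
print(ys[-1])   # approaches e ~ 2.71828 (up to grid resolution)
```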
Abstract:
This thesis addresses the kineto-elastodynamic analysis of a four-bar mechanism running at high speed, where all links are assumed to be flexible. First, the mechanism, at static configurations, is treated as a structure. Two methods are used to model the system, namely the finite element method (FEM) and the dynamic stiffness method. The natural frequencies and mode shapes at different positions are calculated with both methods and compared. The FEM is used to model the mechanism running at high speed. The governing equations of motion are derived using Hamilton's principle. The equations obtained are a set of stiff ordinary differential equations with periodic coefficients. A model is developed whereby the FEM and the dynamic stiffness method are used conjointly to provide high-precision results with only one element per link. The principal concern of the mechanism designer is the behaviour of the mechanism at steady state. Few algorithms have been developed to deliver the steady-state solution without resorting to costly time-marching simulation. In this study two algorithms are developed to overcome the limitations of the existing algorithms, and their superiority is demonstrated. The notion of critical speeds is clarified and a distinction is drawn between "critical speeds", where stresses are at a local maximum, and "unstable bands", where the mechanism deflections grow without bound. Floquet theory is used to assess the stability of the system, and a simple method to locate the critical speeds is derived. It is shown that the critical speeds of the mechanism coincide with the local maxima of the eigenvalues of the transition matrix with respect to the rotational speed of the mechanism.
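A minimal sketch of the Floquet check referred to above: integrate a linearised periodic system x' = A(t)x over one period to build the transition (monodromy) matrix, then inspect its eigenvalue magnitudes. The A(t) used here is an illustrative damped Mathieu-type system, not the thesis's mechanism model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(A, period, n=2):
    M = np.zeros((n, n))
    for j in range(n):                      # propagate each basis vector
        e = np.zeros(n); e[j] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, period), e,
                        rtol=1e-10, atol=1e-12)
        M[:, j] = sol.y[:, -1]              # state after one full period
    return M

A = lambda t: np.array([[0.0, 1.0],
                        [-(1.0 + 0.3 * np.cos(t)), -0.05]])
M = monodromy(A, 2 * np.pi)
mults = np.linalg.eigvals(M)
print(np.abs(mults))   # any multiplier with |lambda| > 1 signals an unstable band
```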
Abstract:
Analysis of covariance (ANCOVA) is a useful method of ‘error control’, i.e., it can reduce the size of the error variance in an experimental or observational study. An initial measure obtained before the experiment, which is closely related to the final measurement, is used to adjust the final measurements, thus reducing the error variance. When this method is used to reduce the error term, the X variable must not itself be affected by the experimental treatments, because part of the treatment effect would then also be removed. Hence, the method can only be safely used when X is measured before the experiment. A further limitation of the analysis is that only the linear effect of X on Y is removed, and it is possible that Y is a curvilinear function of X. A question often raised is whether ANCOVA should be used routinely in experiments rather than a randomized blocks or split-plot design, which may also reduce the error variance. The answer depends on the relative precision of the different methods in each scenario. Considerable judgment is often required to select the best experimental design, and statistical help should be sought at an early stage of an investigation.
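A minimal ANCOVA sketch using statsmodels, adjusting the response for a covariate x measured before treatment. The variable names and simulated data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "treatment": np.repeat(["A", "B", "C"], n),
    "x": rng.normal(50, 10, 3 * n),          # baseline covariate
})
effect = df["treatment"].map({"A": 0.0, "B": 2.0, "C": 4.0})
df["y"] = 5 + 0.8 * df["x"] + effect + rng.normal(0, 3, 3 * n)

# The covariate term removes the linear effect of x, shrinking the error term
fit = smf.ols("y ~ C(treatment) + x", data=df).fit()
print(fit.summary())
```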
Abstract:
We report statistical time-series analysis tools providing improvements in the rapid, precise extraction of discrete-state dynamics from time traces of experimental observations of molecular machines. By building physical knowledge and statistical innovations into the analysis tools, we provide techniques for estimating discrete state transitions buried in highly correlated molecular noise. We demonstrate the effectiveness of our approach on simulated and real examples of step-like rotation of the bacterial flagellar motor and the F1-ATPase enzyme. We show that our method can clearly identify molecular steps, periodicities, and cascaded processes that are too weak for existing algorithms to detect, and can do so much faster than existing algorithms. Our techniques represent a step toward automated analysis of high-sample-rate molecular-machine dynamics. Modular, open-source software that implements these techniques is provided.
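For orientation, a baseline sketch of generic step detection in a noisy trace: a sliding mean-difference statistic that peaks at abrupt level changes. This is a simple illustration, not the authors' statistical method, and the simulated trace is invented.

```python
import numpy as np

def step_statistic(trace, w):
    """|mean of the next w points - mean of the previous w points|."""
    kernel = np.concatenate([np.full(w, -1.0 / w), np.full(w, 1.0 / w)])
    return np.abs(np.convolve(trace, kernel, mode="same"))

rng = np.random.default_rng(1)
trace = np.concatenate([np.full(200, 0.0), np.full(200, 1.0),
                        np.full(200, 2.0)]) + rng.normal(0, 0.4, 600)

w = 25
stat = step_statistic(trace, w)
stat[:2 * w] = 0                            # discard convolution edge artifacts
stat[-2 * w:] = 0
steps = np.where(stat > 0.7)[0]             # threshold picks candidate steps
print(steps.min(), steps.max())             # clusters near the true steps (200, 400)
```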
Abstract:
Although considerable effort has been invested in the measurement of banking efficiency using Data Envelopment Analysis (DEA), hardly any empirical research has focused on comparisons of banks in Gulf States countries. This paper employs data on the Gulf States banking sector for the period 2000-2002 to develop efficiency scores and rankings for both Islamic and conventional banks. We then investigate productivity change using the Malmquist index and decompose it into technical change and efficiency change. Further, hypothesis testing and statistical precision in the context of nonparametric efficiency and productivity measurement are employed. Specifically, cross-country analyses of efficiency, and comparisons of efficiency between Islamic and conventional banks, are investigated using the Mann-Whitney test.
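A minimal sketch of the nonparametric comparison mentioned above, a Mann-Whitney U test on efficiency scores of two bank groups (the scores below are invented placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

islamic = np.array([0.91, 0.84, 0.78, 0.95, 0.88, 0.81])
conventional = np.array([0.76, 0.82, 0.70, 0.85, 0.79, 0.74])

stat, p = mannwhitneyu(islamic, conventional, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # small p suggests the groups differ
```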
Abstract:
The best results in applying computer systems to automatic translation are obtained when texts pertain to specific thematic areas, with well-defined structures and a concise, limited lexicon. In this article we present a plan of systematic work for the analysis and generation of language applied to the domain of pharmaceutical leaflets, a type of document characterized by rigid formatting and precise use of lexicon. We propose a solution based on the use of an interlingua as the pivot between source and target languages; in this application we consider Spanish and Arabic.
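A toy sketch of the interlingua-pivot architecture: source phrases map to language-neutral concept codes, which then map out to the target language. The entries are invented placeholders, not the article's actual lexicon.

```python
to_interlingua = {"agítese antes de usar": "SHAKE_BEFORE_USE"}
from_interlingua = {"SHAKE_BEFORE_USE": "يرج قبل الاستعمال"}

def translate(phrase: str) -> str:
    concept = to_interlingua[phrase.lower()]    # source -> interlingua
    return from_interlingua[concept]            # interlingua -> target

print(translate("Agítese antes de usar"))       # Spanish leaflet phrase -> Arabic
```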
Abstract:
2000 Mathematics Subject Classification: 62H30
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise, as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified, expressed in a probabilistic variant of temporal logic, and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments.
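A minimal sketch of the kind of query described above: minimum expected cumulative cost to reach a goal state of a small MDP, computed by value iteration. The states, actions and costs are invented placeholders, not the paper's cloud-usage model or its verification engine.

```python
# transitions[state][action] = list of (probability, next_state, cost)
transitions = {
    0: {"burst": [(0.7, 1, 4.0), (0.3, 0, 1.0)],
        "idle":  [(1.0, 0, 0.5)]},
    1: {"finish": [(0.9, 2, 2.0), (0.1, 0, 3.0)]},
    2: {},                                  # goal: all work billed and done
}

V = {s: 0.0 for s in transitions}
for _ in range(200):                        # iterate to numerical convergence
    for s, acts in transitions.items():
        if not acts:
            continue                        # absorbing goal state costs nothing
        V[s] = min(sum(p * (c + V[t]) for p, t, c in outcomes)
                   for outcomes in acts.values())
print(f"minimum expected cost from state 0: {V[0]:.3f}")
```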
Abstract:
We propose a long-range, high-precision optical time domain reflectometry (OTDR) scheme based on an all-fiber supercontinuum source. The source simply consists of a CW pump laser with moderate power and a section of fiber whose zero-dispersion wavelength lies near the laser's central wavelength. Spectrum and time-domain properties of the source are investigated, showing that the source has great capability in nonlinear optics, such as correlation OTDR, due to its ultra-wide-band chaotic behavior, and mm-scale spatial resolution is demonstrated. We then analyse the key factors limiting the operational range of such an OTDR, e.g., integrated Rayleigh backscattering and fiber loss, which degrade the optical signal-to-noise ratio at the receiver side, and discuss guidelines for counteracting such signal fading. Finally, we experimentally demonstrate a correlation OTDR with 100 km sensing range and 8.2 cm spatial resolution (1.2 million resolved points), as a verification of the theoretical analysis.
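A minimal sketch of the correlation-OTDR principle: cross-correlate a noise-like probe with the delayed, attenuated return; the lag of the correlation peak gives the reflection position. All numbers here are illustrative assumptions, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1e9                                    # assumed 1 GS/s sampling rate
probe = rng.normal(size=5000)               # broadband, chaotic-like probe

delay = 1800                                # true round-trip delay (samples)
echo = 0.05 * np.roll(probe, delay) + rng.normal(0, 0.2, probe.size)

corr = np.correlate(echo, probe, mode="full")
lag = int(corr.argmax()) - (probe.size - 1) # lag of the correlation peak

c, n_g = 3e8, 1.468                         # vacuum light speed, group index
distance = (lag / fs) * c / (2 * n_g)       # two-way delay -> one-way distance
print(f"reflection located at {distance:.1f} m (lag {lag} samples)")
```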
Abstract:
Parameter design is an experimental design and analysis methodology for developing robust processes and products, where robustness implies insensitivity to noise disturbances. Subtle experimental realities, such as the joint effect of process knowledge and analysis methodology, may affect the effectiveness of parameter design in precision engineering, where the objective is to detect minute variation in product and process performance. In this thesis, approaches to statistical forced-noise design and analysis methodologies were investigated with respect to detecting performance variations. Given a low degree of process knowledge, Taguchi's signal-to-noise ratio analysis was found to be more suitable for detecting minute performance variations than the classical approach based on polynomial decomposition. Comparison of inner-array noise (IAN) and outer-array noise (OAN) structuring approaches showed that OAN is a more efficient design for precision engineering.
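For reference, the nominal-the-best form of Taguchi's signal-to-noise ratio, the statistic underlying the analysis mentioned above (standard definition, assumed rather than quoted from the thesis):

```latex
SN = 10 \log_{10}\!\left(\frac{\bar{y}^{\,2}}{s^2}\right),
\qquad
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i,
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 .
```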