985 results for Error estimate.
Abstract:
Increasing the mutation rate, μ, of viruses above a threshold, μ_c, has been predicted to trigger a catastrophic loss of viral genetic information and is being explored as a novel intervention strategy. Here, we examine the dynamics of this transition using stochastic simulations mimicking within-host HIV-1 evolution. We find a scaling law governing the characteristic time of the transition: τ ≈ 0.6/(μ − μ_c). The law is robust to variations in underlying evolutionary forces and presents guidelines for treatment of HIV-1 infection with mutagens. We estimate that many years of treatment would be required before HIV-1 can suffer an error catastrophe.
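As a quick illustration of how the scaling law constrains treatment times, here is a minimal Python sketch; the mutation-rate values are hypothetical placeholders, not figures from the study:

```python
# Illustrative sketch of the scaling law tau ~ 0.6 / (mu - mu_c).
# The mutation rates below are hypothetical, not values from the study.

def catastrophe_time(mu: float, mu_c: float) -> float:
    """Characteristic time (in model time units) for the error catastrophe."""
    if mu <= mu_c:
        raise ValueError("mu must exceed the threshold mu_c for the transition")
    return 0.6 / (mu - mu_c)

# A mutagen raising mu only slightly above mu_c gives a very long tau,
# consistent with the conclusion that many years of treatment are needed.
for mu in (0.31, 0.35, 0.6):
    print(mu, catastrophe_time(mu, mu_c=0.3))
```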
Abstract:
The problem motivating this investigation is that of pure axisymmetric torsion of an elastic shell of revolution. The analysis is carried out within the framework of the three-dimensional linear theory of elastic equilibrium for homogeneous, isotropic solids. The objective is the rigorous estimation of errors involved in the use of approximations based on thin shell theory.
The underlying boundary value problem is one of Neumann type for a second-order elliptic operator. A systematic procedure for constructing pointwise estimates for the solution and its first derivatives is given for a general class of second-order elliptic boundary-value problems which includes the torsion problem as a special case.
The method used here rests on the construction of “energy inequalities” and on the subsequent deduction of pointwise estimates from the energy inequalities. This method removes certain drawbacks characteristic of pointwise estimates derived in some investigations of related areas.
Special interest is directed towards thin shells of constant thickness. The method enables us to estimate the error involved in a stress analysis in which the exact solution is replaced by an approximate one, and thus provides us with a means of assessing the quality of approximate solutions for axisymmetric torsion of thin shells.
Finally, the results of the present study are applied to the stress analysis of a circular cylindrical shell, and the quality of the stress estimates derived here and in a previous related publication is discussed.
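Schematically, the energy-inequality route described above proceeds in two steps; the following display is a generic illustration under assumed norms and constants, not the paper's actual estimates:

```latex
% Generic shape of the argument (illustrative only): an energy inequality
% bounds a quadratic functional of the error e = u - u_approx, ...
\[
  E(e) = \int_{\Omega} \lvert \nabla e \rvert^{2}\, dV
       \;\le\; C_{1}\,\lVert \text{data residual} \rVert^{2},
\]
% ...and an embedding-type estimate converts the energy bound into a
% pointwise bound on e and its first derivatives:
\[
  \lvert e(x) \rvert \;\le\; C_{2}(x)\, E(e)^{1/2}, \qquad x \in \Omega.
\]
```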
Abstract:
We have formulated a model for analyzing the measurement error in marine survey abundance estimates by using data from parallel surveys (trawl haul or acoustic measurement). The measurement error is defined as the component of the variability that cannot be explained by covariates such as temperature, depth, bottom type, etc. The method presented is general, but we concentrate on bottom trawl catches of cod (Gadus morhua). Catches of cod from 10 parallel trawling experiments in the Barents Sea with a total of 130 paired hauls were used to estimate the measurement error in trawl hauls. Based on the experimental data, the measurement error is fairly constant in size on the logarithmic scale and is independent of location, time, and fish density. Compared with the total variability of the winter and autumn surveys in the Barents Sea, the measurement error is small (approximately 2–5%, on the log scale, in terms of variance of catch per towed distance). Thus, the cod catch rate is a fairly precise measure of fish density at a given site at a given time.
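A minimal sketch of how paired hauls can yield a measurement-error variance on the log scale (the function and data are hypothetical, not the authors' code):

```python
import numpy as np

def log_scale_measurement_error(catch_a, catch_b):
    """Estimate measurement-error variance from paired catches.

    Assumes the two parallel hauls share the same true density, so
    log(catch_a) - log(catch_b) is the difference of two independent
    measurement errors and has variance 2 * sigma^2.
    """
    d = np.log(np.asarray(catch_a, float)) - np.log(np.asarray(catch_b, float))
    return np.var(d, ddof=1) / 2.0  # sigma^2 on the log scale

# Hypothetical paired catches (per towed distance) from parallel hauls:
print(log_scale_measurement_error([120, 80, 45, 200], [110, 95, 40, 230]))
```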
Abstract:
Pritchard, L., Corne, D., Kell, D.B., Rowland, J. & Winson, M. (2005) A general model of error-prone PCR. Journal of Theoretical Biology 234, 497-509.
Abstract:
An analysis is carried out, using the prolate spheroidal wave functions, of certain regularized iterative and noniterative methods previously proposed for the achievement of object restoration (or, equivalently, spectral extrapolation) from noisy image data. The ill-posedness inherent in the problem is treated by means of a regularization parameter, and the analysis shows explicitly how the deleterious effects of the noise are then contained. The error in the object estimate is also assessed, and it is shown that the optimal choice for the regularization parameter depends on the signal-to-noise ratio. Numerical examples are used to demonstrate the performance of both unregularized and regularized procedures and also to show how, in the unregularized case, artefacts can be generated from pure noise. Finally, the relative error in the estimate is calculated as a function of the degree of superresolution demanded for reconstruction problems characterized by low space–bandwidth products.
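The role of the regularization parameter can be illustrated with a generic Tikhonov-regularized restoration; this sketch uses a toy ill-conditioned matrix rather than the paper's prolate spheroidal formulation:

```python
import numpy as np

def tikhonov_restore(A, y, alpha):
    """Regularized estimate x = argmin ||A x - y||^2 + alpha ||x||^2.

    Larger alpha contains noise amplification at the cost of resolution;
    the best alpha depends on the signal-to-noise ratio, as in the abstract.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Toy ill-conditioned forward model and noisy data (hypothetical).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 10, increasing=True)
x_true = rng.normal(size=10)
y = A @ x_true + 0.01 * rng.normal(size=20)
for alpha in (1e-12, 1e-6, 1e-2):  # near-unregularized -> noise-dominated
    print(alpha, np.linalg.norm(tikhonov_restore(A, y, alpha) - x_true))
```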
Abstract:
To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E3) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained using the following classifications: myopia ≤−0.75 diopters (D), high myopia ≤−6 D, hyperopia ≥1 D and astigmatism ≥1 D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies, with median age ranging from 44 to 81 and minimal ethnic variation (98% European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6% [95% confidence interval (CI) 30.4–30.9], high myopia 2.7% (95% CI 2.69–2.73), hyperopia 25.2% (95% CI 25.0–25.4) and astigmatism 23.9% (95% CI 23.7–24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2% (CI 41.8–52.5) in 25- to 29-year-olds]. Refractive error affects just over half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
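A minimal sketch of a random-effects pooling step of the kind used here, with DerSimonian-Laird weights (the study estimates and variances are hypothetical):

```python
import numpy as np

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate and its variance."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), 1.0 / np.sum(w_re)

# Hypothetical myopia prevalences (proportions) and variances from three studies:
print(random_effects_pool([0.28, 0.33, 0.31], [0.0004, 0.0003, 0.0005]))
```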
Abstract:
Diagnostic test sensitivity and specificity are probabilistic estimates with far-reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence, using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation, coupled with re-sampling techniques, demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
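The core of a Hui-Walter-type latent class model is the joint probability of test outcomes given prevalence, sensitivities and specificities; a minimal sketch under conditional independence (all parameter values hypothetical):

```python
import numpy as np

def cell_probs(prev, se1, sp1, se2, sp2):
    """Joint probabilities of (test1, test2) outcomes for one cohort,
    assuming the tests are conditionally independent given true status."""
    p = np.zeros((2, 2))
    for t1 in (0, 1):
        for t2 in (0, 1):
            p_pos = prev * (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
            p_neg = (1 - prev) * ((1 - sp1) if t1 else sp1) * ((1 - sp2) if t2 else sp2)
            p[t1, t2] = p_pos + p_neg
    return p

# Hypothetical herd: 15% true prevalence, two imperfect tests.
print(cell_probs(prev=0.15, se1=0.80, sp1=0.99, se2=0.90, sp2=0.95))
```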
Abstract:
PURPOSE: To determine the heritability of refractive error and the familial aggregation of myopia in an older population. METHODS: Seven hundred fifty-nine siblings (mean age, 73.4 years) in 241 families were recruited from the Salisbury Eye Evaluation (SEE) Study in eastern Maryland. Refractive error was determined by noncycloplegic subjective refraction (if presenting distance visual acuity was ≤20/40) or lensometry (if best corrected visual acuity was >20/40 with spectacles). Participants were considered plano (refractive error of zero) if uncorrected visual acuity was >20/40. Preoperative refraction from medical records was used for pseudophakic subjects. Heritability of refractive error was calculated with multivariate linear regression and was estimated as twice the residual between-sibling correlation after adjusting for age, gender, and race. Logistic regression models were used to estimate the odds ratio (OR) of myopia given a myopic sibling, relative to having a nonmyopic sibling. RESULTS: The estimated heritability of refractive error was 61% (95% confidence interval [CI]: 34%-88%) in this population. The age-, race-, and sex-adjusted ORs of myopia were 2.65 (95% CI: 1.67-4.19), 2.25 (95% CI: 1.31-3.87), 3.00 (95% CI: 1.56-5.79), and 2.98 (95% CI: 1.51-5.87) for myopia thresholds of -0.50, -1.00, -1.50, and -2.00 D, respectively. Neither race nor gender was significantly associated with an increased risk of myopia. CONCLUSIONS: Refractive error and myopia are highly heritable in this elderly population.
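The heritability estimate described above reduces to a simple computation once covariate-adjusted residuals are in hand; a minimal sketch, simplified to exchangeable sibling pairs with hypothetical residuals:

```python
import numpy as np

def sibling_heritability(resid_sib1, resid_sib2):
    """Heritability as twice the between-sibling correlation of regression
    residuals (already adjusted for age, gender, and race)."""
    r = np.corrcoef(resid_sib1, resid_sib2)[0, 1]
    return 2.0 * r

# Hypothetical residual refractive errors (D) for five sibling pairs:
h2 = sibling_heritability([-1.2, 0.4, 2.0, -0.3, 0.8],
                          [0.5, -0.2, 1.0, -1.1, 0.3])
print(h2)
```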
Abstract:
This paper presents a new rate-control algorithm for live video streaming over wireless IP networks, which is based on selective frame discarding. In the proposed mechanism excess 'P' frames are dropped from the output queue at the sender using a congestion estimate based on packet loss statistics obtained from RTCP feedback and from the Data Link (DL) layer. The performance of the algorithm is evaluated through computer simulation. This paper also presents a characterisation of packet losses owing to transmission errors and congestion, which can help in choosing appropriate strategies to maximise the video quality experienced by the end user. Copyright © 2007 Inderscience Enterprises Ltd.
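A schematic sketch of the selective-discard idea; the threshold, weighting and names are assumptions, since the paper's actual algorithm combines RTCP and Data Link feedback in ways not detailed in the abstract:

```python
from collections import deque

DROP_THRESHOLD = 0.05  # hypothetical congestion level above which P frames are shed

def congestion_estimate(rtcp_loss_fraction, dl_loss_fraction):
    """Blend RTCP-reported and link-layer loss into one congestion signal.
    The weighting is illustrative: wireless (DL) losses are not congestion,
    so they are subtracted rather than counted in full."""
    return max(0.0, rtcp_loss_fraction - dl_loss_fraction)

def enqueue_frame(queue: deque, frame, congestion: float):
    """Drop excess P frames under congestion; always keep I frames."""
    if congestion > DROP_THRESHOLD and frame["type"] == "P":
        return  # frame discarded at the sender's output queue
    queue.append(frame)

q = deque()
for f in [{"type": "I"}, {"type": "P"}, {"type": "P"}]:
    enqueue_frame(q, f, congestion=congestion_estimate(0.12, 0.04))
print([f["type"] for f in q])  # P frames shed, I frame kept
```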
Abstract:
The purpose of the present study was to determine which augmented sensory modality would best develop subjective error-detection capabilities of learners performing a spatial-temporal task when using a touch screen monitor. Participants were required to learn a 5-digit key-pressing task in a goal time of 2550 ms over 100 acquisition trials on a touch screen. Participants were randomized into 1 of 4 groups: 1) visual feedback (colour change of the button when selected), 2) auditory feedback (click sound when the button was selected), 3) visual-auditory feedback (both colour change and click sound when the button was selected), and 4) no feedback (no colour change or click sound when the button was selected). Following each trial, participants were required to provide a subjective estimate of their performance time in relation to the actual time it took them to complete the 5-digit sequence. A no-KR retention test was conducted approximately 24 hours after the last completed acquisition trial. Results showed that practicing a timing task on a touch screen augmented with both visual and auditory information may have differentially impacted motor skill acquisition, such that removal of one or both sources of augmented feedback did not result in a severe detriment to the timing performance or error-detection capabilities of the learner. The present study reflects the importance of multimodal augmented feedback conditions for maximizing cognitive abilities when developing a stronger motor memory for subjective error-detection and correction capabilities.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition, X, on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random variable.
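A minimal simulation sketch contrasting the two error models for a linear mean response (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 10_000, 0.5

# Berkson model: the observed Z is set (e.g. a nominal dose), and the
# true X scatters around it: X = Z + eta.
z = rng.uniform(0, 10, n)
x_berkson = z + rng.normal(0, sigma, n)

# Classical model: the true X is fixed, and the measurement W scatters
# around it: W = X + error.
x_true = rng.uniform(0, 10, n)
w = x_true + rng.normal(0, sigma, n)

# For a linear mean response, regressing Y on Z is unbiased under Berkson
# error, while regressing Y on W under classical error attenuates the slope.
y_berkson = 2.0 * x_berkson + rng.normal(0, 1, n)
y_classical = 2.0 * x_true + rng.normal(0, 1, n)
print(np.polyfit(z, y_berkson, 1)[0])    # ~2.0 (unbiased)
print(np.polyfit(w, y_classical, 1)[0])  # < 2.0 (attenuated)
```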
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variable. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly and considerably worse than disattenuated regression.
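The core disattenuation step is the classical correction for attenuation; a minimal sketch for an observed correlation, with hypothetical reliabilities:

```python
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement error using the
    classical disattenuation formula r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Hypothetical observed correlation and reliability estimates
# (e.g. Cronbach's alpha) for the two scales:
print(disattenuate(r_xy=0.42, rel_x=0.80, rel_y=0.70))  # ~0.56
```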
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate the CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K² / (1 − B_K²), with B_K determined from the coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded configurations K. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm⁻¹) is achieved in a model space M of 1.4 × 10⁹ CSFs (1.1 × 10¹² determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10¹² CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
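A minimal sketch of the selection step based on Brown's energy formula; the arrays and threshold below are hypothetical:

```python
import numpy as np

def brown_selection(E, H_KK, B_K, threshold):
    """Select disconnected configurations by Brown's energy formula:
    dE_K = (E - H_KK) * B_K^2 / (1 - B_K^2). Configurations whose estimated
    energy contribution exceeds the threshold are kept; the summed
    contribution of the discarded ones approximates the truncation error."""
    dE = (E - H_KK) * B_K**2 / (1.0 - B_K**2)
    keep = np.abs(dE) >= threshold
    dE_dis = dE[~keep].sum()  # estimated truncation error from discarded configs
    return keep, dE_dis

# Hypothetical reference energy, diagonal elements and estimated coefficients:
E = -128.9
H_KK = np.array([-126.1, -125.4, -127.0, -124.8])
B_K = np.array([0.02, 0.001, 0.015, 0.0005])
keep, dE_dis = brown_selection(E, H_KK, B_K, threshold=1e-4)
print(keep, dE_dis)
```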
Abstract:
Although the potential importance of scattering of long-wave radiation by clouds has been recognised, most studies have concentrated on the impact of high clouds, and few estimates of the global impact of scattering have been presented. This study shows that scattering in low clouds has a significant impact on outgoing long-wave radiation (OLR) in regions of marine stratocumulus (−3.5 W m⁻² for overcast conditions) where the column water vapour is relatively low. This corresponds to an enhancement of the greenhouse effect of such clouds by 10%. The near-global impact of scattering on OLR is estimated to be −3.0 W m⁻², with low clouds contributing −0.9 W m⁻², mid-level clouds −0.7 W m⁻² and high clouds −1.4 W m⁻². Although this effect appears small compared with the global mean OLR of 240 W m⁻², it indicates that neglect of scattering will lead to an error in cloud long-wave forcing of about 10% and an error in net cloud forcing of about 20%.