965 results for Error in substance


Relevance:

90.00%

Publisher:

Abstract:

Background: Visual analog scales (VAS) are used to assess readiness-to-change constructs, which are often considered critical for change. Objective: We studied whether 3 constructs (readiness to change, importance of changing, and confidence in ability to change) predict risk status 6 months later in 20-year-old men with either or both of two behaviors: risky drinking and smoking. Methods: 577 participants in a brief intervention randomized trial were assessed at baseline and 6 months later on alcohol and tobacco consumption and with three 1-10 VAS (readiness, importance, confidence) for each behavior. For each behavior, we used one regression model for each construct. Models controlled for receipt of a brief intervention and used the lowest level (1-4) of each construct as the reference group (vs medium (5-7) and high (8-10) levels). Results: Among the 475 risky drinkers, mean (SD) readiness, importance, and confidence to change drinking were 4.0 (3.1), 2.8 (2.2), and 7.2 (3.0). Readiness was not associated with being alcohol-risk free 6 months later (OR 1.3 [0.7; 2.2] and 1.4 [0.8; 2.6] for medium and high readiness). High importance and high confidence were associated with being risk free (OR 0.9 [0.5; 1.8] and 2.9 [1.2; 7.5] for medium and high importance; 2.1 [1.0; 4.8] and 2.8 [1.5; 5.6] for medium and high confidence). Among the 320 smokers, mean readiness, importance, and confidence to change smoking were 4.6 (2.6), 5.3 (2.6), and 5.9 (2.6). Neither readiness nor importance was associated with being smoking free (OR 2.1 [0.9; 4.7] and 2.1 [0.8; 5.8] for medium and high readiness; 1.4 [0.6; 3.4] and 2.1 [0.8; 5.4] for medium and high importance). High confidence was associated with being smoking free (OR 2.2 [0.8; 6.6] and 3.4 [1.2; 9.8] for medium and high confidence). Conclusions: For drinking and smoking, high confidence in ability to change was associated, with similar magnitude, with a favorable outcome. This points to the value of confidence as an important predictor of successful change.
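Odds ratios with 95% confidence intervals of the kind reported above can be computed from a 2×2 outcome table; a minimal sketch with illustrative counts, not the study's actual model (which was a regression controlling for intervention receipt):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed & outcome, b = exposed & no outcome,
    c = unexposed & outcome, d = unexposed & no outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: high- vs low-confidence drinkers who were
# alcohol-risk free at follow-up.
or_, (lo, hi) = odds_ratio_ci(a=60, b=90, c=20, d=85)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}; {hi:.2f}]")
```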

Abstract:

PURPOSE: To use measurement by cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely influenced by windy conditions, that the bias would be diminished on steeper climbs, and that windy conditions would induce larger bias in Pest. METHODS: Sixteen well-trained cyclists performed 15 uphill-cycling trials (range: length 1.3-6.3 km, slope 4.4-10.7%) in random order. Trials included different riding positions in a group (lead or follow) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with a methodology used by journalists reporting on the Tour de France. RESULTS: Overall, the difference between Pmes and Pest was -0.95% (95% CI: -10.4%, +8.5%) for all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was considered trivial. CONCLUSIONS: Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most confounding factor. The mean estimated values are close to the power-output values measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be higher. This observation calls into question the validity of releasing individual values without reporting the range of random errors.
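Models of the kind used for Pest typically sum rolling, gravitational, and aerodynamic resistance. A minimal sketch under assumed coefficients (the rolling resistance Crr, drag area CdA, air density, and drivetrain efficiency below are illustrative defaults, not values from the study):

```python
import math

def estimate_climbing_power(mass_kg, speed_ms, slope, wind_ms=0.0,
                            crr=0.004, cda=0.35, rho=1.2, eta=0.975):
    """Estimate cycling power (W) on a climb.
    slope is the gradient as a fraction (0.08 for 8%);
    wind_ms > 0 means a headwind. Coefficients are illustrative."""
    g = 9.81
    theta = math.atan(slope)
    p_roll = crr * mass_kg * g * math.cos(theta) * speed_ms   # rolling
    p_grav = mass_kg * g * math.sin(theta) * speed_ms         # gravity
    p_aero = 0.5 * rho * cda * (speed_ms + wind_ms) ** 2 * speed_ms  # drag
    return (p_roll + p_grav + p_aero) / eta   # divide by drivetrain eff.

# 75 kg rider+bike at 5.5 m/s up an 8% grade, no wind
print(round(estimate_climbing_power(75, 5.5, 0.08)))
```

Note how the aerodynamic term depends on the unmeasured wind component, which is exactly why the paper finds drag to be the dominant confounder for outside estimates.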

Abstract:

Electronic canopy characterization is an important issue in tree crop management. Ultrasonic and optical sensors are the most commonly used for this purpose. The objective of this work was to assess the performance of an ultrasonic sensor under laboratory and field conditions in order to provide reliable estimations of distance to apple tree canopies. To this purpose, a methodology was designed to analyze sensor performance in relation to foliage ranging and to interference between adjacent sensors working simultaneously. Results show that the average error in distance measurement using the ultrasonic sensor under laboratory conditions is ±0.53 cm. However, the increased variability of field conditions reduces the accuracy of this kind of sensor when estimating distances to canopies; the average error in such situations is ±5.11 cm. When analyzing interference between adjacent sensors 30 cm apart, the average error is ±17.46 cm; when sensors are separated by 60 cm, the average error is ±9.29 cm. The ultrasonic sensor tested has been proven suitable for estimating distances to the canopy in field conditions when sensors are 60 cm apart or more and could, therefore, be used in a system to estimate structural canopy parameters in precision horticulture.
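Ultrasonic ranging of this kind converts the echo's round-trip time of flight into distance via the speed of sound; a minimal sketch (the linear temperature correction is a standard approximation, not a parameter reported in the study):

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (C)."""
    return 331.3 + 0.606 * temp_c

def tof_to_distance(tof_s, temp_c=20.0):
    """Distance (m) to a target from a round-trip echo time tof_s (s).
    Divide by 2 because the pulse travels out and back."""
    return speed_of_sound(temp_c) * tof_s / 2.0

# An echo returning after 8.75 ms at 20 C corresponds to roughly 1.5 m
print(round(tof_to_distance(8.75e-3), 3))
```

Temperature drift in the speed of sound is one reason field accuracy degrades relative to the controlled laboratory case.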

Abstract:

Next-generation sequencing (NGS) technologies have become the standard for data generation in population genomics studies such as the 1000 Genomes Project (1000G). However, these techniques are known to be problematic when applied to highly polymorphic genomic regions, such as the human leukocyte antigen (HLA) genes. Because accurate genotype calls and allele frequency estimations are crucial to population genomics analyses, it is important to assess the reliability of NGS data. Here, we evaluate the reliability of genotype calls and allele frequency estimates of the single-nucleotide polymorphisms (SNPs) reported by 1000G (phase I) at five HLA genes (HLA-A, -B, -C, -DRB1, and -DQB1). We take advantage of the availability of HLA Sanger sequencing for 930 of the 1092 1000G samples and use it as a gold standard to benchmark the 1000G data. We document that 18.6% of SNP genotype calls in HLA genes are incorrect and that allele frequencies are estimated with an error greater than ±0.1 at approximately 25% of the SNPs in HLA genes. We found a bias toward overestimation of the reference allele frequency in the 1000G data, indicating that mapping bias is an important cause of error in frequency estimation in this dataset. We provide a list of sites with poor allele frequency estimates and discuss the outcomes of including those sites in different kinds of analyses. Because the HLA region is the most polymorphic in the human genome, our results provide insights into the challenges of using NGS data at other genomic regions of high diversity.
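Benchmarking of this sort reduces to comparing genotype calls and derived allele frequencies against the gold standard. A minimal sketch with hypothetical genotypes (0/1/2 = count of the alternate allele at a biallelic SNP in a diploid sample):

```python
def genotype_error_rate(calls, truth):
    """Fraction of genotype calls that disagree with the gold standard."""
    assert len(calls) == len(truth)
    return sum(c != t for c, t in zip(calls, truth)) / len(calls)

def alt_allele_freq(genotypes):
    """Alternate allele frequency from diploid genotypes coded 0/1/2."""
    return sum(genotypes) / (2 * len(genotypes))

# Hypothetical NGS calls vs Sanger gold standard at one SNP
ngs    = [0, 1, 1, 2, 0, 1, 0, 2, 1, 0]
sanger = [0, 1, 2, 2, 0, 1, 1, 2, 1, 0]
print(genotype_error_rate(ngs, sanger))   # fraction of wrong calls
print(alt_allele_freq(ngs) - alt_allele_freq(sanger))  # frequency error
```

A systematic tendency of the frequency error to be negative for the alternate allele (positive for the reference allele) is the signature of the mapping bias reported in the abstract.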

Abstract:

Analytical curves are normally obtained from discrete data by least squares regression. The least squares regression of data involving significant error in both x and y values should not be implemented by ordinary least squares (OLS). In this work, the use of orthogonal distance regression (ODR) is discussed as an alternative approach in order to take into account the error in the x variable. Four examples are presented to illustrate deviation between the results from both regression methods. The examples studied show that, in some situations, ODR coefficients must substitute for those of OLS, and, in other situations, the difference is not significant.
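Orthogonal distance regression minimizes perpendicular, rather than vertical, distances to the fitted line. A minimal sketch comparing OLS with the closed-form orthogonal fit (total least squares with equal x and y error variances, sometimes called Deming regression); the calibration data are illustrative, not from the paper:

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least squares slope (error assumed only in y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

def orthogonal_slope(x, y):
    """Orthogonal (total least squares) slope: minimizes perpendicular
    distances, weighting x and y errors equally."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    sxx, syy, sxy = xc @ xc, yc @ yc, xc @ yc
    return (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# Illustrative calibration data with scatter in both variables
x = [0.1, 1.1, 1.9, 3.2, 3.9, 5.1]
y = [0.3, 0.9, 2.2, 2.8, 4.2, 4.9]
print(ols_slope(x, y), orthogonal_slope(x, y))
```

On error-free data the two slopes coincide; they diverge as the scatter in x grows, which is the situation in which the paper argues ODR coefficients must replace those of OLS. (For weighted ODR with per-point uncertainties, `scipy.odr` provides a full implementation.)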

Abstract:

Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An artificial neural network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN produced a classification of digital scores comparable to the optical scores, correctly classifying the majority of the follicles and reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, sensitivity increased to 90% and specificity reached 92%. These results demonstrate that digital image analysis combined with an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the error in diagnosis and classification to be measured, making comparisons between databases feasible.
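Sensitivity and specificity figures of the kind reported above come straight from a confusion matrix. A minimal sketch with hypothetical follicle counts (chosen only to mirror the reported percentages, not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: depleted follicles correctly flagged (TP),
# missed (FN), normal follicles correctly passed (TN), falsely flagged (FP)
sens, spec = sensitivity_specificity(tp=89, fn=11, tn=96, fp=4)
print(sens, spec)
```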

Abstract:

This work deals with a hybrid PID + fuzzy logic controller applied to control the motions of a machine tool biaxial table. The non-linear model includes backlash and axis elasticity. Two PID controllers perform the primary table control. A third, PID + fuzzy controller has a cross-coupled structure whose function is to minimise the trajectory contour errors. Once the three PID controllers are tuned, the system is simulated with and without the third controller. The responses are plotted and compared to analyse the effectiveness of this hybrid controller over the system. They show that the proposed methodology reduces the contour error by a proportion of 70:1.
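A discrete PID loop of the kind used for each axis can be sketched as follows; the gains and the pure-integrator plant below are illustrative stand-ins, not the paper's tuned values or its table model with backlash and elasticity:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple integrator plant (x' = u) to a setpoint of 1.0
dt = 0.01
pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=dt)
x = 0.0
for _ in range(10000):
    x += pid.update(1.0, x) * dt
print(round(x, 3))
```

In the paper's scheme, a third such loop (augmented with fuzzy logic) acts on the cross-coupled contour error between the two axes rather than on either axis alone.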

Abstract:

The influence of peak-dose drug-induced dyskinesia (DID) on manual tracking (MT) was examined in 10 dyskinetic Parkinson's disease patients (DPD) and compared to 10 age/gender-matched non-dyskinetic patients (NDPD) and 10 healthy controls. Whole-body movement (WBM) and MT were recorded with a 6-degrees-of-freedom magnetic motion tracker and forearm rotation sensors, respectively. Subjects were asked to match the length of a computer-generated line with a line controlled via wrist rotation. Results show that DPD patients had greater WBM displacement and velocity than the other groups. All groups displayed increased WBM from rest to MT, but only DPD and NDPD patients demonstrated a significant increase in WBM displacement and velocity. In addition, DPD patients exhibited an excessive increase in WBM, suggesting overflow DID. When two distinct target pace segments were examined (FAST/SLOW), all groups had slight increases in WBM displacement and velocity from SLOW to FAST, but only DPD patients showed significantly increased WBM displacement and velocity from SLOW to FAST. Therefore, it can be suggested that overflow DID was further increased with increased task speed. DPD patients also showed significantly greater error matching target velocity, but no significant difference in error in displacement, indicating that the significantly greater WBM displacement in the DPD group did not have a direct influence on tracking performance. Individual target and performance traces demonstrated this relatively good tracking performance, with the exception of distinct deviations from the target trace that occurred suddenly, followed by quick returns to the target, coherent in time with increased performance velocity. In addition, performance hand velocity was not correlated with WBM velocity in DPD patients, suggesting that the increased error in velocity was not a direct result of WBM velocity.
In conclusion, we propose that over-excitation of motor cortical areas, reported to be present in DPD patients, resulted in overflow DID during voluntary movement. Furthermore, we propose that the increased error in velocity was the result of hypermetric voluntary movements also originating from the over-excitation of motor cortical areas.

Abstract:

We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two-Scale estimator even when its parameters are chosen to minimize the finite-sample mean squared error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006 and find that tick data capture more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for correcting the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than naive estimators would suggest.
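The idea behind subsampling-based inference can be illustrated on the simplest case: estimating the sampling variance of a mean from block subsamples. This is a toy sketch of the principle, not the paper's multivariate high-frequency procedure:

```python
import random

def subsample_variance(data, block):
    """Estimate Var(sqrt(n) * (mean_n - mu)) from non-overlapping block
    subsamples: average of block * (block mean - full-sample mean)^2."""
    n = len(data)
    full_mean = sum(data) / n
    stats = []
    for start in range(0, n - block + 1, block):
        sub = data[start:start + block]
        sub_mean = sum(sub) / block
        stats.append(block * (sub_mean - full_mean) ** 2)
    return sum(stats) / len(stats)

random.seed(0)
# iid draws with variance 4; the scaled-mean variance should come out near 4
data = [random.gauss(0.0, 2.0) for _ in range(20000)]
print(subsample_variance(data, block=200))
```

The appeal, as in the paper, is that no analytic expression for the asymptotic variance is needed: the subsamples themselves reveal the estimator's variability.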

Abstract:

It has become clear over the last few years that many deterministic dynamical systems described by simple but nonlinear equations with only a few variables can behave in an irregular or random fashion. This phenomenon, commonly called deterministic chaos, is essentially due to the fact that we cannot deal with infinitely precise numbers. In these systems, trajectories emerging from nearby initial conditions diverge exponentially as time evolves, and therefore any small error in the initial measurement spreads considerably with time, leading to unpredictable and chaotic behaviour. The thesis work is mainly centred on the asymptotic behaviour of nonlinear and nonintegrable dissipative dynamical systems. It is found that completely deterministic nonlinear differential equations describing such systems can exhibit random or chaotic behaviour. Theoretical studies of this chaotic behaviour can enhance our understanding of various phenomena such as turbulence, nonlinear electronic circuits, erratic behaviour of the heart and brain, fundamental molecular reactions involving DNA, meteorological phenomena, fluctuations in the cost of materials, and so on. Chaos is studied mainly under two different approaches: the nature of the onset of chaos and the statistical description of the chaotic state.
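The exponential divergence of nearby trajectories can be seen in even a one-dimensional example, the logistic map x → r·x·(1−x) in its chaotic regime at r = 4; a minimal sketch:

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories starting a billionth apart
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
separations = [abs(x - y) for x, y in zip(a, b)]
print(separations[0], max(separations))
```

The separation roughly doubles each iteration (the Lyapunov exponent is ln 2 at r = 4), so an initial error of 10⁻⁹ saturates to order one within a few dozen steps; this is exactly the spreading of initial measurement error described above.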

Abstract:

This work addresses the errors that can occur in the analysis of load-bearing structures: the discretization error and the model error. A central tool for examining the local error in a finite element (FE) computation is the Green's function, which, as can be shown, also plays a key role in other areas of structural analysis. To ensure the correct use of Green's functions with FE techniques, their properties and their consistent generation are presented. With the proposed approach, the Lagrange method, it becomes possible to determine a Green's function even for nonlinear problems. A logical consequence of these considerations is the improvement of the influence function through the use of fundamental solutions: the Green's function is split into the fundamental solution and a regular part, which is determined by the FE technique. As the numerical studies show, this method, applied here to the Kirchhoff plate, yields considerably more accurate results than the FE method at comparable computational cost. The Lagrange method offers a general approach to the second type of error, the model error, and can be applied to linear and nonlinear problems. Here, too, the Green's function plays a central role, allowing the effects of parameter changes on selected quantities of interest to be examined.
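The splitting described above can be written as follows (the notation is assumed here for illustration, not taken from the thesis):

```latex
% Green's function split into fundamental solution and regular part
\[
  G(x, y) \;=\; g(x, y) \;+\; u_R(x, y)
\]
% g(x,y): known fundamental solution, carries the singularity exactly
% u_R(x,y): smooth remainder, regular enough to be approximated
%           accurately by the FE method
```

Because the singular part is handled analytically, the FE approximation is spent only on the smooth remainder, which is why the combined influence function is markedly more accurate than a pure FE computation at comparable cost.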

Abstract:

Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
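The flavor of such an uncertainty region can be illustrated in a simplified 2D affine setting (the paper treats fully 3D objects under scaled-orthographic and perspective projection; everything below, including the point coordinates and error bound, is a toy construction): perturb each of the three matched image points to the corners of its bounded-error box, recover the pose from each perturbation, and collect the predicted locations of a fourth model point.

```python
import itertools
import numpy as np

def affine_from_3_matches(model_pts, image_pts):
    """Solve q = A p + t (6 unknowns) exactly from 3 point matches."""
    rows, rhs = [], []
    for (px, py), (qx, qy) in zip(model_pts, image_pts):
        rows.append([px, py, 0, 0, 1, 0]); rhs.append(qx)
        rows.append([0, 0, px, py, 0, 1]); rhs.append(qy)
    a11, a12, a21, a22, tx, ty = np.linalg.solve(
        np.array(rows, float), np.array(rhs, float))
    return np.array([[a11, a12], [a21, a22]]), np.array([tx, ty])

def uncertainty_box(model_pts, image_pts, extra_model_pt, eps):
    """Bounding box of predicted locations of extra_model_pt when each
    matched image point may be off by up to eps in x and y."""
    corners = [(-eps, -eps), (-eps, eps), (eps, -eps), (eps, eps)]
    preds = []
    for c0, c1, c2 in itertools.product(corners, repeat=3):
        perturbed = [np.add(q, d) for q, d in zip(image_pts, (c0, c1, c2))]
        A, t = affine_from_3_matches(model_pts, perturbed)
        preds.append(A @ np.array(extra_model_pt, float) + t)
    preds = np.array(preds)
    return preds.min(axis=0), preds.max(axis=0)

model = [(0, 0), (1, 0), (0, 1)]
image = [(10, 10), (20, 10), (10, 20)]   # pose: scale 10, translate (10,10)
lo, hi = uncertainty_box(model, image, extra_model_pt=(1, 1), eps=0.5)
print(lo, hi)
```

Because the predicted location is linear in the three image points for a fixed model, the extremes occur at the error-box corners, so enumeration bounds the region exactly in this toy case; the paper's linear programming algorithm generalizes this to any number of matches.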

Abstract:

In networks with small buffers, such as optical packet switching (OPS) based networks, the convolution approach is presented as one of the most accurate methods used for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach is a good solution. However, both methods (CA and ECA) present a high computational cost for a high number of connections. Two new mechanisms (UMCA and ISCA) based on the Monte Carlo method are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
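The Monte Carlo idea behind such mechanisms can be sketched for a bufferless link: estimate the probability that the aggregate rate of independent on-off sources exceeds capacity, and check the stochastic error against exact enumeration for a small heterogeneous mix. The source parameters and capacity below are illustrative, not the paper's traffic model:

```python
import itertools
import random

# Heterogeneous on-off sources: (peak rate, probability of being 'on')
sources = [(10, 0.3), (10, 0.3), (15, 0.2), (15, 0.2),
           (20, 0.1), (20, 0.1), (25, 0.15), (25, 0.15)]
capacity = 60

def exact_overflow_prob():
    """Enumerate all on/off states (feasible only for few sources;
    the convolution approach scales this idea to many)."""
    total = 0.0
    for state in itertools.product([0, 1], repeat=len(sources)):
        prob, load = 1.0, 0
        for on, (rate, p_on) in zip(state, sources):
            prob *= p_on if on else (1 - p_on)
            load += rate if on else 0
        if load > capacity:
            total += prob
    return total

def monte_carlo_overflow_prob(samples=200_000, seed=1):
    """Estimate the same probability by random sampling of source states."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        load = sum(rate for rate, p_on in sources if rng.random() < p_on)
        hits += load > capacity
    return hits / samples

print(exact_overflow_prob(), monte_carlo_overflow_prob())
```

The sampling cost is fixed by the number of draws rather than growing with the number of connections, which is the trade-off (speed for a small stochastic error) that the abstract describes.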

Abstract:

In the Radiative Atmospheric Divergence Using ARM Mobile Facility GERB and AMMA Stations (RADAGAST) project we calculate the divergence of radiative flux across the atmosphere by comparing fluxes measured at each end of an atmospheric column above Niamey, in the African Sahel region. The combination of broadband flux measurements from geostationary orbit and the deployment for over 12 months of a comprehensive suite of active and passive instrumentation at the surface eliminates a number of sampling issues that could otherwise affect divergence calculations of this sort. However, one sampling issue that challenges the project is the fact that the surface flux data are essentially measurements made at a point, while the top-of-atmosphere values are taken over a solid angle that corresponds to an area at the surface of some 2500 km2. Variability of cloud cover and aerosol loading in the atmosphere means that the downwelling fluxes, even when averaged over a day, will not be an exact match to the area-averaged value over that larger area, although we might expect them to be an unbiased estimate thereof. The heterogeneity of the surface, for example fixed variations in albedo, further means that there is likely a systematic difference in the corresponding upwelling fluxes. In this paper we characterize and quantify this spatial sampling problem. We bound the root-mean-square error in the downwelling fluxes by exploiting a second set of surface flux measurements from a site that was run in parallel with the main deployment. The differences between the two sets of fluxes lead us to one upper bound on the sampling uncertainty, and their correlation leads to another, which is probably optimistic as it requires certain other conditions to be met.
For the upwelling fluxes we use data products from a number of satellite instruments to characterize the relevant heterogeneities and so estimate the systematic effects that arise from the flux measurements having to be taken at a single point. The sampling uncertainties vary with the season, being higher during the monsoon period. We find that the sampling errors for the daily average flux are small for the shortwave irradiance, generally less than 5 W m−2 under relatively clear skies, but increase to about 10 W m−2 during the monsoon. For the upwelling fluxes, again taking daily averages, systematic errors are of order 10 W m−2 as a result of albedo variability. The uncertainty on the longwave component of the surface radiation budget is smaller than that on the shortwave component in all conditions, but a bias of 4 W m−2 is calculated to exist in the surface-leaving longwave flux.
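A two-site bound of the kind described above can be sketched as follows. If each site's daily flux deviates from the area average with equal variance, the RMS of the between-site differences gives a conservative bound on the point-sampling error, while dividing by √2 assumes the two sites' deviations are independent, which is optimistic when they are positively correlated. The flux series below are synthetic, used only to exercise the calculation:

```python
import math
import random

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

def sampling_error_bounds(site_a, site_b):
    """Bound the point-vs-area sampling error from two nearby sites.
    The RMS of the daily differences is the conservative bound; the
    divided-by-sqrt(2) value assumes equal, independent site errors."""
    diffs = [a - b for a, b in zip(site_a, site_b)]
    upper = rms(diffs)
    optimistic = upper / math.sqrt(2)
    return upper, optimistic

# Synthetic daily-mean downwelling fluxes (W m^-2) at two nearby sites,
# each scattered about a common (unobserved) area-mean series
random.seed(42)
area_mean = [250 + random.gauss(0, 30) for _ in range(365)]
site_a = [f + random.gauss(0, 5) for f in area_mean]
site_b = [f + random.gauss(0, 5) for f in area_mean]
upper, optimistic = sampling_error_bounds(site_a, site_b)
print(round(upper, 1), round(optimistic, 1))
```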