992 results for BIAS CORRECTION


Relevance:

30.00%

Publisher:

Abstract:

Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.
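
The abstract does not spell out the correction itself, but a minimal sketch of the general idea, assuming `snr` is a (time, range) array of SNR + 1 values and that gates below a chosen noise level contain no aerosol or cloud signal, might look as follows; the function name, polynomial order and threshold values are illustrative, not the instrument's or the paper's actual procedure.

```python
# Illustrative background correction for Doppler lidar SNR profiles.
# Assumptions: `snr` holds SNR + 1, gates below `noise_level` contain noise
# only, and a low-order polynomial captures the background artefact.
import numpy as np

def correct_background(snr, poly_order=2, noise_level=1.01, threshold=1.005):
    corrected = np.empty_like(snr, dtype=float)
    gates = np.arange(snr.shape[1])
    for t, profile in enumerate(snr):
        noise_only = profile < noise_level              # gates assumed signal-free
        coeffs = np.polyfit(gates[noise_only], profile[noise_only], poly_order)
        background = np.polyval(coeffs, gates)          # smooth artefact estimate
        corrected[t] = profile - background + 1.0       # restore nominal noise floor
    signal_mask = corrected > threshold                 # lower post-correction threshold
    return corrected, signal_mask
```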

Relevance:

30.00%

Publisher:

Abstract:

The scope of this study was to estimate calibrated values for dietary data obtained by the Food Frequency Questionnaire for Adolescents (FFQA) and to illustrate the effect of this approach on food consumption data. The adolescents were assessed on two occasions, with an average interval of twelve months: 393 adolescents participated in 2004, and 289 were reassessed in 2005. Dietary data obtained by the FFQA were calibrated using regression coefficients estimated from the average of two 24-hour recalls (24HR) collected in a subsample. The calibrated values were similar to the 24HR reference measurement in the subsample. In both 2004 and 2005, a significant difference was observed between the average consumption levels of the FFQA before and after calibration for all nutrients. With calibrated data, the proportion of schoolchildren whose fiber intake was below the recommended level increased. Calibrated data can therefore be used to obtain adjusted associations, since subjects are reclassified within the predetermined categories.
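
A minimal sketch of the calibration step described above, assuming per-subject intakes for a single nutrient: the 24HR reference from the subsample is regressed on the FFQA values, and the fitted coefficients are then applied to the full sample. The variable and function names are placeholders.

```python
# Illustrative regression calibration of FFQA intakes against the mean of
# two 24-hour recalls from the subsample (one nutrient at a time).
import numpy as np

def calibrate_ffq(ffq_subsample, recall_mean_subsample, ffq_full_sample):
    # Regress the 24HR reference on the FFQA values in the subsample ...
    slope, intercept = np.polyfit(ffq_subsample, recall_mean_subsample, 1)
    # ... and apply the fitted coefficients to every adolescent's FFQA value.
    return intercept + slope * np.asarray(ffq_full_sample)
```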

Relevance:

30.00%

Publisher:

Abstract:

Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of the excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased by −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
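
As a rough illustration of the modeling step, the sketch below fits a log-linear Poisson model of stratum-level death counts on a dose corrected by a given reduction factor, with covariates for age at time of bombing, attained age and gender; the data frame, column names and reduction factor are assumed inputs, and the study's excess-risk formulation and EM misclassification correction are not reproduced here.

```python
# Illustrative Poisson regression of stratum-level mortality on corrected dose.
# Columns deaths, person_years, dose, age_at_bombing, attained_age and male are
# assumed; the reduction factor is taken as given.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_mortality_model(strata: pd.DataFrame, reduction_factor: float):
    strata = strata.assign(dose_corrected=strata["dose"] * reduction_factor)
    X = sm.add_constant(
        strata[["dose_corrected", "age_at_bombing", "attained_age", "male"]]
    )
    model = sm.GLM(
        strata["deaths"],
        X,
        family=sm.families.Poisson(),
        offset=np.log(strata["person_years"]),  # so coefficients describe rates
    )
    return model.fit()
```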

Relevance:

30.00%

Publisher:

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread, and there is growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, which uses repeated visits to the same site to estimate the rate of false-negative errors and to correct estimates of the probability of occurrence for those errors. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. With three repeated visits the method eliminates the bias, but estimates are relatively imprecise; six repeated visits improve precision to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
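
A bare-bones sketch of the zero-inflated binomial idea, with constant occupancy probability psi and detection probability p (the paper's covariate structure is omitted): each site is occupied with probability psi and, if occupied, each of the repeated visits detects the species with probability p, so sites never detected may be either truly absent or occupied-but-missed. Function and variable names are illustrative.

```python
# Illustrative constant-parameter zero-inflated binomial (occupancy) model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def zib_negloglik(params, detections, n_visits):
    psi, p = 1.0 / (1.0 + np.exp(-params))              # logits -> probabilities
    lik_occupied = psi * binom.pmf(detections, n_visits, p)
    lik_true_absence = (1.0 - psi) * (detections == 0)  # zero counts may be true absences
    return -np.sum(np.log(lik_occupied + lik_true_absence + 1e-300))

def fit_zib(detections, n_visits):
    """detections: number of detections per site out of n_visits repeated visits."""
    result = minimize(zib_negloglik, x0=np.zeros(2), args=(np.asarray(detections), n_visits))
    psi_hat, p_hat = 1.0 / (1.0 + np.exp(-result.x))
    return psi_hat, p_hat                               # occupancy and detection estimates
```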

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks are often deployed in large numbers over a large geographical region in order to monitor the phenomena of interest. Sensors used in sensor networks often suffer from random or systematic errors such as drift and bias, and even if they are calibrated at the time of deployment, they tend to drift as time progresses. Consequently, progressive manual calibration of such a large-scale sensor network becomes impossible in practice. In this article, we address this challenge by proposing a collaborative framework to automatically detect and correct the drift in order to keep the data collected from these networks reliable. We propose a novel scheme that uses geospatial estimation-based interpolation techniques on measurements from neighboring sensors to collaboratively predict the value of the phenomenon being observed. The predicted values are then used iteratively to correct the sensor drift by means of a Kalman filter. Our scheme can be implemented in a centralized as well as a distributed manner to detect and correct the drift generated in the sensors. For the centralized implementation of our scheme, we compare several kriging- and nonkriging-based geospatial estimation techniques in combination with the Kalman filter, and show the superiority of the kriging-based methods in detecting and correcting the drift. To demonstrate the applicability of our distributed approach in a real-world application scenario, we implement our algorithm on a network consisting of Wireless Sensor Network (WSN) hardware. We further evaluate scenarios with single as well as multiple drifting sensors to show the effectiveness of our algorithm for detecting and correcting drift. We also address the high power usage for data transmission among neighboring nodes, which reduces network lifetime in the distributed approach, by proposing two power-saving schemes. Finally, we compare our algorithm with a blind calibration scheme from the literature and demonstrate its superiority in detecting both linear and nonlinear drifts.
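
A simplified sketch of the drift-correction loop, with inverse-distance weighting standing in for the kriging step: a spatial prediction from neighboring sensors serves as a pseudo-reference, and a scalar Kalman filter tracks each node's drift as the smoothed difference between its reading and that prediction. All names and noise parameters are illustrative, not those of the article.

```python
# Illustrative drift tracking: inverse-distance weighting stands in for kriging,
# and a scalar Kalman filter models each node's drift as a random walk.
import numpy as np

def predict_from_neighbors(neighbor_values, distances):
    weights = 1.0 / np.asarray(distances, dtype=float)
    return np.sum(weights * np.asarray(neighbor_values)) / np.sum(weights)

class DriftTracker:
    def __init__(self, process_var=1e-4, measurement_var=0.25):
        self.drift, self.var = 0.0, 1.0      # drift estimate and its uncertainty
        self.q, self.r = process_var, measurement_var

    def update(self, reading, neighbor_prediction):
        self.var += self.q                   # predict: drift follows a random walk
        innovation = (reading - neighbor_prediction) - self.drift
        gain = self.var / (self.var + self.r)
        self.drift += gain * innovation      # correct the drift estimate
        self.var *= 1.0 - gain
        return reading - self.drift          # drift-corrected measurement
```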

Relevance:

30.00%

Publisher:

Abstract:

Damage measures in securities fraud cases are very imprecise because they are based on security price changes that reflect both the correction of previous misrepresentation and other, independent information. Consequently, potential plaintiffs have a valuable “free option” to decide whether or not to file suit, and average damage awards are greater than actual damages, much greater when markets are volatile. The “Private Securities Litigation Reform Act of 1995” was intended to curb abusive litigation and to address the problem of excessive damage awards. Motivated by a misdiagnosis that excess awards are due to temporary price drops, the Act limits damages to the difference between the purchase price and the time-averaged trading price from the release of the corrective information until 90 days later or until the sale of the security, whichever comes first. Unfortunately, the Act's modified measure of damages suffers from a more severe free-option problem than did the traditional measure. The Act also introduced a further option to time the sale of the security; the effects of these options may be mitigated by the impact of the positive drift in stock prices over time, if the time-averaged price is not adjusted for market movements. As a result, the bias can be larger or smaller under the new Act, depending on how severe the free-option problem is. We propose an alternative approach to addressing the issue of excessive damages: courts should adopt a threshold of measured damages below which no damage would be awarded. The threshold would depend on several factors, most notably the volatility of the stock in the period under question. That is, damages will be awarded only if measured damages exceed the threshold, and awards would be capped by the formula presented in the Reform Act.
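
For concreteness, the sketch below computes the Act's damages cap (purchase price minus the mean trading price from the corrective disclosure until sale or day 90, whichever comes first) and applies a volatility-dependent threshold of the kind proposed; the threshold rule and all parameter values are purely illustrative, since the abstract does not prescribe a specific formula.

```python
# Illustrative damage calculations: the Act's cap plus a volatility threshold.
import numpy as np

def pslra_cap(purchase_price, post_disclosure_prices, sale_day=None):
    # Mean trading price from the corrective disclosure until sale or day 90,
    # whichever comes first (the Act's look-back measure).
    end = 90 if sale_day is None else min(sale_day, 90)
    return max(purchase_price - np.mean(post_disclosure_prices[:end]), 0.0)

def awarded_damages(measured_damages, cap, daily_volatility, k=2.0):
    # Award nothing unless measured damages exceed a volatility-scaled threshold;
    # the scaling rule here is a placeholder, not the authors' formula.
    threshold = k * daily_volatility * np.sqrt(90)
    return 0.0 if measured_damages <= threshold else min(measured_damages, cap)
```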

Relevance:

20.00%

Publisher:

Abstract:

Hospital-acquired infections (HAI) are costly, but many are avoidable. Evaluating prevention programmes requires data on their costs and benefits. Estimating the actual costs of HAI (a measure of the cost savings due to prevention) is difficult because HAI changes cost by extending patient length of stay, yet length of stay is itself a major risk factor for HAI. This endogeneity bias can confound attempts to measure the cost of HAI accurately. We propose a two-stage instrumental variables estimation strategy that explicitly controls for the endogeneity between risk of HAI and length of stay. We find that a 10% reduction in the ex ante risk of HAI results in an expected saving of £693 (US$984).
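
A minimal sketch of the two-stage approach, assuming `y` is observed cost, `endogenous` is the HAI indicator, and `instruments`/`exogenous` are arrays of instrument and control variables: stage one projects the endogenous regressor onto the instruments and controls, and stage two regresses cost on the fitted values. The variable names are placeholders, not the study's actual data.

```python
# Illustrative two-stage least squares: correct for endogeneity between HAI
# and length of stay when estimating the cost of HAI.
import numpy as np

def two_stage_least_squares(y, endogenous, instruments, exogenous):
    n = len(y)
    ones = np.ones((n, 1))
    # Stage 1: project the endogenous regressor (risk of HAI) onto the
    # instruments plus the exogenous controls.
    Z = np.hstack([ones, instruments, exogenous])
    endog_hat = Z @ np.linalg.lstsq(Z, endogenous, rcond=None)[0]
    # Stage 2: regress cost on the fitted values and the exogenous controls.
    X = np.hstack([ones, endog_hat.reshape(-1, 1), exogenous])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta  # beta[1] is the endogeneity-corrected effect of HAI on cost
```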

Relevance:

20.00%

Publisher: