899 results for Error correction methods


Relevance: 30.00%

Abstract:

Vol. 2 has imprint: New York: Printed and published by Isaac Riley, Wall-street, 1807.

Relevance: 30.00%

Abstract:

Two different slug test field methods were conducted in wells completed in a Puget Lowland aquifer and examined for systematic error arising from the water column displacement technique. Slug tests using the standard slug-rod method and the pneumatic method were repeated on the same wells, hydraulic conductivity estimates were calculated according to Bouwer & Rice and Hvorslev, and the results were compared using a non-parametric statistical test. Practical aspects of performing the tests in real-life settings were also weighed in the method comparison. Statistical analysis indicates that the slug-rod method yields hydraulic conductivity values up to 90% larger than the pneumatic method, with at least 95% certainty that the difference is method-related. This confirms, in a real-world setting, the slug-rod bias previously demonstrated by others in synthetic aquifers. In addition to more accurate values, the pneumatic method requires less field labor and less decontamination, and it allows the magnitude of the initial displacement to be controlled, making it the superior slug test procedure.
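For context, the Hvorslev analysis named above derives hydraulic conductivity from the exponential decay of the head displacement after the slug is inserted or removed. The sketch below applies the standard Hvorslev relation to synthetic recovery data; the function and parameter names are illustrative assumptions, and the study's actual field workflow (including the Bouwer & Rice analysis) is not reproduced.

```python
import numpy as np

def hvorslev_k(t, h, r_c, R, L_e):
    """Hydraulic conductivity (m/s) from slug-test recovery data via the
    Hvorslev method (valid for screen aspect ratios L_e / R > 8).

    t   : times since the initial displacement (s)
    h   : head displacement at each time (m)
    r_c : well casing radius (m)
    R   : screen / borehole radius (m)
    L_e : screen length (m)
    """
    # Fit ln(h / h0) = -t / T0; T0 is the basic time lag, i.e. the time
    # for the displacement to decay to 37% of its initial value.
    slope = np.polyfit(t, np.log(h / h[0]), 1)[0]
    T0 = -1.0 / slope
    return r_c**2 * np.log(L_e / R) / (2.0 * L_e * T0)

# Synthetic recovery curve with a basic time lag of 40 s
t = np.linspace(0.0, 120.0, 25)
h = 0.50 * np.exp(-t / 40.0)
print(hvorslev_k(t, h, r_c=0.025, R=0.05, L_e=1.5))   # ~1.8e-5 m/s
```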

Relevance: 30.00%

Abstract:

Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (D_LR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
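As a rough illustration of the resampling idea, the sketch below builds a Monte Carlo null distribution of multilocus genotype log-likelihoods for residents and uses a lower quantile as the critical value for flagging putative F0 immigrants. The data are made up, loci are treated as independent under Hardy-Weinberg equilibrium, and the sketch therefore does not preserve the immigration-induced linkage disequilibrium that the proposed method is designed to retain.

```python
import numpy as np

rng = np.random.default_rng(0)

def genotype_loglik(genotype, freqs):
    """Log-likelihood of a multilocus genotype given per-locus allele
    frequencies, assuming Hardy-Weinberg equilibrium and independent loci."""
    ll = 0.0
    for (a1, a2), p in zip(genotype, freqs):
        ll += np.log(p[a1] * p[a2] * (1 if a1 == a2 else 2))
    return ll

def simulate_resident(freqs):
    """Draw a genotype by sampling two alleles per locus from the home
    population's allele frequencies."""
    return [tuple(rng.choice(len(p), size=2, p=p)) for p in freqs]

# Made-up allele frequencies: 5 loci, 4 alleles each
freqs = [rng.dirichlet(np.ones(4)) for _ in range(5)]

# Monte Carlo null distribution of resident log-likelihoods and its
# 1% quantile, used as the critical value for flagging F0 immigrants
null = np.array([genotype_loglik(simulate_resident(freqs), freqs)
                 for _ in range(10_000)])
threshold = np.quantile(null, 0.01)

candidate = simulate_resident(freqs)
print(threshold, genotype_loglik(candidate, freqs) < threshold)
```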

Relevance: 30.00%

Abstract:

The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported by a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP responses (elicited with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of their WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies.
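The multiple-indicator idea can be illustrated with a standard internal-consistency statistic. The sketch below computes Cronbach's alpha for a hypothetical set of attitude indicators; the paper's own analysis uses a latent-variable model rather than a simple summated scale, so this is only a minimal stand-in for the general approach.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for a set of indicators.

    items : array of shape (n_respondents, k_indicators)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Made-up data: 4 attitude indicators for 200 respondents, each driven by
# one latent factor plus independent noise
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
indicators = latent + 0.8 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(indicators), 2))
```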

Relevance: 30.00%

Abstract:

A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. The phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method for correcting geometric distortion in MR images arising principally from gradient field non-linearity and magnet field inhomogeneity. The phantom was designed on the concept that a point in space can be defined by three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into the body coil of an MRI scanner (external dimensions: 310 mm x 310 mm x 310 mm) and contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were on the order of 0.1 mm or less, which is less than one tenth of the voxel dimensions of the phantom image. The resulting three-dimensional distortion map, i.e., the differences between the image positions and the true positions of the control points, can then be used to compensate for geometric distortion and achieve a full image restoration. It is anticipated that this method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where high geometric accuracy is required, such as MR neuro-imaging.
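A minimal sketch of the distortion-map computation described above: given the true (designed) and measured positions of the control points, the map is simply the set of displacement vectors between them. The grid size and point coordinates below are hypothetical, not the phantom's actual 10,830-point layout.

```python
import numpy as np

def distortion_map(true_pts, measured_pts):
    """Displacement vectors (image position minus true position) for each
    control point, plus simple summary statistics in mm."""
    disp = np.asarray(measured_pts, float) - np.asarray(true_pts, float)
    magnitude = np.linalg.norm(disp, axis=1)
    return disp, {"mean_mm": magnitude.mean(), "max_mm": magnitude.max()}

# Hypothetical 10 x 10 x 10 grid of control points at 30 mm spacing
axes = [np.arange(10) * 30.0] * 3
true_pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
measured = true_pts + np.random.default_rng(2).normal(scale=0.5,
                                                      size=true_pts.shape)
disp, stats = distortion_map(true_pts, measured)
print(stats)
```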

Relevance: 30.00%

Abstract:

In this paper, we present the correction of the geometric distortion measured in the clinical magnetic resonance imaging (MRI) systems reported in the preceding paper (Part 1), using a 3D method based on the phantom-mapped geometric distortion data. The method allows the correction to be applied to phantom images acquired with or without the vendor's correction. With the vendor's 2D correction applied, the method corrects both the residual geometric distortion still present in the plane in which that correction was applied (the axial plane) and the uncorrected geometric distortion along the axis normal to that plane. The effectiveness of the correction was evaluated by analyzing the residual geometric distortion in the corrected phantom images. The results show that the new method can restore the distorted images in 3D nearly to perfection. For all the MRI systems investigated, the mean absolute deviations in the positions of the control points (along the x-, y- and z-axes) measured on the corrected phantom images were all less than 0.2 mm, and the maximum absolute deviations were all below approximately 0.8 mm. As expected, the correction of phantom images acquired with the vendor's correction applied in the axial plane performed equally well: both the geometric distortion still present in the axial plane after the vendor's correction and the uncorrected distortion along the z-axis were removed.
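Continuing the hypothetical example from the previous abstract, the sketch below interpolates a displacement field sampled at control-point grid nodes and subtracts it from measured positions. It corrects point coordinates only; the full image restoration described in the paper would also require resampling the image intensities, which is not shown.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def correct_positions(points, grid_axes, disp_grid):
    """Shift measured (distorted) positions back toward their true
    positions by subtracting a trilinearly interpolated displacement field.

    points    : (n, 3) distorted positions in mm
    grid_axes : tuple of three 1D coordinate arrays of the control-point grid
    disp_grid : (nx, ny, nz, 3) displacement vectors at the grid nodes
    """
    corrected = np.array(points, dtype=float)
    for axis in range(3):
        interp = RegularGridInterpolator(grid_axes, disp_grid[..., axis],
                                         bounds_error=False, fill_value=0.0)
        corrected[:, axis] -= interp(points)
    return corrected

# Toy displacement field: a constant 0.3 / -0.2 / 0.1 mm offset everywhere
grid_axes = tuple(np.arange(10) * 30.0 for _ in range(3))
disp_grid = np.tile([0.3, -0.2, 0.1], (10, 10, 10, 1))
pts = np.array([[45.0, 100.0, 210.0], [12.0, 12.0, 12.0]])
print(correct_positions(pts, grid_axes, disp_grid))
```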

Relevance: 30.00%

Abstract:

Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of the neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits, assessed by self-report, was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improving error awareness and functional outcome in real-life settings.

Relevance: 30.00%

Abstract:

A field study was performed in a hospital pharmacy to identify positive and negative influences on the detection of, and subsequent recovery from, initial errors or other failures before they led to negative consequences. Confidential reports and follow-up interviews provided data on 31 near-miss incidents involving such recovery processes. Analysis revealed that the organizational culture with regard to following procedures needed reinforcement, that some procedures could be improved, that building in extra checks was worthwhile, and that supporting unplanned recovery was essential for problems not covered by procedures. Guidance is given on how performance in recovery could be measured. A case is made for supporting recovery as an addition to prevention-based safety methods.

Relevance: 30.00%

Abstract:

We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes in which each parity check comprises the product of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
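A minimal sketch of a Sourlas-type encoder, assuming the ±1 spin representation: each transmitted symbol is the product of K message spins, and with roughly C checks per bit the rate is about K/C. The random index sets below are illustrative; the specific ensembles assumed in the replica analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def sourlas_encode(message_spins, K, C):
    """Sourlas-type encoding: each transmitted symbol is the product of K
    message spins (each +1 or -1); with N*C/K symbols, every message spin
    takes part in C products on average, giving a code rate of about K/C.
    """
    N = len(message_spins)
    M = N * C // K                                # number of transmitted symbols
    index_sets = [rng.choice(N, size=K, replace=False) for _ in range(M)]
    codeword = np.array([np.prod(message_spins[idx]) for idx in index_sets])
    return index_sets, codeword

xi = rng.choice([-1, 1], size=20)                 # message as +/-1 spins
index_sets, J = sourlas_encode(xi, K=3, C=6)
print(len(J), J[:5])
```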

Relevance: 30.00%

Abstract:

The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of non-zero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity is saturated for many of the codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.
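A minimal sketch of the two-sparse-matrix construction, under the common MacKay-Neal-style assumption that the codeword is t = B^{-1} A s (mod 2) with A and B sparse and B invertible over GF(2). The matrix sizes, densities, and helper names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gf2_inv(B):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination.
    Raises StopIteration if B is singular modulo 2."""
    n = B.shape[0]
    aug = np.concatenate([B % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col])
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]                # row addition modulo 2
    return aug[:, n:]

def encode(s, A, B):
    """Codeword t = B^{-1} A s (mod 2): Boolean (mod-2) sums of message
    bits selected by the sparse matrices A (M x N) and B (M x M)."""
    return gf2_inv(B) @ A @ s % 2

rng = np.random.default_rng(5)
N, M = 8, 12
s = rng.integers(0, 2, size=N)                    # message bits
A = (rng.random((M, N)) < 0.25).astype(int)       # sparse M x N matrix
# Unit lower-triangular B is guaranteed invertible over GF(2)
B = np.tril((rng.random((M, M)) < 0.25).astype(int), -1) + np.eye(M, dtype=int)
t = encode(s, A, B)
assert np.array_equal(B @ t % 2, A @ s % 2)       # receiver-side parity relation
print(t)
```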

Relevance: 30.00%

Abstract:

We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity-check error-correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
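For illustration, the sketch below is a textbook sum-product (belief propagation) decoder over a binary symmetric channel, applied to a small Hamming parity-check matrix rather than a large sparse LDPC matrix. It is not the TAP formulation discussed in the text, only the belief-propagation counterpart in its standard log-likelihood-ratio form.

```python
import numpy as np

def bp_decode(H, y, f, n_iter=50):
    """Sum-product (belief propagation) decoding of a binary linear code
    over a binary symmetric channel with flip probability f.

    H : (m, n) parity-check matrix with 0/1 entries
    y : received word of 0/1 bits
    Returns a hard-decision estimate of the transmitted codeword.
    """
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    Lch = (1 - 2 * np.asarray(y)) * np.log((1 - f) / f)   # channel LLRs
    msg_vc = {e: Lch[e[1]] for e in edges}                # variable -> check
    msg_cv = {e: 0.0 for e in edges}                      # check -> variable

    for _ in range(n_iter):
        # Check-node update: tanh rule over all other incoming messages
        for c, v in edges:
            prod = np.prod([np.tanh(msg_vc[(c, u)] / 2.0)
                            for u in range(n) if H[c, u] and u != v])
            msg_cv[(c, v)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update: channel LLR plus all other check messages
        for c, v in edges:
            msg_vc[(c, v)] = Lch[v] + sum(msg_cv[(d, v)] for d in range(m)
                                          if H[d, v] and d != c)
        # Tentative decision; stop once the syndrome is zero
        post = Lch + np.array([sum(msg_cv[(c, v)] for c in range(m) if H[c, v])
                               for v in range(n)])
        x_hat = (post < 0).astype(int)
        if not np.any(H @ x_hat % 2):
            break
    return x_hat

# Tiny example: (7,4) Hamming parity-check matrix, all-zero codeword sent,
# one bit flipped by the channel
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] = 1
print(bp_decode(H, y, f=0.1))    # expected: [0 0 0 0 0 0 0]
```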

Relevance: 30.00%

Abstract:

Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error-correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date and to improve on the most accurate zero-error noise-level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.

Relevance: 30.00%

Abstract:

We employ the methods of statistical physics to study the performance of Gallager-type error-correcting codes. In this approach, the transmitted codeword comprises Boolean sums of the original message bits selected by two randomly constructed sparse matrices. We show that a broad range of these codes potentially saturate Shannon's bound but are limited by the decoding dynamics used. Other codes show sub-optimal performance but are not restricted by the decoding dynamics. We also show how these codes may be employed as a practical public-key cryptosystem whose performance is competitive with modern cryptographic methods.

Relevance: 30.00%

Abstract:

We study the performance of Low-Density Parity-Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords as Boolean sums of the original message bits selected by two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how a finite system size affects the error probability.
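A small illustration of the Ising mapping mentioned above, under the usual convention S = (-1)^b: a parity check is satisfied (the mod-2 sum of its bits equals 0) exactly when the product of the corresponding spins equals +1, so the number of violated checks can be written as an Ising-type energy. The bit values and check indices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Map bits {0, 1} to Ising spins {+1, -1} via S = (-1)**b.
bits = rng.integers(0, 2, size=12)
spins = 1 - 2 * bits

check = [1, 4, 7, 10]                             # indices of one arbitrary parity check
parity_satisfied = bits[check].sum() % 2 == 0     # Boolean-sum form
spin_satisfied = np.prod(spins[check]) == 1       # Ising-product form
assert parity_satisfied == spin_satisfied
print(parity_satisfied, spin_satisfied)
```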