92 results for "Error correction methods"
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
Abstract:
This paper reinvestigates the energy consumption-GDP growth nexus in a panel error correction model using data on 20 net energy importers and exporters from 1971 to 2002. Among the energy exporters, there was bidirectional causality between economic growth and energy consumption in the developed countries in both the short and long run, while in the developing countries energy consumption stimulates growth only in the short run. The former result is also found for energy importers and the latter result exists only for the developed countries within this category. In addition, compared to the developing countries, the developed countries' elasticity response in terms of economic growth from an increase in energy consumption is larger although its income elasticity is lower and less than unitary. Lastly, the implications for energy policy calling for a more holistic approach are discussed. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to that achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
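The ZIB idea described in this abstract can be sketched directly: a site with zero detections is either occupied but always missed, or genuinely unoccupied, and repeated visits let both the occupancy and detection probabilities be estimated. The following is a minimal illustration, not the authors' implementation; the grid-search fit and the example data are assumptions for demonstration.

```python
import math
from itertools import product

def zib_loglik(data, psi, p):
    """Log-likelihood of a zero-inflated binomial occupancy model.

    data: list of (detections y, visits n) per site.
    psi:  probability that a site is occupied.
    p:    per-visit detection probability.
    A site with zero detections is either occupied but always missed
    (a false negative) or genuinely unoccupied.
    """
    ll = 0.0
    for y, n in data:
        binom = math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        like = psi * binom + (1 - psi) * (1 if y == 0 else 0)
        ll += math.log(like)
    return ll

def fit_zib(data, grid=50):
    """Crude grid-search MLE over (psi, p); illustration only."""
    best = None
    for i, j in product(range(1, grid), repeat=2):
        psi, p = i / grid, j / grid
        ll = zib_loglik(data, psi, p)
        if best is None or ll > best[0]:
            best = (ll, psi, p)
    return best[1], best[2]
```

With, say, six visits per site and observations [(3, 6), (0, 6), (2, 6), (0, 6), (1, 6)], the fit yields estimates of both occupancy and detection probability, separating likely true absences from missed detections.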
Abstract:
An investigation was undertaken to test the effectiveness of two procedures for recording boundaries and plot positions for scientific studies on farms on Leyte Island, the Philippines. The accuracy of a Garmin 76 Global Positioning System (GPS) unit and a compass and chain was checked under the same conditions. Tree canopies interfered with the ability of the satellite signal to reach the GPS and therefore the GPS survey was less accurate than the compass and chain survey. Where a high degree of accuracy is required, a compass and chain survey remains the most effective method of surveying land underneath tree canopies, providing operator error is minimised. For a large number of surveys and thus large amounts of data, a GPS is more appropriate than a compass and chain survey because data are easily up-loaded into a Geographic Information System (GIS). However, under dense canopies where satellite signals cannot reach the GPS, it may be necessary to revert to a compass survey or a combination of both methods.
Abstract:
In this work a new approach for designing planar gradient coils for use in an existing MRI apparatus is outlined. A technique that allows for gradient field corrections inside the diameter-sensitive volume is described. These corrections are brought about by making changes to the wire paths that constitute the coil windings, and the approach is hence called the path correction method. The existing well-known target field method is used to gauge the performance of a typical gradient coil. The gradient coil design methodology is demonstrated for planar openable gradient coils that can be inserted into an existing MRI apparatus. The path-corrected gradient coil is compared to the coil obtained using the target field method. It is shown that using a wire path correction with optimized variables, winding patterns that can deliver high magnetic gradient field strengths and large imaging regions can be obtained.
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F-0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F-0 immigrants was improved by using large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (D-LR) appeared to be an effective way to predict whether F-0 immigrants could be identified for a particular pair of populations using a given set of markers.
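The critical-value idea in this abstract can be sketched in miniature: resample scores from known residents and take a low quantile as the threshold below which an individual is flagged as a candidate immigrant. This toy version resamples observed resident log-likelihood scores rather than whole multilocus genotypes, so unlike the method proposed in the paper it does not preserve linkage disequilibrium; the function name and alpha level are illustrative.

```python
import random

def immigrant_threshold(resident_loglik, alpha=0.01, n_boot=1000, seed=1):
    """Monte Carlo critical value for flagging F0 immigrants.

    Resamples resident log-likelihood scores with replacement and
    returns the alpha quantile of the pooled resamples: individuals
    scoring below this threshold are candidate immigrants.
    """
    rng = random.Random(seed)
    pooled = sorted(rng.choice(resident_loglik) for _ in range(n_boot))
    return pooled[int(alpha * n_boot)]
```

Because the resampling is seeded, the threshold is reproducible; in practice the quantile should be derived from resampled genotypes, as the paper argues, so that recent immigrant ancestry and sampling variance are reflected in the null distribution.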
Abstract:
The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in CV can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported in a review of contingent valuation reliability studies that demonstrate important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. This phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method to correct for geometric distortion in MR images arising principally from gradient field non-linearity and magnet field inhomogeneity. The phantom was designed based on the concept that a point in space can be defined using three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this novel design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into a body coil of an MRI scanner (external dimensions of the phantom: 310 mm x 310 mm x 310 mm), and it contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were on the order of 0.1 mm or less, which were less than one tenth of the voxel dimensions of the phantom image. The calculated three-dimensional distortion map, i.e., the differences between the image positions and true positions of the control points, can then be used to compensate for geometric distortion for a full image restoration. It is anticipated that this novel method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where geometric accuracy is highly required, such as in MR neuro-imaging. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
In this paper, we present the correction of the geometric distortion measured in the clinical magnetic resonance imaging (MRI) systems reported in the preceding paper (Part 1) using a 3D method based on the phantom-mapped geometric distortion data. This method allows the correction to be made on phantom images acquired without or with the vendor correction applied. With the vendor's 2D correction applied, the method corrects for both the residual geometric distortion still present in the plane in which the correction method was applied (the axial plane) and the uncorrected geometric distortion along the axis normal to the plane. The evaluation of the effectiveness of the correction using this new method was carried out through analyzing the residual geometric distortion in the corrected phantom images. The results show that the new method can restore the distorted images in 3D nearly to perfection. For all the MRI systems investigated, the mean absolute deviations in the positions of the control points (along x-, y- and z-axes) measured on the corrected phantom images were all less than 0.2 mm. The maximum absolute deviations were all below approximately 0.8 mm. As expected, the correction of the phantom images acquired with the vendor's correction applied in the axial plane performed equally well. Both the geometric distortion still present in the axial plane after applying the vendor's correction and the uncorrected distortion along the z-axis have all been restored. (C) 2004 Elsevier Inc. All rights reserved.
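The core operation behind this pair of papers, mapping a measured image position back to its true position using phantom control-point data, can be sketched with a simple inverse-distance-weighted interpolation of the displacement field. This is a stand-in for the papers' own 3D interpolation scheme; the function name and weighting power are assumptions for illustration.

```python
def correct_position(pos, control_points, power=2.0):
    """Map a measured (distorted) position to an estimated true position.

    control_points: list of (measured_xyz, true_xyz) pairs obtained by
    imaging the phantom. The local displacement at pos is interpolated
    from the control points by inverse-distance weighting.
    """
    weighted = [0.0, 0.0, 0.0]
    total_w = 0.0
    for measured, true in control_points:
        d2 = sum((m - q) ** 2 for m, q in zip(measured, pos))
        if d2 == 0.0:            # query sits exactly on a control point
            return tuple(true)
        w = d2 ** (-power / 2)
        for k in range(3):
            weighted[k] += w * (true[k] - measured[k])
        total_w += w
    return tuple(pos[k] + weighted[k] / total_w for k in range(3))
```

For example, if every control point in the map is displaced by a uniform 1 mm along x, any queried position is corrected by that same 1 mm shift.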
Abstract:
Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home; and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits assessed by self-report was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improve error awareness and functional outcome in real-life settings.
Abstract:
A field study was performed in a hospital pharmacy aimed at identifying positive and negative influences on the process of detection of and further recovery from initial errors or other failures, thus avoiding negative consequences. Confidential reports and follow-up interviews provided data on 31 near-miss incidents involving such recovery processes. Analysis revealed that organizational culture with regard to following procedures needed reinforcement, that some procedures could be improved, that building in extra checks was worthwhile and that supporting unplanned recovery was essential for problems not covered by procedures. Guidance is given on how performance in recovery could be measured. A case is made for supporting recovery as an addition to prevention-based safety methods.
Abstract:
This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
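Of the methods this abstract lists, pro rata is the simplest to state: each participant is charged for system losses in proportion to its share of total power, regardless of where it sits in the network. A minimal sketch follows; the function name and example figures are assumptions, and real schemes typically also split losses between the generator and load sides.

```python
def pro_rata_losses(total_loss_mw, power_mw):
    """Pro rata loss allocation: charge each bus a share of system
    losses proportional to its power, ignoring network location
    entirely -- the very property that can distort competition."""
    total = sum(power_mw.values())
    return {bus: total_loss_mw * p / total for bus, p in power_mw.items()}
```

For 10 MW of losses and generators injecting {'G1': 60, 'G2': 40} MW, G1 is charged 6 MW and G2 4 MW, even if G2 is electrically remote and causes far more of the losses; location-aware methods such as the incremental or loop-based approaches differ precisely here.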