69 results for Error correction coding


Relevance: 20.00%

Abstract:

The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason to prefer the less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype, but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
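
The central effect above, genotyping error biasing haplotype-frequency estimates obtained from unrelated individuals, can be illustrated with a small simulation. The sketch below assumes a toy two-SNP EM haplotype estimator and a per-allele error rate; it is illustrative only, not the analysis methods used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Haplotype order: AB, Ab, aB, ab (1 = reference allele at each SNP)
    HAPS = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])

    def simulate_genotypes(hap_freqs, n, error_rate):
        """Draw n unphased two-SNP genotypes, flipping each allele call
        with probability error_rate to mimic genotyping error."""
        idx = rng.choice(4, size=(n, 2), p=hap_freqs)   # two haplotypes per person
        alleles = HAPS[idx]                              # shape (n, 2 haplotypes, 2 SNPs)
        flips = rng.random(alleles.shape) < error_rate
        alleles = np.where(flips, 1 - alleles, alleles)
        return alleles.sum(axis=1)                       # allele counts per SNP, shape (n, 2)

    def em_hap_freqs(genotypes, iters=50):
        """Plain EM estimate of the four haplotype frequencies; only the
        double heterozygote (1, 1) is phase-ambiguous."""
        f = np.full(4, 0.25)
        for _ in range(iters):
            counts = np.zeros(4)
            for g1, g2 in genotypes:
                if g1 == 1 and g2 == 1:
                    p_cis, p_trans = f[0] * f[3], f[1] * f[2]
                    w = p_cis / (p_cis + p_trans)
                    counts += [w, 1 - w, 1 - w, w]
                else:
                    # Phase is determined: split the allele counts (ceil/floor)
                    # across the two haplotypes and tally each one.
                    for a, b in ((-(-g1 // 2), -(-g2 // 2)), (g1 // 2, g2 // 2)):
                        counts[(1 - a) * 2 + (1 - b)] += 1
            f = counts / counts.sum()
        return f

    true_freqs = [0.45, 0.05, 0.05, 0.45]                # strong LD, common alleles
    for err in (0.0, 0.02):
        est = em_hap_freqs(simulate_genotypes(true_freqs, 1000, err))
        print(f"error rate {err:.2f}: estimated haplotype frequencies {np.round(est, 3)}")
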

Relevance: 20.00%

Abstract:

Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small-timestep updates are performed by interpolating on the displacement at neighbouring large-timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy-conserving algorithms that avoid the first problem of statistical stability; however, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized-alpha method, it is possible to gain stability by dissipating the high-frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
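
The nodal-partition subcycling idea, a large timestep at some nodes and small interpolated substeps at others, can be sketched on a toy two-degree-of-freedom system. The fragment below uses hypothetical stiffnesses and a simple linear interpolation of the neighbouring node's displacement; it is an illustrative sketch, not the momentum- or energy-conserving algorithms discussed in the abstract.

    import numpy as np

    # Hypothetical two-DOF spring-mass model: node 0 has a stiff spring (needs a
    # small stable timestep), node 1 a soft spring (can take a larger timestep),
    # with a coupling spring between them.
    m0, m1 = 1.0, 1.0
    k0, k1, kc = 1.0e4, 10.0, 50.0

    dt_big = 1.0e-3            # timestep for the large-timestep node 1
    n_sub = 10                 # node 0 is subcycled with dt_big / n_sub
    dt_small = dt_big / n_sub

    x = np.array([0.01, 0.0])  # initial displacements
    v = np.zeros(2)            # half-step velocities (central difference / leapfrog form)

    for step in range(2000):
        # Large-timestep node: standard central-difference update.
        a1 = (-k1 * x[1] - kc * (x[1] - x[0])) / m1
        v[1] += dt_big * a1
        x1_old, x1_new = x[1], x[1] + dt_big * v[1]

        # Subcycled node: small steps, linearly interpolating the displacement of
        # the neighbouring large-timestep node, as in the nodal-partition scheme.
        for s in range(n_sub):
            alpha = s / n_sub
            x1_interp = (1.0 - alpha) * x1_old + alpha * x1_new
            a0 = (-k0 * x[0] - kc * (x[0] - x1_interp)) / m0
            v[0] += dt_small * a0
            x[0] += dt_small * v[0]

        x[1] = x1_new

    print("final displacements:", x)
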

Relevance: 20.00%

Abstract:

A plasmid DNA directing transcription of the infectious full-length RNA genome of Kunjin (KUN) virus in vivo from a mammalian expression promoter was used to vaccinate mice intramuscularly. The KUN viral cDNA encoded in the plasmid contained the mutation in the NS1 protein (Pro-250 to Leu) previously shown to attenuate KUN virus in weanling mice. KUN virus was isolated from the blood of immunized mice 3-4 days after DNA inoculation, demonstrating that infectious RNA was being transcribed in vivo; however, no symptoms of virus-induced disease were observed. By 19 days postimmunization, neutralizing antibody was detected in the serum of immunized animals. On challenge with lethal doses of the virulent New York strain of West Nile (WN) virus or wild-type KUN virus intracerebrally or intraperitoneally, mice immunized with as little as 0.1-1 μg of KUN plasmid DNA were solidly protected against disease. This finding correlated with neutralization data in vitro showing that serum from KUN DNA-immunized mice neutralized KUN and WN viruses with similar efficiencies. The results demonstrate that delivery of an attenuated but replicating KUN virus via a plasmid DNA vector may provide an effective vaccination strategy against virulent strains of WN virus.

Relevance: 20.00%

Abstract:

Objective: To develop a 'quality use of medicines' coding system for the assessment of pharmacists' medication reviews and to apply it to an appropriate cohort. Method: A 'quality use of medicines' coding system was developed based on findings in the literature. These codes were then applied to 216 (111 intervention, 105 control) veterans' medication profiles by an independent clinical pharmacist, supported by a clinical pharmacologist, with the aim of assessing the appropriateness of pharmacy interventions. The profiles were provided for veterans participating in a randomised, controlled trial in private hospitals evaluating the effect of medication review and discharge counselling. The reliability of the coding was tested by two independent clinical pharmacists in a random sample of 23 veterans from the study population. Main outcome measure: Interrater reliability was assessed by applying Cohen's kappa score to aggregated codes. Results: The coding system based on the literature consisted of 19 codes. The results from the three clinical pharmacists suggested that the original coding system had two major problems: (a) a lack of discrimination for certain recommendations, e.g. adverse drug reactions, toxicity and mortality may be seen as variations in degree of a single effect, and (b) certain codes, e.g. essential therapy, were in low prevalence. The interrater reliability for an aggregation of all codes into positive, negative and clinically non-significant codes ranged from 0.49 to 0.58 (good to fair). The interrater reliability increased to 0.72-0.79 (excellent) when all negative codes were excluded. Analysis of the sample of 216 profiles showed that the most prevalent recommendations from the clinical pharmacists were a positive impact in reducing adverse responses (31.9%), an improvement in good clinical pharmacy practice (25.5%) and a positive impact in reducing drug toxicity (11.1%). Most medications were assigned the clinically non-significant code (96.6%). The interventions led to a statistically significant difference in pharmacist recommendations in the categories adverse response, toxicity and good clinical pharmacy practice, as measured by the quality use of medicines coding system. Conclusion: It was possible to use the quality use of medicines coding system to rate the quality and potential health impact of pharmacists' medication reviews, and the system did pick up differences between intervention and control patients. The interrater reliability for the summarised coding system was fair, but a larger sample of medication regimens is needed to assess the non-summarised quality use of medicines coding system.
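
Interrater reliability of the aggregated codes is reported as Cohen's kappa. A minimal sketch of that calculation for two raters is given below; the code labels and ratings are hypothetical, not data from the study.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning one categorical code per item."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical aggregated codes (positive / negative / clinically non-significant)
    # assigned to the same five medication-review recommendations by two pharmacists.
    rater_1 = ["positive", "negative", "non-significant", "positive", "non-significant"]
    rater_2 = ["positive", "non-significant", "non-significant", "positive", "non-significant"]
    print(round(cohens_kappa(rater_1, rater_2), 2))   # 0.67 for this toy data
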

Relevance: 20.00%

Abstract:

For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
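
The basic back-to-back idea, driving two independently written implementations with identical inputs and inspecting the residual between their outputs, is easy to sketch. The fragment below uses a hypothetical first-order tank model rather than the paper's model and omits the modified observer that gives the residuals their error-dependent structure.

    import numpy as np

    def model_a(x, inflow, dt, k=0.5):
        """Implementation A of a simple first-order tank model (hypothetical)."""
        return x + dt * (inflow - k * x)

    def model_b(x, inflow, dt, k=0.5):
        """Independently coded implementation B; a coding error here (say,
        k * x * x instead of k * x) would show up in the residuals below."""
        return x + dt * (inflow - k * x)

    rng = np.random.default_rng(1)
    x_a = x_b = 0.0
    residuals = []
    for _ in range(200):
        u = rng.uniform(0.0, 2.0)            # identical input fed to both implementations
        x_a, x_b = model_a(x_a, u, 0.01), model_b(x_b, u, 0.01)
        residuals.append(x_a - x_b)

    print("max |residual|:", max(abs(r) for r in residuals))   # ~0 when the codes agree
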

Relevance: 20.00%

Abstract:

In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
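
The isolation step compares the subspace spanned by the residuals with the subspaces spanned by the candidate feature matrices. Below is a loose numerical sketch of that containment test using made-up feature matrices and residuals; it does not reproduce the paper's definite/possible/impossible classification or the dynamic subset-testing algorithm.

    import numpy as np

    def lies_in_column_space(basis, vectors, tol=1e-8):
        """True if every column of `vectors` lies (numerically) in the column
        space of `basis`, judged by the least-squares projection residual."""
        coeffs, *_ = np.linalg.lstsq(basis, vectors, rcond=None)
        return np.linalg.norm(basis @ coeffs - vectors) < tol

    # Hypothetical feature matrices: the residual directions each candidate
    # coding error would impose (one column per direction).
    feature = {
        "error_1": np.array([[1.0], [0.0], [0.0]]),
        "error_2": np.array([[0.0], [1.0], [1.0]]),
        "error_3": np.array([[1.0], [1.0], [0.0]]),
    }

    # Observed residuals, one column per test input; constructed here so that
    # they fall inside the span of error_2's feature matrix only.
    residuals = np.array([[0.0, 0.0],
                          [2.0, -1.0],
                          [2.0, -1.0]])

    for name, basis in feature.items():
        verdict = "consistent" if lies_in_column_space(basis, residuals) else "ruled out (alone)"
        print(name, "->", verdict)
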