8 results for A-not-B error
in University of Queensland eSpace - Australia
Abstract:
The relationship between spot volume and variation for all protein spots observed on large-format 2D gels, using silver-stain technology and a model system based on mammalian NS0 cell extracts, is reported. By running multiple gels we have shown that the reproducibility of data generated in this way depends on individual protein spot volumes, which are in turn directly correlated with the coefficient of variation. Coefficients of variation across all observed protein spots were highest for low-abundance proteins, which are the primary contributors to process error, and lowest for more abundant proteins. Using the relationship between spot volume and coefficient of variation, we show that it is necessary to calculate variation for individual protein spot volumes. The inherent limitations of silver staining therefore mean that, when assessing significant changes in protein spot volume, the error in each individual spot volume must be considered rather than a single global error estimate. (C) 2003 Elsevier Science (USA). All rights reserved.
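The per-spot analysis described above can be illustrated with a minimal sketch (not the paper's code; the replicate gel volumes below are invented for illustration): compute a coefficient of variation for each spot across replicate gels, rather than one global error figure.

```python
# Illustrative sketch: per-spot coefficient of variation (CV) from
# replicate 2D-gel spot volumes. The array values are hypothetical.
import numpy as np

# rows = replicate gels, columns = protein spots (assumed volumes)
volumes = np.array([
    [120.0, 15.0, 980.0],
    [135.0, 22.0, 1010.0],
    [110.0, 9.0, 995.0],
])

mean = volumes.mean(axis=0)
std = volumes.std(axis=0, ddof=1)   # sample standard deviation
cv = std / mean                     # coefficient of variation per spot

# The low-abundance spot (smallest mean volume) shows the largest CV,
# which is why significance thresholds should be set per spot.
for m, c in sorted(zip(mean, cv)):
    print(f"mean volume {m:8.1f}  CV {c:.3f}")
```

Sorting by mean volume makes the abstract's point visible directly: CV falls as abundance rises.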
Abstract:
The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported by a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.
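The multiple-indicator idea can be sketched with a common internal-consistency index, Cronbach's alpha. This is a generic reliability measure, not the latent-variable model the study actually fits, and the attitude scores below are invented for illustration.

```python
# Illustrative sketch: internal-consistency reliability from multiple
# indicators (Cronbach's alpha). Hypothetical data, not the study's.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of indicator variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# hypothetical attitude indicators for five respondents (1-5 scale)
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [1, 2, 1],
])

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

A high alpha indicates the indicators tap a common latent construct; a latent-variable model, as used in the paper, additionally separates measurement error from the latent WTP distribution itself.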
Abstract:
Paradoxically, while peripheral self-tolerance exists for constitutively presented somatic self Ag, self-peptide recognized in the context of MHC class II has been shown to sensitize T cells for subsequent activation. We have shown that MHC class II+CD86+CD40− DC, which can be generated from bone marrow in the presence of an NF-κB inhibitor, and which constitutively populate peripheral tissues and lymphoid organs in naive animals, can induce Ag-specific tolerance. In this study, we show that CD40− human monocyte-derived dendritic cells (DC), generated in the presence of an NF-κB inhibitor, signal phosphorylation of TCRζ, but induce little proliferation or IFN-γ in vitro. Proliferation is arrested in the G1/G0 phase of the cell cycle. Surprisingly, responding T cells are neither anergic nor regulatory, but are sensitized for subsequent IFN-γ production. The data indicate that signaling through NF-κB determines the capacity of DC to stimulate T cell proliferation. Functionally, NF-κB− CD40− class II+ DC may either tolerize or sensitize T cells. Thus, while CD40− DC appear to prime or prepare T cells, the data imply that signals derived from other cells drive the generation of either Ag-specific regulatory or effector cells in vivo.
Abstract:
We demonstrate a quantum error correction scheme that protects against accidental measurement, using a parity encoding where the logical state of a single qubit is encoded into two physical qubits using a nondeterministic photonic controlled-NOT gate. For the single-qubit input states |0⟩, |1⟩, |0⟩ ± |1⟩, and |0⟩ ± i|1⟩, our encoder produces the appropriate two-qubit encoded state with an average fidelity of 0.88 ± 0.03, and the single-qubit decoded states have an average fidelity of 0.93 ± 0.05 with the original state. We are able to decode the two-qubit state (up to a bit flip) by performing a measurement on one of the qubits in the logical basis; we find that the 64 one-qubit decoded states arising from 16 real and imaginary single-qubit superposition inputs have an average fidelity of 0.96 ± 0.03.
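The ideal action of such a parity encoder can be sketched numerically (this is an assumed model, not the experiment's code): a CNOT applied to |ψ⟩|0⟩ maps a|0⟩ + b|1⟩ to a|00⟩ + b|11⟩, and the fidelity against the ideal encoded state is 1 for a perfect gate.

```python
# Minimal sketch of the ideal parity encoder: |psi>|0> -> a|00> + b|11>.
import numpy as np

# CNOT with the input qubit as control and the ancilla as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def encode(psi):
    """Append an ancilla |0> and apply CNOT: a|0>+b|1> -> a|00>+b|11>."""
    return CNOT @ np.kron(psi, np.array([1, 0], dtype=complex))

# the four single-qubit inputs named in the abstract (taking the + sign)
inputs = [np.array([1, 0], dtype=complex),
          np.array([0, 1], dtype=complex),
          np.array([1, 1], dtype=complex) / np.sqrt(2),
          np.array([1, 1j], dtype=complex) / np.sqrt(2)]

fidelities = []
for psi in inputs:
    enc = encode(psi)
    ideal = np.array([psi[0], 0, 0, psi[1]])   # a|00> + b|11>
    fidelities.append(abs(np.vdot(ideal, enc)) ** 2)
```

The experimental fidelities quoted in the abstract (0.88 ± 0.03) fall below this ideal value of 1 because the photonic CNOT is nondeterministic and imperfect.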
Abstract:
Operator quantum error correction is a recently developed theory that provides a generalized and unified framework for active error correction and passive error-avoiding schemes. In this Letter, we describe these codes using the stabilizer formalism. This is achieved by adding a gauge group to stabilizer codes that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
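The stabilizer bookkeeping behind such constructions can be sketched with the standard symplectic representation of Pauli strings (this sketch is mine, not from the Letter): two Pauli operators commute exactly when their symplectic inner product vanishes, and the 8 generators of Shor's 9-qubit code can be checked for mutual commutation this way.

```python
# Illustrative sketch: Pauli strings as symplectic binary vectors,
# used to verify that the eight stabilizer generators of Shor's
# 9-qubit code mutually commute.
import numpy as np

def pauli_to_symplectic(s: str) -> np.ndarray:
    """'XZI...' -> (x|z) binary vector of length 2n (Y has both bits set)."""
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return np.array(x + z, dtype=int)

def commute(a: str, b: str) -> bool:
    n = len(a)
    va, vb = pauli_to_symplectic(a), pauli_to_symplectic(b)
    # symplectic inner product x_a.z_b + z_a.x_b (mod 2); 0 => commute
    return (va[:n] @ vb[n:] + va[n:] @ vb[:n]) % 2 == 0

# the 8 stabilizer generators of Shor's 9-qubit code
shor_generators = [
    "ZZIIIIIII", "IZZIIIIII",
    "IIIZZIIII", "IIIIZZIII",
    "IIIIIIZZI", "IIIIIIIZZ",
    "XXXXXXIII", "IIIXXXXXX",
]

all_commute = all(commute(a, b)
                  for a in shor_generators for b in shor_generators)
print("all 8 generators commute:", all_commute)
```

In the gauge (subsystem) picture described in the abstract, some of these generators are demoted to gauge operators whose eigenvalues carry no logical information, which is what permits the simpler decoding procedure.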