108 results for PROBABILISTIC TELEPORTATION



Relevance: 20.00%

Abstract:

Nonlocal gate operation is based on sharing an ancillary pair of qubits in perfect entanglement. When the ancillary pair is partially entangled, the efficiency of gate operation drops. Using general transformations, we devise probabilistic nonlocal gates, which perform the nonlocal operation conclusively when the ancillary pair is only partially entangled. We show that a controlled purification protocol can be implemented by the probabilistic nonlocal operation.
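The efficiency trade-off can be illustrated numerically. A minimal sketch, assuming an ancilla of the form cos(θ)|00⟩ + sin(θ)|11⟩ and the standard result that a conclusive operation through such a pair succeeds with probability 2·min(cos²θ, sin²θ); the function name is illustrative, not from the paper:

```python
import math

def conclusive_success_probability(theta: float) -> float:
    """Maximum probability that a conclusive (perfect-when-it-succeeds)
    nonlocal operation succeeds, given the partially entangled ancilla
    cos(theta)|00> + sin(theta)|11>.  Standard result, assumed here:
    p = 2 * min(cos^2 theta, sin^2 theta)."""
    a2 = math.cos(theta) ** 2
    b2 = math.sin(theta) ** 2
    return 2 * min(a2, b2)

# Maximally entangled ancilla (theta = pi/4): the operation becomes
# deterministic; any other theta makes it genuinely probabilistic.
print(conclusive_success_probability(math.pi / 4))  # ≈ 1.0
print(conclusive_success_probability(math.pi / 8))  # < 1.0
```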

Relevance: 20.00%

Abstract:

We propose an optimal strategy for continuous-variable teleportation in a realistic situation. We show that the typical imperfect quantum operation can be described as a combination of an asymmetrically decohered quantum channel and perfect apparatuses for other operations. For the asymmetrically decohered quantum channel, we find some counterintuitive results: teleportation does not necessarily get better as the channel is initially squeezed more. We show that decoherence-assisted measurement and transformation may enhance fidelity for an asymmetrically mixed quantum channel.
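As a baseline for the ideal, decoherence-free case, the textbook average fidelity for teleporting coherent states through a two-mode squeezed vacuum channel with squeezing parameter r is F = 1/(1 + e^(-2r)). A quick sketch of that baseline (the realistic, asymmetrically decohered channel studied in the paper can deviate from this monotone behaviour):

```python
import math

def coherent_state_fidelity(r: float) -> float:
    """Average fidelity for teleporting coherent states through an
    ideal (lossless) two-mode squeezed vacuum channel with squeezing r.
    Textbook baseline F = 1 / (1 + e^{-2r}); realistic channels, as the
    abstract notes, need not improve monotonically with squeezing."""
    return 1.0 / (1.0 + math.exp(-2.0 * r))

print(coherent_state_fidelity(0.0))  # → 0.5, the classical limit with no entanglement
```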

Relevance: 20.00%

Abstract:

Quantum teleportation for continuous variables is generally described in phase space using Wigner functions. We study quantum teleportation via a mixed two-mode squeezed state in Hilbert-Schmidt space, using the coherent-state representation and operators. This shows directly how the teleported state is related to the original state.

Relevance: 20.00%

Abstract:

We formulate a conclusive teleportation protocol for a system in a d-dimensional Hilbert space utilizing a positive operator-valued measurement. The conclusive teleportation protocol ensures some perfect teleportation events when the channel is only partially entangled, at the expense of lowering the overall average fidelity. We discuss how much information remains in the inconclusive parts of the teleportation.
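A rough numerical sketch of the trade-off, assuming the standard literature result (not taken from the abstract itself) that exact probabilistic teleportation through a channel with Schmidt coefficients c_i succeeds with probability d · min_i |c_i|²:

```python
import math

def conclusive_probability(schmidt_coeffs):
    """Success probability of conclusive (exact-when-it-succeeds)
    teleportation through a d-dimensional channel with the given
    Schmidt coefficients (assumed normalised).  Assumed standard
    result: p = d * min_i |c_i|^2, equal to 1 only for a maximally
    entangled channel."""
    d = len(schmidt_coeffs)
    assert abs(sum(c * c for c in schmidt_coeffs) - 1.0) < 1e-9
    return d * min(c * c for c in schmidt_coeffs)

# Maximally entangled qutrit channel: always conclusive.
print(conclusive_probability([1 / math.sqrt(3)] * 3))          # ≈ 1.0
# Partially entangled qubit channel: conclusive only 40% of the time.
print(conclusive_probability([math.sqrt(0.8), math.sqrt(0.2)]))  # ≈ 0.4
```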

Relevance: 20.00%

Abstract:

Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run-time, and techniques such as speculative execution used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collect information about the values of variables not already determined, and estimate a probability measure for the sampled values. These measures may then be used to guide optimisations in further analyses of the program. To address the problem of variables with measures as values, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
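The run-time side of the proposal can be sketched as follows: sampled values of an undetermined variable yield an empirical probability measure, which can then drive a speculation decision. All names and the threshold below are illustrative assumptions, not from the paper:

```python
from collections import Counter

def estimate_measure(samples):
    """Build an empirical probability measure from values of a variable
    sampled at run time, as in the profiling feedback loop described
    above (illustrative sketch, not the paper's semantics)."""
    counts = Counter(samples)
    total = len(samples)
    return {value: count / total for value, count in counts.items()}

def speculate(measure, threshold=0.9):
    """Return a value worth speculating on if it dominates the measure,
    otherwise None (threshold is a made-up tuning knob)."""
    value, p = max(measure.items(), key=lambda kv: kv[1])
    return value if p >= threshold else None

measure = estimate_measure([0, 0, 0, 0, 0, 0, 0, 0, 0, 7])
print(speculate(measure))  # → 0 (sampled with probability 0.9)
```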

Relevance: 20.00%

Abstract:

This paper discusses the relations between extended incidence calculus and assumption-based truth maintenance systems (ATMSs). We first prove that managing labels for statements (nodes) in an ATMS is equivalent to producing incidence sets of these statements in extended incidence calculus. We then demonstrate that the justification set for a node is functionally equivalent to the implication relation set for the same node in extended incidence calculus. As a consequence, extended incidence calculus can provide justifications for an ATMS, because implication relation sets are discovered by the system automatically. We also show that extended incidence calculus provides a theoretical basis for constructing a probabilistic ATMS by associating proper probability distributions with assumptions. In this way, we can not only produce labels for all nodes in the system, but also calculate the probability of any such node. The nogood environments can also be obtained automatically. Therefore, extended incidence calculus and the ATMS are equivalent in carrying out inferences at both the symbolic level and the numerical level. This extends a result due to Laskey and Lehner.
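The numerical-level inference can be sketched directly: given independent probabilities on assumptions, the probability of a node is the probability that at least one environment in its ATMS label holds. A brute-force version (the environments and probability values below are made up for illustration):

```python
from itertools import product

def node_probability(label, assumption_probs):
    """Probability of a node whose ATMS label is a list of environments
    (each a frozenset of assumption names), given independent
    probabilities for the assumptions.  Computed by enumerating all
    assumption worlds, which is fine for small systems; a real
    probabilistic ATMS would use smarter bookkeeping."""
    assumptions = sorted(assumption_probs)
    total = 0.0
    for world in product([False, True], repeat=len(assumptions)):
        truth = dict(zip(assumptions, world))
        p = 1.0
        for a in assumptions:
            p *= assumption_probs[a] if truth[a] else 1 - assumption_probs[a]
        # The node holds in this world if some environment is fully true.
        if any(all(truth[a] for a in env) for env in label):
            total += p
    return total

# Node holds if A1 alone holds, or A2 and A3 hold together.
label = [frozenset({"A1"}), frozenset({"A2", "A3"})]
probs = {"A1": 0.5, "A2": 0.8, "A3": 0.5}
print(node_probability(label, probs))  # ≈ 0.7
```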

Relevance: 20.00%

Abstract:

Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
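The Brier score mentioned above is simply the mean squared error between predicted probabilities and binary outcomes; a minimal sketch (the paper reports a normalised variant, which this plain version does not implement):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and
    binary (0/1) outcomes.  Lower is better; it rewards both sharpness
    and reliability, the qualities the AMI classifiers are judged on."""
    assert len(probs) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# A sharp, well-calibrated classifier scores near 0; constant 0.5
# predictions score exactly 0.25 regardless of the outcomes.
print(brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0]))  # ≈ 0.025
print(brier_score([0.5, 0.5], [0, 1]))                  # → 0.25
```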

Relevance: 20.00%

Abstract:

Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.
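The query mechanism can be caricatured with a toy knowledge base mapping structural features to elicited probability intervals. The feature names and the interval-intersection combination rule below are illustrative assumptions, not the paper's actual inference method:

```python
def predict_substrate(compound_features, knowledge_base):
    """Query a toy imprecise-probability knowledge base: each entry maps
    a feature of the base compound structure to an elicited interval
    (lower, upper) for the probability that compounds with that feature
    are substrates.  Intervals are combined by intersection, a crude
    stand-in for the paper's reasoning; None signals conflicting
    elicited bounds."""
    lo, hi = 0.0, 1.0
    for feature in compound_features:
        if feature in knowledge_base:
            f_lo, f_hi = knowledge_base[feature]
            lo, hi = max(lo, f_lo), min(hi, f_hi)
    return (lo, hi) if lo <= hi else None

# Hypothetical elicited knowledge for two substituent positions.
kb = {"hydroxyl_at_R1": (0.6, 0.9), "bulky_at_R2": (0.2, 0.7)}
print(predict_substrate({"hydroxyl_at_R1", "bulky_at_R2"}, kb))  # → (0.6, 0.7)
```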