934 results for PROBABILISTIC TELEPORTATION
Abstract:
We address the distribution of quantum information among many parties in the presence of noise. In particular, we consider how to optimally send to m receivers the information encoded into an unknown coherent state. On one hand, a local strategy is considered, consisting of a local cloning process followed by direct transmission. On the other hand, a telecloning protocol based on nonlocal quantum correlations is analysed. Both strategies are optimized to minimize the detrimental effects due to losses and thermal noise during propagation. The comparison between the local and the nonlocal protocol shows that telecloning is more effective than local cloning for a wide range of noise parameters. Our results indicate that nonlocal strategies can be more robust against noise than local ones, thus being suitable candidates for playing a major role in quantum information networks.
Abstract:
We address the generation, propagation, and application of multipartite continuous variable entanglement in a noisy environment. In particular, we focus our attention on the multimode entangled states achievable by second-order nonlinear crystals, i.e., coherent states of the SU(m,1) group, which provide a generalization of the twin-beam state of a bipartite system. Full inseparability in the ideal case is shown, whereas thresholds for separability are given for the tripartite case in the presence of noise. We find that entanglement of tripartite states is robust against thermal noise, both in the generation process and during propagation. We then consider coherent states of SU(m,1) as a resource for multipartite distribution of quantum information and analyze a specific protocol for telecloning, proving its optimality in the case of symmetric cloning of pure Gaussian states. We show that the proposed protocol also provides the first example of a completely asymmetric 1 -> m telecloning and derive explicitly the optimal relation among the different fidelities of the m clones. The effect of noise in the various stages of the protocol is taken into account, and the fidelities of the clones are analytically obtained as a function of the noise parameters. In turn, this permits the optimization of the telecloning protocol, including its adaptive modifications to the noisy environment. In the optimized scheme the clones' fidelity remains maximal even in the presence of losses (in the absence of thermal noise), for propagation times that diverge as the number of modes increases. In the optimization procedure the prominent role played by the location of the entanglement source is analyzed in detail. Our results indicate that, when only losses are present, telecloning is a more effective way to distribute quantum information than direct transmission followed by local cloning.
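For context, the optimality claimed here for symmetric cloning of pure Gaussian states is usually stated against the standard benchmark for symmetric Gaussian 1 -> m cloning of coherent states; the formula below is that well-known benchmark, quoted for reference rather than taken from the abstract.

```latex
F^{\mathrm{opt}}_{1\to m} \;=\; \frac{m}{2m-1}, \qquad
F^{\mathrm{opt}}_{1\to 2} = \tfrac{2}{3}, \qquad
\lim_{m\to\infty} F^{\mathrm{opt}}_{1\to m} = \tfrac{1}{2},
```

where the m -> infinity limit coincides with the classical measure-and-prepare bound for coherent states.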
Abstract:
We address the nonlocality of fully inseparable three-mode Gaussian states generated either by bilinear three-mode Hamiltonians or by a sequence of bilinear two-mode Hamiltonians. Two different tests revealing nonlocality are considered, in which the dichotomic Bell operator is represented by the displaced parity and by the pseudospin operator, respectively. Three-mode states are also considered as a conditional source of two-mode non-Gaussian states, whose nonlocality properties are analysed. We find that the non-Gaussian character of the conditional states allows stronger violations of Bell's inequalities (by the parity and pseudospin tests) than a conventional twin-beam state. However, the non-Gaussian character is not sufficient to reveal nonlocality through a dichotomized quadrature measurement strategy.
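As a point of reference, the displaced-parity test mentioned here is usually cast as a CHSH-type inequality; the combination below is the standard form from the continuous-variable Bell-test literature, quoted for context rather than taken from the abstract.

```latex
\mathcal{B} \;=\; \Pi(\alpha_1,\beta_1) + \Pi(\alpha_1,\beta_2) + \Pi(\alpha_2,\beta_1) - \Pi(\alpha_2,\beta_2),
\qquad |\mathcal{B}| \le 2 \ \text{(local realism)}, \qquad |\mathcal{B}| \le 2\sqrt{2} \ \text{(quantum)},
```

where Pi(alpha, beta) denotes the correlation of displaced parity measurements with displacements alpha and beta on the two modes, and the quantum bound is Tsirelson's bound.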
Abstract:
We address the generation of fully inseparable three-mode entangled states of radiation by interlinked nonlinear interactions in χ(2) media. We show how three-mode entanglement can be used to realize symmetric and asymmetric telecloning machines, which achieve optimal fidelity for coherent states. An experimental implementation involving a single nonlinear crystal in which the two interactions take place simultaneously is suggested. Preliminary experimental results showing the feasibility and the effectiveness of the interaction scheme with a seeded crystal are also presented. (C) 2004 Optical Society of America.
Abstract:
We present a fully distributed self-healing algorithm, DEX, that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders whose expansion properties hold deterministically, and it works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only O(log n) rounds and O(log n) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table (DHT) on top of DEX, with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.
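DEX itself is not specified in the abstract; as a minimal sketch of the property it maintains, the snippet below (assuming networkx and numpy are available) checks the spectral gap of a constant-degree graph, which is the quantity underlying the "expansion properties" referred to above. The graph, degree, and size are illustrative.

```python
# Minimal sketch (not the DEX algorithm): verify that a constant-degree graph
# behaves like an expander by checking its spectral gap.
import networkx as nx
import numpy as np

def spectral_gap(graph: nx.Graph) -> float:
    """Return 1 - lambda_2, where lambda_2 is the second-largest eigenvalue
    modulus of the random-walk (normalized adjacency) matrix."""
    adjacency = nx.to_numpy_array(graph)
    degrees = adjacency.sum(axis=1)
    walk = adjacency / degrees[:, None]              # row-stochastic transition matrix
    eigenvalues = np.sort(np.abs(np.linalg.eigvals(walk)))[::-1]
    return 1.0 - eigenvalues[1]

# A random d-regular graph is an expander with high probability; DEX maintains
# such expansion deterministically under adversarial churn.
g = nx.random_regular_graph(d=4, n=256, seed=0)
print(f"spectral gap: {spectral_gap(g):.3f}")        # bounded away from 0 for expanders
```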
Abstract:
In collaboration with Airbus-UK, the dimensional growth of small panels during riveting to stiffeners is investigated. The stiffeners are fastened to the panels with rivets, and during this operation the panels expand in the longitudinal and transverse directions. The growth is variable, and the challenge is to control the riveting process so as to minimize this variability. In this investigation, the assembly of the small panels and longitudinal stiffeners has been simulated using low- and high-fidelity nonlinear finite element models. The models have been validated against a limited set of experimental measurements; it was found that more accurate predictions of the riveting process are achieved using high-fidelity explicit finite element models. Furthermore, through a series of numerical simulations and probabilistic analyses, the manufacturing process control parameters that influence panel growth have been identified. Alternative fastening approaches were examined, and it was found that dimensional growth can be controlled by changing the design of the dies used for forming the rivets.
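The probabilistic analyses are not detailed in the abstract; the sketch below only illustrates, with an entirely invented surrogate model and parameter ranges, how a Monte Carlo study can rank process control parameters by their influence on panel growth.

```python
# Hypothetical Monte Carlo sensitivity study: rank riveting process parameters
# by how strongly they correlate with predicted panel growth.  The surrogate
# model and ranges are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
squeeze_force = rng.uniform(20.0, 40.0, n)        # kN, hypothetical range
die_radius = rng.uniform(2.0, 4.0, n)             # mm, hypothetical range
rivet_pitch = rng.uniform(18.0, 25.0, n)          # mm, hypothetical range

# Invented linear surrogate for longitudinal growth (mm), standing in for the FE model.
growth = (0.02 * squeeze_force - 0.15 * die_radius + 0.001 * rivet_pitch
          + rng.normal(0.0, 0.05, n))

for name, sample in [("squeeze force", squeeze_force),
                     ("die radius", die_radius),
                     ("rivet pitch", rivet_pitch)]:
    r = np.corrcoef(sample, growth)[0, 1]
    print(f"{name:14s} correlation with growth: {r:+.2f}")
```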
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) owing to the development of abnormal blood vessels that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry.
REFERENCE STANDARD: fundus fluorescein angiography (FFA). Risk of bias was assessed using the quality assessment of diagnostic accuracy studies tool, version 2. Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and a cost-utility analysis was conducted. An NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted at 3.5%. Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%), respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%), respectively. The strategy of FFA for diagnosis with nurse-technician-led monitoring had the lowest cost (£39,769; 10.473 QALYs) and dominated all others except FFA for diagnosis with ophthalmologist-led monitoring (£44,649; 10.575 QALYs; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at a £30,000 willingness-to-pay threshold.
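The incremental cost-effectiveness ratio quoted above follows directly from the reported cost and QALY differences; the small discrepancy with the published £47,768 is due to rounding of the QALY totals:

```latex
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta Q}
\;=\; \frac{\pounds 44{,}649 - \pounds 39{,}769}{10.575 - 10.473}
\;=\; \frac{\pounds 4{,}880}{0.102} \;\approx\; \pounds 47{,}800 \ \text{per QALY gained.}
```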
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active, respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements) through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
Abstract:
We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are global, in the sense that simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. Our main contribution is an exact algorithm that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP configuration itself, so it can be promptly used with minimal effort. We use our algorithm to identify the largest global perturbation that does not induce a change in the MAP configuration, and we successfully apply this robustness measure in two practical scenarios: the prediction of facial action units with posed images and the classification of multiple real public data sets. A strong correlation between the proposed robustness measure and accuracy is verified in both scenarios.
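The exact robustness algorithm is not described in this abstract; the following toy sketch only illustrates, by brute force on a hypothetical two-node model, what it means for a MAP configuration to survive a global epsilon-contamination of the parameters. It is not the authors' method, and all numbers are made up.

```python
# Toy illustration: check whether the MAP assignment of a chain A -> B survives
# an epsilon-contamination of P(A).  (The paper perturbs all parameters at once
# with an exact algorithm; this sketch perturbs one table by enumeration.)
import itertools
import numpy as np

p_a = np.array([0.6, 0.4])                      # P(A)        (hypothetical)
p_b_given_a = np.array([[0.7, 0.3],             # P(B | A=0)  (hypothetical)
                        [0.2, 0.8]])            # P(B | A=1)

def map_config(pa, pba):
    """Joint assignment (a, b) maximizing P(a) * P(b | a)."""
    return max(itertools.product([0, 1], repeat=2),
               key=lambda ab: pa[ab[0]] * pba[ab[0], ab[1]])

def map_is_robust(eps):
    """MAP unchanged for every contamination (1 - eps) * P(A) + eps * q,
    with q ranging over the vertices of the probability simplex."""
    base = map_config(p_a, p_b_given_a)
    for extreme in np.eye(len(p_a)):
        perturbed = (1 - eps) * p_a + eps * extreme
        if map_config(perturbed, p_b_given_a) != base:
            return False
    return True

print(map_is_robust(0.05), map_is_robust(0.45))  # small perturbation: robust; large: not
```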
Abstract:
This work presents two new score functions based on the Bayesian Dirichlet equivalent uniform (BDeu) score for learning Bayesian network structures. They consider the sensitivity of BDeu to varying parameters of the Dirichlet prior. The scores adopt the most adversarial and the most beneficial priors among those within a contamination set around the symmetric one. We build these scores in such a way that they are decomposable and can be computed efficiently. Because of that, they can be integrated into any state-of-the-art structure learning method that explores the space of directed acyclic graphs and allows decomposable scores. Empirical results suggest that our scores outperform the standard BDeu score in terms of the likelihood of unseen data and in terms of edge discovery with respect to the true network, at least when the training sample size is small. We discuss the relation between these new scores and the accuracy of inferred models. Moreover, our new criteria can be used to identify the amount of data after which learning is saturated, that is, after which additional data are of little help to improve the resulting model.
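As a point of reference, the sketch below computes the standard BDeu local score that the two proposed criteria perturb by varying the Dirichlet prior; the counts and equivalent sample size are hypothetical.

```python
# Standard BDeu local score for one node (the quantity whose sensitivity to the
# Dirichlet prior the proposed scores address).
import numpy as np
from scipy.special import gammaln

def bdeu_local_score(counts: np.ndarray, ess: float) -> float:
    """counts[j, k] = N_ijk: occurrences of the node's k-th state under the
    j-th parent configuration; ess = equivalent sample size of the prior."""
    q, r = counts.shape                      # parent configurations, node states
    alpha_jk = ess / (q * r)                 # symmetric Dirichlet hyperparameters
    alpha_j = ess / q
    n_j = counts.sum(axis=1)
    score = np.sum(gammaln(alpha_j) - gammaln(alpha_j + n_j))
    score += np.sum(gammaln(alpha_jk + counts) - gammaln(alpha_jk))
    return float(score)

counts = np.array([[12, 3], [4, 9]])         # hypothetical contingency counts
print(bdeu_local_score(counts, ess=1.0))
```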
Abstract:
This work proposes an extended version of the well-known tree-augmented naive Bayes (TAN) classifier where the structure learning step is performed without requiring features to be connected to the class. Based on a modification of Edmonds’ algorithm, our structure learning procedure explores a superset of the structures that are considered by TAN, yet achieves global optimality of the learning score function in a very efficient way (quadratic in the number of features, the same complexity as learning TANs). A range of experiments shows that we obtain models with better accuracy than TAN and comparable to that of the state-of-the-art averaged one-dependence estimator classifier.
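The modified Edmonds procedure is not given in the abstract; as a rough sketch of the underlying structure-learning step, the snippet below finds a maximum spanning arborescence over feature-to-feature edges weighted by conditional mutual information, using networkx. The feature names and weight values are hypothetical, and the authors' modification is not reproduced here.

```python
# Sketch of the structure-learning step: a maximum spanning arborescence
# (Edmonds' algorithm) over edges weighted by I(X_i ; X_j | C).
import networkx as nx

cmi = {                                       # hypothetical conditional mutual information
    ("age", "income"): 0.21, ("income", "age"): 0.21,
    ("age", "zip"): 0.05,    ("zip", "age"): 0.05,
    ("income", "zip"): 0.13, ("zip", "income"): 0.13,
}

g = nx.DiGraph()
g.add_weighted_edges_from((u, v, w) for (u, v), w in cmi.items())

# Edmonds' algorithm: heaviest set of edges forming a rooted directed spanning tree.
arborescence = nx.maximum_spanning_arborescence(g, attr="weight")
print(sorted(arborescence.edges()))
```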
Abstract:
Credal nets are probabilistic graphical models that extend Bayesian nets to cope with sets of distributions. An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to accurate inferences. A transformation is also derived to reduce decision making in credal networks based on the maximality criterion to updating. The decision task is proved to have the same complexity as standard inference, being NP^PP-complete for general credal nets and NP-complete for polytrees. Similar results are derived for the E-admissibility criterion. Numerical experiments confirm a good performance of the method.
Abstract:
Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that, for any input, is guaranteed to return a solution within a given factor of the optimum. Remarkably, we show that when the networks have bounded treewidth and a bounded number of states per variable the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, thus being the first known fully polynomial-time approximation scheme for inference in credal networks.
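The variable elimination algorithm itself cannot be reproduced from the abstract; the toy sketch below only illustrates what a posterior interval in a tiny, extensively specified credal network looks like, by enumerating the extreme points of one local credal set. All numbers are invented.

```python
# Brute-force illustration (not the paper's algorithm): lower/upper posterior
# P(A=0 | B=1) in a tiny credal chain A -> B, obtained by enumerating the
# extreme points of the credal set for A.
extremes_a = [[0.3, 0.7], [0.5, 0.5]]        # extreme points of the credal set K(A)
p_b_given_a = [[0.8, 0.2],                   # P(B | A=0)  (kept precise here)
               [0.1, 0.9]]                   # P(B | A=1)

posteriors = []
for p_a in extremes_a:
    evidence = sum(p_a[a] * p_b_given_a[a][1] for a in range(2))   # P(B=1)
    posteriors.append(p_a[0] * p_b_given_a[0][1] / evidence)       # P(A=0 | B=1)

print(f"posterior interval: [{min(posteriors):.3f}, {max(posteriors):.3f}]")
```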
Abstract:
This paper investigates the computation of lower/upper expectations that must cohere with a collection of probabilistic assessments and a collection of judgements of epistemic independence. New algorithms, based on multilinear programming, are presented, both for independence among events and among random variables. Separation properties of graphical models are also investigated.
Abstract:
Credal networks provide a scheme for dealing with imprecise probabilistic models. The inference algorithms often used in credal networks compute the interval of the posterior probability of an event of interest given evidence of a specific kind, namely evidence that describes the current state of a set of variables. These algorithms cannot perform evidential reasoning when the evidence must be processed according to the conditioning rule proposed by R. C. Jeffrey. This paper describes a procedure to integrate evidence with Jeffrey's rule when performing inferences with credal nets.
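For context, Jeffrey's rule updates a distribution when the evidence only fixes new probabilities q_i for the states B_i of an observed partition, rather than pinning one of them down:

```latex
P'(A) \;=\; \sum_i P(A \mid B_i)\, q_i , \qquad \sum_i q_i = 1 .
```

It reduces to standard conditioning when some q_i = 1; in a credal network the corresponding posterior becomes interval-valued, since the update must be considered for every distribution in the set.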
Abstract:
Hidden Markov models (HMMs) are widely used models for sequential data. As with other probabilistic graphical models, they require the specification of precise probability values, which can be too restrictive for some domains, especially when data are scarce or costly to acquire. We present a generalized version of HMMs, which can be quantified with sets of probability distributions instead of single distributions. Our models have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. Efficient inference algorithms are developed to address standard HMM usage such as the computation of likelihoods and most probable explanations. Experiments with real data show that the use of imprecise probabilities leads to more reliable inferences without compromising efficiency.
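For reference, the likelihood computation that these imprecise models generalize is the standard HMM forward recursion sketched below; the paper's models replace the point-valued parameters with sets of distributions and compute bounds instead. All parameter values here are hypothetical.

```python
# Standard forward algorithm for a precise HMM: likelihood of an observation sequence.
import numpy as np

initial = np.array([0.6, 0.4])                     # P(Z_1)           (hypothetical)
transition = np.array([[0.7, 0.3],                 # P(Z_{t+1} | Z_t) (hypothetical)
                       [0.4, 0.6]])
emission = np.array([[0.9, 0.1],                   # P(X_t | Z_t)     (hypothetical)
                     [0.2, 0.8]])

def likelihood(observations):
    """P(x_1, ..., x_T) via the recursion alpha_t = (alpha_{t-1} A) * B[:, x_t]."""
    alpha = initial * emission[:, observations[0]]
    for x in observations[1:]:
        alpha = (alpha @ transition) * emission[:, x]
    return alpha.sum()

print(likelihood([0, 0, 1, 0]))
```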