108 results for PROBABILISTIC TELEPORTATION
Abstract:
This paper proposes a probabilistic principal component analysis (PCA) approach to islanding detection based on wide-area PMU data. According to many power system operators, the increasing probability of uncontrolled islanding operation is one of the biggest concerns with a large penetration of distributed renewable generation. Traditional islanding detection methods, such as RoCoF and vector shift, are however extremely sensitive and may result in many unwanted trips. The proposed probabilistic PCA aims to improve islanding detection accuracy and reduce the risk of unwanted tripping based on PMU measurements, while addressing the practical issue of missing data. The reliability and accuracy of the proposed probabilistic PCA approach are demonstrated using real data recorded in the UK power system by the OpenPMU project. The results show that the proposed method detects islanding accurately, without being falsely triggered by generation trips, even in the presence of missing values.
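As a rough illustration of the PCA side of such a detector, one can flag islanding-like events by the Q-statistic (squared prediction error) of a model trained on normal operation. This is a generic sketch on synthetic data, not the paper's probabilistic PCA, which additionally models missing samples; all names and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for wide-area PMU data: 500 samples of 8
# correlated channels (e.g. frequencies / phase angles at 8 buses).
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
normal_data = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

# Fit an ordinary PCA model on normal (grid-connected) operation.
mean = normal_data.mean(axis=0)
_, _, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
P = vt[:2].T  # loadings of the 2 retained principal components

def spe(x):
    """Squared prediction error (Q-statistic) of one sample."""
    centered = x - mean
    residual = centered - P @ (P.T @ centered)
    return float(residual @ residual)

# Alarm threshold taken from the training residuals (99th percentile).
spes = np.array([spe(x) for x in normal_data])
threshold = np.percentile(spes, 99)

# A disturbance that breaks the learned cross-channel correlation
# structure (as islanding would) produces a large Q-statistic.
islanded = normal_data[0] + np.array([3.0, -3, 0, 0, 3, 0, -3, 0])
alarm = spe(islanded) > threshold
```

The key design point is that detection is driven by the residual subspace, so correlated system-wide events (like a generation trip that moves all channels consistently) stay closer to the model than a genuine break in correlation structure.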
Abstract:
This paper provides a summary of our studies on robust speech recognition based on a new statistical approach, the probabilistic union model. We consider speech recognition given that part of the acoustic features may be corrupted by noise. The union model is a method for basing the recognition on the clean part of the features, thereby reducing the effect of the noise on recognition. In this respect, the union model is similar to the missing-feature method; however, the two methods achieve this end through different routes. The missing-feature method usually requires the identity of the noisy data for noise removal, whereas the union model combines the local features based on the union of random events, reducing the model's dependence on information about the noise. We previously investigated applications of the union model to speech recognition involving unknown partial corruption in frequency bands, in time duration, and in feature streams. Additionally, a combination of the union model with conventional noise-reduction techniques was studied as a means of dealing with a mixture of known or trainable noise and unknown, unexpected noise. In this paper, a unified review of each of these applications is provided in the context of dealing with unknown partial feature corruption, giving the appropriate theory and implementation algorithms along with an experimental evaluation.
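The leave-out combination at the heart of a union-style model can be sketched in a few lines. This is an illustrative reconstruction under simplifying assumptions, not the paper's implementation: `union_combine` and the toy likelihoods are hypothetical, and a real recognizer would work with per-band likelihoods from an acoustic model.

```python
from itertools import combinations
from math import prod

def union_combine(band_likelihoods, order):
    """Order-`order` union of per-band likelihoods: sum of products
    over all subsets that leave `order` bands out, so up to `order`
    corrupted bands cannot veto the overall score. order=0 reduces
    to the usual product rule (all bands assumed clean)."""
    keep = len(band_likelihoods) - order
    return sum(prod(s) for s in combinations(band_likelihoods, keep))

# Toy example: 4 sub-bands, one badly corrupted band (likelihood ~0).
clean = [0.8, 0.7, 0.9, 0.6]
corrupted = [0.8, 0.7, 1e-6, 0.6]

product_score = union_combine(corrupted, 0)  # destroyed by the bad band
union_score = union_combine(corrupted, 1)    # clean bands still count
```

The contrast is the point: under the product rule one near-zero band drives the score to zero, while the order-1 union is dominated by the product of the three reliable bands, without needing to know which band was corrupted.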
Abstract:
We study universal quantum computation using optical coherent states. A teleportation scheme for a coherent-state qubit is developed and applied to gate operations. This scheme is shown to be robust to detection inefficiency.
Abstract:
An entangled two-mode coherent state is studied within the framework of a 2 x 2-dimensional Hilbert space. An entanglement concentration scheme based on joint Bell-state measurements is worked out. When the entangled coherent state is embedded in a vacuum environment, its entanglement is degraded but not totally lost. It is found that the larger the initial coherent amplitude, the faster the entanglement decreases. We investigate a scheme to teleport a coherent superposition state while considering a mixed quantum channel. We find that the decohered entangled coherent state may be useless for quantum teleportation, as its optimal teleportation fidelity falls below the classical limit of 2/3.
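For reference, the classical limit cited above is the standard measure-and-prepare bound for an unknown qubit (applicable here because the entangled coherent state is treated in a 2 x 2-dimensional space):

```latex
\bar{F}_{\mathrm{classical}}
  \;=\; \max_{\text{measure \& prepare}}
        \int \mathrm{d}\psi \,
        \langle \psi | \, \rho_{\mathrm{out}}(\psi) \, | \psi \rangle
  \;=\; \frac{2}{3},
```

so a quantum channel offers a genuine advantage for teleportation only while its average fidelity satisfies $F > 2/3$.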
Abstract:
We study a continuous-variable entangled state composed of two states which are squeezed in two opposite quadratures in phase space. Various entanglement conditions are tested for the entangled squeezed state and we study decoherence models for noise, producing a mixed entangled squeezed state. We briefly describe a probabilistic protocol for entanglement swapping based on the use of this class of entangled states and the main features of a general generation scheme.
Abstract:
Use of the Dempster-Shafer (D-S) theory of evidence to deal with uncertainty in knowledge-based systems has been widely addressed. Several AI implementations based on the D-S theory of evidence or its extensions have been undertaken, but the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) remains a major problem. This paper presents an approach to representing such knowledge, in which Yen's probabilistic multi-set mappings are extended to evidential mappings, and Shafer's partition technique is used to obtain the mass function in a complex evidence space. A new graphic method for describing the knowledge is then introduced, extending the graphic model of Lowrance et al. Finally, an extended framework for evidential reasoning systems is specified.
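As background for this evidence-combination setting, Dempster's rule for combining two mass functions can be sketched as follows; the function name and the toy diagnosis frame are illustrative, not the paper's evidential mappings.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal
    elements are frozensets, renormalizing away the mass assigned
    to conflicting (empty-intersection) pairs."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Toy frame {flu, cold, healthy} with two pieces of evidence.
H = frozenset
m1 = {H({"flu", "cold"}): 0.8, H({"flu", "cold", "healthy"}): 0.2}
m2 = {H({"flu"}): 0.6, H({"flu", "cold", "healthy"}): 0.4}
m = dempster_combine(m1, m2)
```

Each focal element of the combined function is an intersection of focal elements of the inputs, which is why representing heuristic evidence-to-hypothesis mappings (the paper's concern) matters: they determine which focal elements exist in the first place.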
Abstract:
Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidence of formulae which are not axioms, and bounds for the probability of such a formula can then be obtained. In practice an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences which will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency and the significance of their results is discussed. Finally we discuss a new proposal for applying techniques from linear programming to incidence calculus.
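A minimal sketch of the depth-first tree-search idea, assuming equiprobable possible worlds and a single axiom: each level of the tree decides whether one world belongs to the axiom's incidence. The real task must also keep the incidences of logically related axioms consistent; `assign_incidence` is a hypothetical name, and the paper's implementation is in Prolog rather than Python.

```python
def assign_incidence(world_probs, target, tol=1e-9):
    """Depth-first search for a set of possible worlds (an incidence)
    whose total probability matches `target`, or None if impossible."""
    worlds = list(world_probs)

    def dfs(i, acc, chosen):
        if abs(acc - target) <= tol:
            return chosen                      # probability reproduced
        if i == len(worlds) or acc > target + tol:
            return None                        # dead branch: backtrack
        w = worlds[i]
        return (dfs(i + 1, acc + world_probs[w], chosen | {w})  # include w
                or dfs(i + 1, acc, chosen))                     # exclude w

    return dfs(0, 0.0, frozenset())

# Four equiprobable worlds; find an incidence for an axiom with P = 0.5.
worlds = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
inc = assign_incidence(worlds, 0.5)
```

The `acc > target` cut is what makes this a pruned tree search rather than brute-force subset enumeration: any branch whose accumulated probability already overshoots the target is abandoned immediately.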
Abstract:
Face recognition with unknown, partial distortion and occlusion is a practical problem with a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion, based on a probabilistic decision-based neural network enhanced by a statistical method called the posterior union model (PUM). PUM ignores severely mismatched local features and focuses the recognition mainly on the reliable local features, thereby improving robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face-image databases (XM2VTS, AT&T and AR) using test images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared with other approaches and has demonstrated improved performance.
Abstract:
We report the first experimental generation and characterization of a six-photon Dicke state. The produced state shows a fidelity of F=0.56 +/- 0.02 with respect to an ideal Dicke state and violates a witness detecting genuine six-qubit entanglement by 4 standard deviations. We confirm characteristic Dicke properties of our resource and demonstrate its versatility by projecting out four- and five-photon Dicke states, as well as four-photon Greenberger-Horne-Zeilinger and W states. We also show that Dicke states have interesting applications in multiparty quantum networking protocols such as open-destination teleportation, telecloning, and quantum secret sharing.
Abstract:
We present a multimodal detection and tracking algorithm for sensors composed of a camera mounted between two microphones. Target localization is based on color-based change detection in the video modality and on time-difference-of-arrival (TDOA) estimation between the two microphones in the audio modality. The TDOA is computed by multiband generalized cross-correlation (GCC) analysis. The estimated directions of arrival are then postprocessed by a Riccati Kalman filter. The visual and audio estimates are finally integrated, at the likelihood level, into a particle filter (PF) that uses a zero-order motion model and a weighted probabilistic data association (WPDA) scheme. We demonstrate that the Kalman filtering (KF) improves the accuracy of the audio source localization and that WPDA enhances the tracking performance of sensor fusion in reverberant scenarios. The combination of multiband GCC, KF, and WPDA within the particle-filtering framework improves the performance of the algorithm in noisy scenarios. We also show how the proposed audiovisual tracker summarizes the observed scene by generating metadata that can be transmitted to other network nodes instead of the raw images, enabling very low bit-rate communication. Moreover, the generated metadata can also be used to detect and monitor events of interest.
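The TDOA step can be illustrated with a standard PHAT-weighted GCC estimator on a synthetic pure delay; this is a generic single-band sketch, not the paper's multiband implementation, and all names are illustrative.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Estimate the TDOA (in seconds) of `sig` relative to `ref`
    using the PHAT-weighted generalized cross-correlation."""
    n = len(sig) + len(ref)                 # pad to avoid wraparound
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-15          # PHAT: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    # Reorder so index 0 corresponds to lag -max_shift.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs

fs = 16000
rng = np.random.default_rng(1)
src = rng.normal(size=4000)
delay = 23                                  # samples of inter-mic delay
mic1 = np.concatenate((np.zeros(delay), src))   # source arrives later
mic2 = np.concatenate((src, np.zeros(delay)))   # source arrives first
tdoa = gcc_phat_tdoa(mic1, mic2, fs)
```

The PHAT weighting whitens the cross-spectrum so the correlation peak stays sharp under reverberation, which is why GCC-PHAT-style estimators are the usual front end for the direction-of-arrival estimates fed to the Kalman filter.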
Abstract:
Recent evidence suggests that the conjunction fallacy observed in people's probabilistic reasoning is also to be found in their evaluations of inductive argument strength. We presented 130 participants with materials likely to produce a conjunction fallacy either by virtue of a shared categorical or a causal relationship between the categories in the argument. We also took a measure of participants' cognitive ability. We observed conjunction fallacies overall with both sets of materials but found an association with ability for the categorical materials only. Our results have implications for accounts of individual differences in reasoning, for the relevance theory of induction, and for the recent claim that causal knowledge is important in inductive reasoning.