8 results for Distinguishability
Abstract:
Current methods of understanding microbiome composition and structure rely on accurately estimating the number of distinct species and their relative abundance. Most of these methods require an efficient PCR whose forward and reverse primers bind well to the same large number of identifiable species and produce amplicons that are unique. It is therefore not surprising that the universal primers in current use, designed many years ago, are not as efficient and fail to bind to recently cataloged species. We propose an automated, general method of designing PCR primer pairs that abides by primer design rules and uses a current sequence database as input. Since the method is automated, primers can be designed for targeted microbial species or updated as species are added to or deleted from the database. In silico and laboratory experiments confirm the efficacy of the newly designed primers for metagenomics applications.
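The abstract gives no implementation details, but the following minimal Python sketch illustrates the kind of screening such an automated pipeline could perform: candidate forward/reverse primers are checked against common design rules (length, GC content, melting temperature) and then matched against a sequence database to count how many species both primers would bind. All names, thresholds, and the exact-match binding test are illustrative assumptions, not the authors' code.

    # Hypothetical sketch of automated primer-pair screening (not the authors' implementation).

    def gc_content(seq):
        """Fraction of G/C bases in a primer sequence."""
        return (seq.count("G") + seq.count("C")) / len(seq)

    def melting_temp(seq):
        """Wallace-rule estimate of melting temperature (valid for short oligos)."""
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    def passes_design_rules(primer, min_len=18, max_len=25,
                            gc_range=(0.40, 0.60), tm_range=(50, 65)):
        """Check a primer against simple, commonly cited design rules."""
        return (min_len <= len(primer) <= max_len
                and gc_range[0] <= gc_content(primer) <= gc_range[1]
                and tm_range[0] <= melting_temp(primer) <= tm_range[1])

    def revcomp(seq):
        """Reverse complement of a DNA sequence."""
        return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def species_covered(fwd, rev, database):
        """Count species whose reference sequence contains exact binding sites
        for both the forward primer and the reverse primer."""
        hits = 0
        for species, sequence in database.items():
            if fwd in sequence and revcomp(rev) in sequence:
                hits += 1
        return hits

    # Toy database: species name -> reference sequence (a real run would use
    # full 16S or genome records from a public database).
    fwd, rev = "ATGCGTACGTTAGCTAGCTA", "TTGCACGGATCCTAGGCTAA"
    amplicon_core = "GGATCCAAGCTTGAATTCCG" * 3
    database = {
        "sp1": fwd + amplicon_core + revcomp(rev),   # both primers bind: counted
        "sp2": fwd + amplicon_core,                  # reverse site missing: not counted
    }
    if passes_design_rules(fwd) and passes_design_rules(rev):
        print("coverage:", species_covered(fwd, rev, database))

A real pipeline would also tolerate mismatches and degenerate bases and would check amplicon uniqueness, as the abstract describes.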
Abstract:
Contrast enhancement is an image processing technique whose objective is to preprocess the image so that relevant information can be either seen or further processed more reliably. These techniques are typically applied when the image itself, or the device used for image reproduction, provides poor visibility and distinguishability of the different regions of interest in the image. In most studies the emphasis is on the visualization of image data, but this human-observer-biased goal often results in images which are not optimal for automated processing. The main contribution of this study is to express contrast enhancement as a mapping from N-channel image data to a 1-channel gray-level image, and to devise a projection method which results in an image with minimal error relative to the correct contrast image. The projection, the minimum-error contrast image, possesses the optimal contrast between the regions of interest in the image. The method is based on estimation of the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum-error projection.
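The abstract does not spell out the projection procedure; as a rough illustration of the idea (not the paper's algorithm), the sketch below searches for a 1-D projection of N-channel pixel data that minimizes an estimated Bayes error between two labeled regions, using histogram density estimates. The function names and the random-search strategy are assumptions made for the example.

    # Hypothetical sketch: choose a 1-channel projection of N-channel pixel data
    # that minimizes the estimated Bayes (classification) error between two regions.
    import numpy as np

    def estimated_bayes_error(x1, x2, bins=64):
        """Approximate the Bayes error between two 1-D samples via histogram densities."""
        lo, hi = min(x1.min(), x2.min()), max(x1.max(), x2.max())
        p1, edges = np.histogram(x1, bins=bins, range=(lo, hi), density=True)
        p2, _ = np.histogram(x2, bins=bins, range=(lo, hi), density=True)
        width = edges[1] - edges[0]
        prior1 = len(x1) / (len(x1) + len(x2))
        # Bayes error = integral of min(prior1 * p1, prior2 * p2)
        return np.sum(np.minimum(prior1 * p1, (1 - prior1) * p2)) * width

    def minimum_error_projection(region1, region2, n_trials=2000, seed=0):
        """Random search over unit projection vectors w; return the w whose
        1-D projection gives the lowest estimated Bayes error."""
        rng = np.random.default_rng(seed)
        n_channels = region1.shape[1]
        best_w, best_err = None, np.inf
        for _ in range(n_trials):
            w = rng.normal(size=n_channels)
            w /= np.linalg.norm(w)
            err = estimated_bayes_error(region1 @ w, region2 @ w)
            if err < best_err:
                best_w, best_err = w, err
        return best_w, best_err

    # Toy example: two regions of a 3-channel image with overlapping distributions.
    rng = np.random.default_rng(1)
    region1 = rng.normal([0.2, 0.5, 0.4], 0.1, size=(5000, 3))
    region2 = rng.normal([0.3, 0.5, 0.6], 0.1, size=(5000, 3))
    w, err = minimum_error_projection(region1, region2)
    print("projection vector:", w, "estimated Bayes error:", err)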
Abstract:
We propose a method to measure real-valued time series irreversibility which combines two different tools: the horizontal visibility algorithm and the Kullback-Leibler divergence. This method maps a time series to a directed network according to a geometric criterion. The degree of irreversibility of the series is then estimated by the Kullback-Leibler divergence (i.e. the distinguishability) between the in- and out-degree distributions of the associated graph. The method is computationally efficient and does not require any ad hoc symbolization process. We find that the method correctly distinguishes between reversible and irreversible stationary time series, including analytical and numerical studies of its performance for: (i) reversible stochastic processes (uncorrelated and Gaussian linearly correlated), (ii) irreversible stochastic processes (a discrete flashing ratchet in an asymmetric potential), (iii) reversible (conservative) and irreversible (dissipative) chaotic maps, and (iv) dissipative chaotic maps in the presence of noise. Two alternative graph functionals, the degree and the degree-degree distributions, can be used as the Kullback-Leibler divergence argument. The former is simpler and more intuitive and can be used as a benchmark, but in the case of an irreversible process with null net current, the degree-degree distribution has to be considered to identify the irreversible nature of the series.
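A minimal Python sketch of the approach as described in the abstract: build the directed horizontal visibility graph of the series, then compute the Kullback-Leibler divergence between its out- and in-degree distributions. The O(n^2) construction and the small regularizer for empty degree bins are practical choices for illustration, not taken from the paper.

    # Directed horizontal visibility graph (HVG) irreversibility measure.
    import numpy as np

    def hvg_degrees(x):
        """In- and out-degrees of the directed horizontal visibility graph.
        Nodes i < j are linked (i -> j) iff x[i], x[j] > x[k] for all i < k < j."""
        n = len(x)
        k_out = np.zeros(n, dtype=int)
        k_in = np.zeros(n, dtype=int)
        for i in range(n):
            top = -np.inf                    # running maximum of intermediate values
            for j in range(i + 1, n):
                if x[i] > top and x[j] > top:
                    k_out[i] += 1
                    k_in[j] += 1
                top = max(top, x[j])
                if top >= x[i]:              # no later point can see node i any more
                    break
        return k_in, k_out

    def kld_irreversibility(x, eps=1e-12):
        """KL divergence D(P_out || P_in) of the empirical degree distributions;
        eps is a small regularizer to avoid division by zero on finite samples."""
        k_in, k_out = hvg_degrees(x)
        kmax = max(k_in.max(), k_out.max()) + 1
        p_in = np.bincount(k_in, minlength=kmax) / len(x) + eps
        p_out = np.bincount(k_out, minlength=kmax) / len(x) + eps
        return float(np.sum(p_out * np.log(p_out / p_in)))

    rng = np.random.default_rng(0)
    print("white noise (reversible):   ", kld_irreversibility(rng.normal(size=4000)))
    x = [0.4]
    for _ in range(4000):                    # logistic map at r = 4, a dissipative chaotic map
        x.append(4.0 * x[-1] * (1.0 - x[-1]))
    print("logistic map (dissipative): ", kld_irreversibility(np.array(x)))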
Abstract:
Typically, linear optical quantum computing (LOQC) models assume that all input photons are completely indistinguishable. In practice, there will inevitably be nonidealities associated with the photons and the experimental setup which introduce a degree of distinguishability between photons. We consider a nondeterministic optical controlled-NOT gate, a fundamental LOQC gate, and examine the effect of temporal and spectral distinguishability on its operation. We also consider the effect of utilizing nonideal photon counters, which have finite bandwidth and time response.
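As background (not a formula quoted from this abstract), the standard Hong-Ou-Mandel relation shows why distinguishability matters for gates built on two-photon interference: for photons with wave packets \psi_1 and \psi_2 meeting at a 50:50 beam splitter, the coincidence probability is

    P_{\mathrm{coinc}} = \tfrac{1}{2}\left(1 - |\langle \psi_1 | \psi_2 \rangle|^{2}\right),

so perfect overlap suppresses coincidences entirely, while any temporal or spectral mismatch (|\langle \psi_1 | \psi_2 \rangle| < 1) restores them and degrades the interference that nondeterministic gates such as the controlled-NOT rely on.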
Abstract:
Photonic quantum-information processing schemes, such as linear optics quantum computing, and other experiments relying on single-photon interference inherently require complete photon indistinguishability to enable the desired photonic interactions to take place. Mode mismatch is the dominant cause of photon distinguishability in optical circuits. Here we study the effects of photon wave-packet shape on tolerance against the effects of mode mismatch in linear optical circuits, and show that Gaussian-distributed photons with large bandwidth are optimal. The result is general and holds for arbitrary linear optical circuits, including ones which allow for postselection and classical feed-forward. Our findings indicate that some single-photon sources, frequently cited for their potential application to quantum-information processing, may in fact be suboptimal for such applications.
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
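For reference, the classical Slepian-Wolf result mentioned in the abstract states that two correlated sources X and Y can be compressed separately at rates (R_X, R_Y) and still be decoded jointly whenever

    R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),

so the minimal achievable rate sum equals the joint entropy H(X, Y). The stronger lower bound described in the abstract shows that distributed compression of irreducible quantum product-state sources cannot, in general, reach the analogous rate sum.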
Abstract:
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.
Abstract:
In physics, one attempts to infer the rules governing a system given only the results of imperfect measurements. Hence, microscopic theories may be effectively indistinguishable experimentally. We develop an operationally motivated procedure to identify the corresponding equivalence classes of states, and argue that the renormalization group (RG) arises from the inherent ambiguities associated with the classes: one encounters flow parameters, such as a regulator, a scale, or a measure of precision, which specify representatives in a given equivalence class. This provides a unifying framework and reveals the role played by information in renormalization. We validate this idea by showing that it justifies the use of low-momenta n-point functions as statistically relevant observables around a Gaussian hypothesis. These results enable the calculation of distinguishability in quantum field theory. Our methods also provide a way to extend renormalization techniques to effective models which are not based on the usual quantum-field formalism, and elucidate the relationships between various types of RG.
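As background (a standard definition, not a result from this abstract), distinguishability of quantum states is commonly quantified by the relative entropy

    S(\rho \,\|\, \sigma) = \mathrm{Tr}\left[\rho\,(\log \rho - \log \sigma)\right],

which vanishes only for \rho = \sigma and, by quantum Stein's lemma, sets the optimal rate at which repeated measurements can tell the two states apart; measures of this kind underlie any operational notion of experimentally "effectively indistinguishable" theories.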