923 results for Quantum information theory


Relevance:

80.00%

Publisher:

Abstract:

This dissertation investigates, within the field of Information Technology, the different developments in the automatic interpretation of the semantics of texts and their relationship with Information Retrieval Systems. Starting from a selective bibliographic review, it seeks to systematize the documentation, tracing the main antecedents and techniques in their evolution, synthesizing the fundamental concepts and highlighting the aspects that justify the choice of one procedure over another in solving the problems involved.

Relevance:

80.00%

Publisher:

Abstract:

Tree-reweighted belief propagation is a message-passing method with certain advantages over traditional belief propagation (BP). However, it fails to outperform BP consistently, does not lend itself well to distributed implementation, and has not been applied to distributions with higher-order interactions. We propose a method, uniformly reweighted belief propagation, that mitigates these drawbacks. Having shown in previous work that this method can substantially outperform BP in distributed inference with pairwise interaction models, in this paper we extend it to higher-order interactions and apply it to LDPC decoding, leading to performance gains over BP.

Relevance:

80.00%

Publisher:

Abstract:

The microarray technique is rather powerful, as it allows testing up to thousands of genes at a time, but this produces an overwhelming set of data files containing huge amounts of data, which are quite difficult to pre-process, separate, classify and correlate in order to extract interesting conclusions. Modern machine learning, data mining and clustering techniques based on information theory are needed to read and interpret the information buried in those large data sets. The Independent Component Analysis method can be used to correct data affected by corruption processes, or to filter out the uncorrectable data, and clustering methods can then group similar genes or classify samples. In this paper a hybrid approach is used to obtain a two-way unsupervised clustering of corrected microarray data.

Relevance:

80.00%

Publisher:

Abstract:

Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has been done mainly with procedures that disclose close to the minimal amount of information but are heavily interactive, like Cascade, or with less efficient but also less interactive "just one message is exchanged" procedures, like those based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing implementations to keep several codes if a broad range of QBER needs to be catered for. Real-world implementations of these methods are thus very demanding on computational resources, communication resources or both, to the extent that the latest generation of GHz-clocked QKD systems is finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes for information reconciliation using a low-interactivity, blind protocol that avoids an a priori error-rate estimation. We demonstrate that LDPC codes of length 2×10³ bits are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.

Relevance:

80.00%

Publisher:

Abstract:

Complex networks have been extensively used in the last decade to characterize and analyze complex systems, and they have recently been proposed as a novel instrument for the analysis of spectra extracted from biological samples. Yet the high number of measurements composing each spectrum, and the consequent high computational cost, make a direct network analysis unfeasible. We here present a comparative analysis of three customary feature selection algorithms, including the binning of spectral data and the use of information-theory metrics. The algorithms are compared by assessing the score obtained in a classification task in which healthy subjects and people suffering from different types of cancer are to be discriminated. Results indicate that a feature selection strategy based on Mutual Information outperforms the more classical data binning, while allowing a reduction of the dimensionality of the data set by two orders of magnitude.
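
The abstract does not include an implementation; as a rough illustration of what mutual-information-based feature selection looks like (a minimal sketch on invented toy data, not the authors' pipeline; all names and parameters here are our own), one can estimate I(feature; class) with a simple histogram and rank features by it:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of I(X; Y) in bits between a feature x and labels y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def select_top_features(X, y, k=5, bins=10):
    """Rank the columns of X by mutual information with y; keep the top k."""
    scores = np.array([mutual_information(X[:, j], y, bins) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Toy demonstration: only column 0 carries class information.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = rng.normal(size=(500, 20))
X[:, 0] += 3 * y                          # informative feature
top = select_top_features(X, y, k=3)
```

Keeping only the top-ranked columns is what yields the dimensionality reduction described above.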

Relevance:

80.00%

Publisher:

Abstract:

Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on the Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz–Mancini–Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, a linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% in discriminating AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
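
The quantities named above have short standard definitions. The following sketch (our own minimal illustration on a toy probability vector, not the authors' MEG pipeline) computes the Shannon, Tsallis and Rényi entropies, the Euclidean disequilibrium, and an LMC-style complexity as normalized entropy times disequilibrium:

```python
import numpy as np

def shannon(p):
    """Shannon entropy H = -sum p log p (natural log)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def tsallis(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum p^q) / (q - 1)."""
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

def renyi(p, q=2.0):
    """Rényi entropy R_q = log(sum p^q) / (1 - q)."""
    return float(np.log((p ** q).sum()) / (1.0 - q))

def disequilibrium(p):
    """Squared Euclidean distance to the uniform distribution."""
    return float(((p - 1.0 / p.size) ** 2).sum())

def lmc(p):
    """LMC-style complexity: normalized Shannon entropy times disequilibrium."""
    return shannon(p) / np.log(p.size) * disequilibrium(p)

uniform = np.full(4, 0.25)
peaked = np.array([0.97, 0.01, 0.01, 0.01])
```

A uniform distribution maximizes the entropies and has zero disequilibrium (hence zero LMC complexity), while a peaked distribution has lower entropy but nonzero complexity; irregularity and complexity thus capture different aspects of the signal.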

Relevance:

80.00%

Publisher:

Abstract:

The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the ‘traditional’ set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches for assessing the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis.
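
As a toy illustration of the kind of linear connectivity measure mentioned above (our own minimal sketch on synthetic channel data, not HERMES code), a functional-connectivity matrix can be built from the maximum absolute lagged cross-correlation between channel pairs:

```python
import numpy as np

def fc_crosscorr(data, max_lag=10):
    """FC matrix: max absolute cross-correlation between every pair of
    channels (rows of data), over lags up to max_lag samples."""
    n, m = data.shape
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    fc = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            best = 0.0
            for l in range(-max_lag, max_lag + 1):
                a = z[i, max(0, -l):m - max(0, l)]     # aligned segment of channel i
                b = z[j, max(0, l):m - max(0, -l)]     # channel j shifted by lag l
                best = max(best, abs(np.dot(a, b)) / (m - abs(l)))
            fc[i, j] = fc[j, i] = best
    return fc

# Toy data: channel 1 is channel 0 delayed by 3 samples; channel 2 is noise.
rng = np.random.default_rng(1)
s = rng.normal(size=500)
data = np.vstack([s, np.roll(s, 3), rng.normal(size=500)])
fc = fc_crosscorr(data)
```

Scanning over lags is what lets the measure detect coupling with a conduction delay, which a zero-lag correlation would underestimate.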

Relevance:

80.00%

Publisher:

Abstract:

The security of a passive plug-and-play QKD arrangement is analysed for the case of finite key lengths (finite resources). It is assumed that the eavesdropper has full access to the channel, so the source is treated as unknown and untrusted. To account for the security of the BB84 protocol under collective attacks within the framework of quantum adversaries, a full treatment provides the well-known equations for the secure key rate. A numerical simulation is carried out in which a minimum number of initial parameters, such as the total error sought and the number of pulses, are kept constant, while the remaining parameters are optimized to produce the maximum secure key rate. Two main strategies are addressed: with and without two decoy states, including optimization of the signal-to-decoy relationship.
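
The full finite-key treatment referred to above is involved, but the familiar asymptotic lower bound it reduces to for ideal BB84 under collective attacks, r ≥ 1 − 2h(Q) with h the binary entropy and Q the QBER, is easy to sketch (a minimal illustration of that textbook bound only, not the paper's finite-key simulation):

```python
import math

def h2(p):
    """Binary Shannon entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_rate(qber):
    """Asymptotic secret-key fraction of ideal BB84 under collective attacks:
    one h2 term for error correction, one for privacy amplification."""
    return max(0.0, 1.0 - 2.0 * h2(qber))
```

The bound vanishes at a QBER of roughly 11%, which is why revealing extra information during reconciliation (raising the effective leakage above h2(Q)) directly shortens the attainable distance.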

Relevance:

80.00%

Publisher:

Abstract:

The ability to generate entangled photon pairs over a broad wavelength range opens the door to the simultaneous distribution of entanglement to multiple users in a network by using centralized sources and flexible wavelength-division multiplexing schemes. Here, we show the design of a metropolitan optical network consisting of tree-type access networks, whereby entangled photon pairs are distributed to any pair of users, independent of their location. The network is constructed employing commercial off-the-shelf components and uses the existing infrastructure, which allows for moderate deployment costs. We further develop a channel plan and a network-architecture design to provide a direct optical path between any pair of users, thus allowing classical and one-way quantum communication as well as entanglement distribution. This enables the simultaneous operation of multiple quantum information technologies. Finally, we present a more flexible backbone architecture that pushes away the load limitations of the original network design by extending its reach, number of users and capabilities.

Relevance:

80.00%

Publisher:

Abstract:

The threshold behavior of the transport properties of a random metal in the critical region near a metal–insulator transition is strongly affected by the measuring electromagnetic fields. In spite of the randomness, the electrical conductivity exhibits striking phase-coherent effects due to broken symmetry, which greatly sharpen the transition compared with the predictions of effective medium theories, as previously explained for electrical conductivities. Here broken symmetry explains the sign reversal of the T → 0 magnetoconductance of the metal–insulator transition in Si(B,P), also previously not understood by effective medium theories. Finally, the symmetry-breaking features of quantum percolation theory explain the unexpectedly very small electrical conductivity temperature exponent α = 0.22(2) recently observed in Ni(S,Se)2 alloys at the antiferromagnetic metal–insulator transition below T = 0.8 K.

Relevance:

80.00%

Publisher:

Abstract:

Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states.
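
ICA itself is not spelled out in the abstract; as a rough, self-contained sketch of the idea (a minimal symmetric FastICA with a tanh nonlinearity on synthetic signals, not the authors' algorithm or ERP data), two mixed sources can be unmixed like this:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.
    X: (sensors, samples). Returns unmixed sources, one per row."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))            # whiten the data
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        Y = W @ Z
        G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        W = (G @ Z.T) / m - np.diag(Gp.mean(axis=1)) @ W   # fixed-point update
        s, U = np.linalg.eigh(W @ W.T)                     # symmetric decorrelation
        W = U @ np.diag(s ** -0.5) @ U.T @ W
    return W @ Z

# Toy demonstration: unmix two linearly mixed "sensor" signals.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])   # true sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                   # mixing matrix
S_hat = fastica(A @ S)
```

As in the ERP application, the recovered rows are maximally independent time courses; the corresponding rows of the unmixing transform play the role of the fixed scalp distributions, determined only up to permutation and sign.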

Relevance:

80.00%

Publisher:

Abstract:

Neuronal responses are conspicuously variable. We focus on one particular aspect of that variability: the precision of action potential timing. We show that for common models of noisy spike generation, elementary considerations imply that such variability is a function of the input, and can be made arbitrarily large or small by a suitable choice of inputs. Our considerations are expected to extend to virtually any mechanism of spike generation, and we illustrate them with data from the visual pathway. Thus, a simplification usually made in the application of information theory to neural processing is violated: noise is not independent of the message. However, we also show the existence of error-correcting topologies, which can achieve better timing reliability than their components.