274 results for Randomness


Relevance:

20.00%

Publisher:

Abstract:

We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes. The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more under more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
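As a rough guide to where figures of this size come from, the following sketch applies the standard leftover-hash-lemma bound for a 2-universal hash extractor; the key length and security level used are illustrative and are not taken from the paper's tables.

    # Minimal sketch: leftover hash lemma for a 2-universal hash extractor.
    # Extracting an ell-bit key from a source with min-entropy k leaves the key
    # within statistical distance 2**(-(k - ell)/2) of uniform, so roughly
    # k >= ell + 2*security_bits is required.

    def min_entropy_needed(key_bits, security_bits):
        """Min-entropy required so a key_bits-long extracted key is
        2**(-security_bits)-close to uniform."""
        return key_bits + 2 * security_bits

    def security_achieved(min_entropy, key_bits):
        """Statistical-distance security level (in bits) for a given source."""
        return (min_entropy - key_bits) / 2

    if __name__ == "__main__":
        # Illustrative numbers only: a 128-bit key at 80-bit security already
        # needs 128 + 2*80 = 288 bits of min-entropy in the shared secret,
        # the same ballpark as the 292-bit figure quoted above.
        print(min_entropy_needed(key_bits=128, security_bits=80))  # 288
        print(security_achieved(min_entropy=292, key_bits=128))    # 82.0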

Relevance:

20.00%

Publisher:

Abstract:

Data quality has become a major concern for organisations. The rapid growth in the size and technology of databases and data warehouses has brought significant advantages in accessing, storing, and retrieving information. At the same time, great challenges arise in maintaining high data quality under rapid data throughput and heterogeneous access. Yet, despite the importance of data quality, the literature has usually reduced data quality to detecting and correcting poor data such as outliers and incomplete or inaccurate values. As a result, organisations are unable to efficiently and effectively assess data quality. An accurate and proper data quality assessment method would enable users to benchmark their systems and monitor their improvement. This paper introduces a granule-mining approach for measuring the degree of randomness of error data, which enables decision makers to conduct accurate quality assessment and locate the most severely affected data, thereby providing an accurate estimation of the human and financial resources needed for quality improvement tasks.
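The abstract does not spell out the granule-mining procedure, so the sketch below is only one illustrative reading: group records into granules by chosen attributes, then score each granule by its error rate and by the binary entropy of its error indicator as a crude measure of how random its errors are. The function and argument names (granule_error_profile, key_attrs, is_error) are hypothetical.

    # Illustrative sketch only; not the paper's algorithm.
    import math
    from collections import defaultdict

    def granule_error_profile(records, key_attrs, is_error):
        """Group records into granules and profile each granule's errors."""
        granules = defaultdict(list)
        for rec in records:
            granules[tuple(rec[a] for a in key_attrs)].append(bool(is_error(rec)))
        profile = {}
        for key, flags in granules.items():
            p = sum(flags) / len(flags)  # error rate within the granule
            h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
            profile[key] = {"size": len(flags), "error_rate": p, "entropy": h}
        return profile

    # Ranking granules by error_rate then locates the most severely affected
    # data before human and financial resources are committed to cleaning.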

Relevance:

20.00%

Publisher:

Abstract:

The probability distribution of the displacement x of a particle moving in a one-dimensional continuum is derived exactly for the general case of combined static and dynamic Gaussian randomness of the applied force. The dynamics of the particle is governed by the high-friction limit of Brownian motion discussed originally by Einstein and Smoluchowski. In particular, the mean square displacement of the particle varies as t² as t → ∞. This ballistic motion induced by the disorder does not give rise to a 1/f power spectrum, contrary to recent suggestions based on the above dynamical model.
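A minimal numerical sketch of this behaviour, with illustrative parameters rather than the paper's, follows: overdamped (high-friction) motion dx/dt = F/γ + √(2D)·ξ(t), where the static force F is drawn once per realization from a Gaussian and ξ is dynamic Gaussian noise, produces a mean square displacement that grows ballistically.

    # Minimal sketch (illustrative parameters): quenched Gaussian force plus
    # thermal noise in the high-friction (Einstein-Smoluchowski) limit.
    import numpy as np

    rng = np.random.default_rng(0)
    n_real, n_steps, dt = 2000, 2000, 0.01
    gamma, D, sigma_F = 1.0, 1.0, 1.0

    F = rng.normal(0.0, sigma_F, size=n_real)   # static (quenched) disorder
    x = np.zeros(n_real)
    msd = np.empty(n_steps)
    for i in range(n_steps):
        x += (F / gamma) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_real)
        msd[i] = np.mean(x**2)

    t = dt * np.arange(1, n_steps + 1)
    # Expect msd ~ (sigma_F/gamma)**2 * t**2 + 2*D*t, i.e. ballistic growth at
    # long times; the fitted log-log slope approaches 2.
    print(np.polyfit(np.log(t[-500:]), np.log(msd[-500:]), 1)[0])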

Relevance:

20.00%

Publisher:

Abstract:

An invariant imbedding method yields exact analytical results for the distribution of the phase θ(L) of the reflection amplitude and for low-order resistance moments ⟨ρⁿ⟩ for a disordered conductor of length L in the quasi-metallic regime L <

Relevance:

20.00%

Publisher:

Abstract:

When the size (L) of a one-dimensional metallic conductor is less than the correlation length λ⁻¹ of the Gaussian random potential, one expects transport properties to show ballistic behaviour. Using an invariant imbedding method, we study the exact distribution of the resistance, of the phase θ of the reflection amplitude of an incident electron of wave number k₀, and of dθ/dk₀, for λL ≪ 1. The resistance is non-self-averaging, and the n-th resistance moment varies periodically as (1 − cos 2k₀L)ⁿ. The charge-fluctuation noise, determined by the distribution of dθ/dk₀, is constant at low frequencies.
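The invariant-imbedding derivation is not reproduced here, but the quoted periodic dependence can be checked numerically under a simplifying assumption: for λL ≪ 1 the random potential is essentially constant across the sample, so it can be modelled as a rectangular barrier of Gaussian random height V, with Landauer resistance ρ = R/T. Units ħ = 2m = 1 and all parameter values below are assumptions for illustration.

    # Minimal numerical check (units hbar = 2m = 1, illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(1)
    k0, sigma_V, n_samples = 5.0, 0.1, 200_000
    E = k0**2

    def mean_rho_n(L, n):
        """Monte Carlo estimate of <rho^n> for a weak rectangular barrier."""
        V = rng.normal(0.0, sigma_V, n_samples)   # weak disorder: |V| << E
        kp = np.sqrt(E - V)                       # wave number inside the sample
        rho = V**2 * np.sin(kp * L)**2 / (4 * E * (E - V))   # rho = R/T
        return np.mean(rho**n)

    for L in (0.2, 0.5, 1.0):
        # The ratio should be roughly constant in L, consistent with <rho>
        # varying as (1 - cos 2*k0*L); higher moments go as its n-th power.
        print(L, mean_rho_n(L, 1) / (1 - np.cos(2 * k0 * L)))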

Relevance:

20.00%

Publisher:

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], which obtained curve samplers with near-optimal randomness complexity.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m · log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
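The sketch below is a toy illustration (small prime field, hypothetical parameters, and not the thesis construction) of the restriction property mentioned above: a degree-d curve in F_q^m is given by m univariate coordinate polynomials of degree at most d, so restricting a total-degree-D polynomial to it yields a univariate polynomial of degree at most d·D, which d·D + 1 points determine.

    # Toy sketch of the restriction property over a small prime field.
    import random

    q, m, d = 101, 3, 2      # field F_q, dimension m, curve degree d

    def poly_eval(coeffs, t):                 # univariate poly, low degree first
        return sum(c * pow(t, i, q) for i, c in enumerate(coeffs)) % q

    # A random degree-d curve: m coordinate polynomials of degree <= d.
    curve = [[random.randrange(q) for _ in range(d + 1)] for _ in range(m)]

    def curve_point(t):
        return [poly_eval(c, t) for c in curve]

    def f(x):                                 # a total-degree-3 test polynomial
        return (x[0] * x[1] * x[2] + 5 * x[0] ** 2 + 7) % q
    D = 3

    # g(t) = f(curve(t)) has degree <= d*D, so d*D + 1 values determine it.
    ts = list(range(d * D + 1))
    values = [f(curve_point(t)) for t in ts]

    def lagrange_at(ts, values, t):           # Lagrange interpolation mod q
        total = 0
        for i, ti in enumerate(ts):
            num, den = 1, 1
            for j, tj in enumerate(ts):
                if j != i:
                    num = num * (t - tj) % q
                    den = den * (ti - tj) % q
            total = (total + values[i] * num * pow(den, q - 2, q)) % q
        return total

    t_test = 50
    assert lagrange_at(ts, values, t_test) == f(curve_point(t_test))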

Relevance:

20.00%

Publisher:

Abstract:

This paper is about performance assessment in serious games. We conceive of serious gaming as a process of player-led decision making. Starting from combinatorics and item-response theory, we provide an analytical model that makes explicit to what extent observed player performances (decisions) are blurred by chance processes (guessing behaviors). We found large effects both theoretically and practically. In two existing serious games, random guess scores were found to explain up to 41% of total scores. Monte Carlo simulation of random game play confirmed the substantial impact of randomness on performance. For valid performance assessments, be it in-game or post-game, the effects of randomness should be included to produce re-calibrated scores that can reasonably be interpreted as the players' achievements.
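A minimal sketch of the guessing correction, assuming the game's decisions behave like multiple-choice items (the paper's exact game model is not reproduced here): compute the expected score of a purely random player, confirm it by Monte Carlo, and rescale observed scores so that random play maps to zero.

    # Minimal sketch; the item structure below is hypothetical.
    import random

    def expected_random_score(items):
        """items: list of (points, n_options); a random player answers each
        item correctly with probability 1/n_options."""
        return sum(points / n_options for points, n_options in items)

    def corrected_score(observed, items):
        """Re-calibrate so random play maps to 0 and a perfect run maps to 1."""
        max_score = sum(points for points, _ in items)
        guess = expected_random_score(items)
        return (observed - guess) / (max_score - guess)

    def simulate_random_play(items, n_runs=100_000):
        rng = random.Random(0)
        total = 0.0
        for _ in range(n_runs):
            total += sum(p for p, n_opt in items if rng.randrange(n_opt) == 0)
        return total / n_runs

    items = [(1, 3)] * 20 + [(2, 4)] * 5     # hypothetical decision structure
    print(expected_random_score(items))      # ~9.17 of a maximum of 30
    print(simulate_random_play(items))       # Monte Carlo agrees
    print(corrected_score(observed=18, items=items))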

Relevance:

20.00%

Publisher:

Abstract:

Key generation from the randomness of wireless channels is a promising technique to establish a secret cryptographic key securely between legitimate users. This paper proposes a new approach to extract keys efficiently from channel responses of individual orthogonal frequency-division multiplexing (OFDM) subcarriers. The efficiency is achieved by (i) fully exploiting randomness from time and frequency domains and (ii) improving the cross-correlation of the channel measurements. Through the theoretical modelling of the time and frequency autocorrelation relationship of the OFDM subcarrier's channel responses, we can obtain the optimal probing rate and use multiple uncorrelated subcarriers as random sources. We also study the effects of non-simultaneous measurements and noise on the cross-correlation of the channel measurements. We find the cross-correlation is mainly impacted by noise effects in a slow fading channel and use a low pass filter (LPF) to reduce the key disagreement rate and extend the system's working signal-to-noise ratio range. The system is evaluated in terms of randomness, key generation rate, and key disagreement rate, verifying that it is feasible to extract randomness from both time and frequency domains of the OFDM subcarrier's channel responses.
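A minimal simulation sketch of the low-pass-filtering step, under an assumed reciprocal-channel model (a common slowly fading component plus independent noise at each side) rather than the paper's OFDM measurements: a moving-average filter before median-threshold quantisation lowers the key disagreement rate.

    # Assumed channel model for illustration; not the paper's testbed.
    import numpy as np

    rng = np.random.default_rng(0)
    n, snr_db = 512, 5.0
    noise_std = 10 ** (-snr_db / 20)

    h = np.cumsum(rng.normal(0, 0.05, n))        # slowly fading common channel
    alice = h + rng.normal(0, noise_std, n)      # noisy, non-simultaneous probes
    bob = h + rng.normal(0, noise_std, n)

    def lpf(x, w=8):
        return np.convolve(x, np.ones(w) / w, mode="same")   # moving-average LPF

    def key_bits(x):
        return (x > np.median(x)).astype(int)                # 1 bit per sample

    def kdr(a, b):
        return np.mean(key_bits(a) != key_bits(b))           # key disagreement rate

    print("KDR without LPF:", kdr(alice, bob))
    print("KDR with LPF:   ", kdr(lpf(alice), lpf(bob)))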

Relevance:

20.00%

Publisher:

Abstract:

A new general fitting method based on the Self-Similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems when their recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological, and other complex data when a mathematical model is absent but a reduced description in terms of some universal set of fitting parameters is needed. The fitting function is verified on economic data (the price of a commodity versus time) and weather data (the Earth's mean surface temperature versus time), and for these nontrivial cases a very good fit of the initial data set is obtained. The general conditions for applying this fitting method to describe the response of many complex systems, and its forecasting possibilities, are discussed.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents several exact results on the second moments of sample autocorrelations, for Gaussian or non-Gaussian series. We first give general formulas for the mean, the variance, and the covariances of sample autocorrelations in the case where the variables of the series are exchangeable. From these we derive bounds for the variances and covariances of the sample autocorrelations. These bounds are used to obtain exact limits on the critical points when testing the randomness of a time series, without any assumption on the form of the underlying distribution. We give exact and explicit formulas for the variances and covariances of the autocorrelations in the case where the series is Gaussian white noise. We show that these results remain valid when the distribution of the series is spherically symmetric. We present simulation results which clearly indicate that the distribution of the sample autocorrelations is much better approximated by standardizing them with the exact mean and variance and using the asymptotic N(0,1) law than by employing the approximate second moments commonly in use. We also study the exact variances and covariances of autocorrelations based on the ranks of the observations.
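The exact closed-form moments are not quoted in the abstract, so the sketch below only illustrates the underlying point by Monte Carlo: for Gaussian white noise of modest length, the finite-sample mean and variance of a sample autocorrelation differ from the commonly used approximations 0 and 1/n.

    # Monte Carlo sketch; sample size and lag are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, n_rep = 20, 1, 100_000

    def sample_autocorr(x, k):
        xc = x - x.mean()
        return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

    r = np.array([sample_autocorr(rng.normal(size=n), k) for _ in range(n_rep)])
    print("empirical mean:", r.mean(), "(usual approximation: 0)")
    print("empirical var: ", r.var(), f"(usual approximation 1/n = {1/n})")
    # Standardising with the exact mean and variance, as the paper derives,
    # gives a better match to the asymptotic N(0,1) law than using 0 and 1/n.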