88 results for Random keys
Abstract:
The transmission path from the excitation to the vibration measured on the surface of a mechanical system introduces a distortion in both amplitude and phase. Moreover, in variable-speed conditions, the amplification/attenuation and the phase shift due to the transfer function of the mechanical system vary in time. This phenomenon reduces the effectiveness of traditional tachometer-based order tracking, compromising the results of a discrete-random separation performed by synchronous averaging. In this paper, for the first time, the extent of the distortion is identified both in the time domain and in the order spectrum of the signal, highlighting the consequences for the diagnostics of rotating machinery. A particular focus is given to gears, with indications on how the quantification of the disturbance can be used to better tune the techniques developed to compensate for the distortion. The full theoretical analysis is presented and the results are applied to an experimental case.
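For context, a minimal sketch of the synchronous-averaging step referred to above is given below. It is not the paper's code; names such as synchronous_average and tacho_times are illustrative, and a constant speed is assumed within each revolution, which is precisely the assumption the distortion discussed in the abstract undermines.

```python
# Minimal sketch (not the paper's code): tachometer-based order tracking by
# angular resampling, followed by synchronous averaging over revolutions.
import numpy as np

def synchronous_average(signal, fs, tacho_times, samples_per_rev=256):
    """Resample `signal` to the angle domain using tachometer pulse times
    (one pulse per shaft revolution), then average over revolutions."""
    t = np.arange(len(signal)) / fs
    revolutions = []
    for start, stop in zip(tacho_times[:-1], tacho_times[1:]):
        # Uniform angular grid within one revolution; constant speed is assumed
        # inside each revolution, which the distortion discussed above violates.
        t_grid = np.linspace(start, stop, samples_per_rev, endpoint=False)
        revolutions.append(np.interp(t_grid, t, signal))
    return np.mean(revolutions, axis=0)  # estimate of the deterministic (synchronous) part

# Toy usage: a 25 Hz shaft observed for 2 s at 10 kHz, with an 8th-order component.
fs = 10_000
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 25 * 8 * t) + 0.5 * np.random.randn(t.size)
pulses = np.arange(0, 2, 1 / 25)  # idealized one-per-revolution tacho pulses
avg = synchronous_average(sig, fs, pulses)
```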
Abstract:
Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE that typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice problems that are conjectured immune to quantum cryptanalysis, or “post-quantum”. Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan’s exhaustive search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other LWE-based primitives published in the literature.
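For readers unfamiliar with the underlying assumption, a toy NumPy illustration of decision-LWE samples is sketched below; the dimensions, modulus and error width are illustrative values, not the parameters derived in the paper.

```python
# Toy illustration (not the paper's parameters) of the decision-LWE samples
# underlying the scheme: (A, b = A s + e mod q) should look pseudorandom.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 64, 128, 3329, 3.0        # toy dimensions / modulus / error width

s = rng.integers(0, q, size=n)             # secret
A = rng.integers(0, q, size=(m, n))        # public uniform matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # small Gaussian error

b = (A @ s + e) % q                        # LWE samples
u = rng.integers(0, q, size=m)             # uniform samples for comparison

# Decision-LWE asks to distinguish (A, b) from (A, u); the key-privacy of the
# proxy keys in the abstract reduces to the pseudorandomness of such samples.
print(b[:8], u[:8])
```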
Abstract:
Research suggests that the length and quality of police-citizen encounters affect policing outcomes. The Koper Curve, for example, shows that the optimal length for police presence in hot spots is between 14 and 15 minutes, with diminishing returns observed thereafter. Our study, using data from the Queensland Community Engagement Trial (QCET), examines the impact of encounter length on citizen perceptions of police performance. QCET was a randomised field trial in which 60 random breath test (RBT) traffic stop operations were randomly allocated to an experimental condition involving a procedurally just encounter or a business-as-usual control condition. Our results show that the optimal length of time for procedurally just encounters during RBT traffic stops is just under 2 minutes. We show, therefore, that it is important to encourage and facilitate positive police–citizen encounters during RBT traffic stops, while ensuring that the length of these interactions does not pass the point of diminishing returns.
Abstract:
We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system—BB1—is based on the well-studied decisional Bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system—BB2—is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.
Abstract:
We present a technique for delegating a short lattice basis that has the advantage of keeping the lattice dimension unchanged upon delegation. Building on this result, we construct two new hierarchical identity-based encryption (HIBE) schemes, with and without random oracles. The resulting systems are very different from earlier lattice-based HIBEs and in some cases result in shorter ciphertexts and private keys. We prove security from classic lattice hardness assumptions.
Abstract:
To this day, standard-model realizations of (lossy) trapdoor functions from discrete-log-type assumptions require large public key sizes, e.g., about Θ(λ²) group elements for a reduction from the decisional Diffie-Hellman assumption (where λ is a security parameter). We propose two realizations of lossy trapdoor functions that achieve a public key size of only Θ(λ) group elements in bilinear groups, with a reduction from the decisional Bilinear Diffie-Hellman assumption. Our first construction achieves this result at the expense of a long common reference string of Θ(λ²) elements, albeit reusable across multiple LTDF instantiations. Our second scheme also achieves public keys of size Θ(λ), entirely in the standard model and in particular without any reference string, at the cost of a slightly more involved construction. The main technical novelty, developed for the second scheme, is a compact encoding technique for generating compressed representations of certain sequences of group elements for the public parameters.
Abstract:
We revisit the venerable question of access credentials management, which concerns the techniques that we, humans with limited memory, must employ to safeguard our various access keys and tokens in a connected world. Although many existing solutions can be employed to protect a long secret using a short password, those solutions typically require certain assumptions on the distribution of the secret and/or the password, and are helpful against only a subset of the possible attackers. After briefly reviewing a variety of approaches, we propose a user-centric comprehensive model to capture the possible threats posed by online and offline attackers, from the outside and the inside, against the security of both the plaintext and the password. We then propose a few very simple protocols, adapted from the Ford-Kaliski server-assisted password generator and the Boldyreva unique blind signature in particular, that provide the best protection against all kinds of threats, for all distributions of secrets. We also quantify the concrete security of our approach in terms of online and offline password guesses made by outsiders and insiders, in the random-oracle model. The main contribution of this paper lies not in the technical novelty of the proposed solution, but in the identification of the problem and its model. Our results have an immediate and practical application for the real world: they show how to implement single-sign-on stateless roaming authentication for the internet, in an ad hoc, user-driven fashion that requires no change to protocols or infrastructure.
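A toy sketch of the blinded-exponentiation idea behind the Ford-Kaliski server-assisted password generator mentioned above is given below; the group parameters are deliberately tiny and insecure, and all names are illustrative rather than the paper's.

```python
# Toy sketch (insecure parameters) of the blinded-exponentiation idea behind
# the Ford-Kaliski server-assisted password generator mentioned above.
import hashlib
import secrets

Q = 1019            # toy prime subgroup order
P = 2 * Q + 1       # toy safe prime (2039); real deployments use ~2048-bit groups

def hash_to_group(password: str) -> int:
    # Square to land in the prime-order subgroup of quadratic residues mod P.
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h % P, 2, P)

# Client: blind the hashed password with a random exponent r.
password = "correct horse battery staple"
x = hash_to_group(password)
r = secrets.randbelow(Q - 1) + 1
blinded = pow(x, r, P)                      # sent to the server

# Server: apply its long-term key k without learning the password.
k = secrets.randbelow(Q - 1) + 1            # server-side secret
response = pow(blinded, k, P)               # returned to the client

# Client: unblind to obtain the hardened secret x^k, usable as key material.
r_inv = pow(r, -1, Q)
hardened = pow(response, r_inv, P)
assert hardened == pow(x, k, P)
```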
Abstract:
We describe a short signature scheme that is strongly existentially unforgeable under an adaptive chosen message attack in the standard security model. Our construction works in groups equipped with an efficient bilinear map, or, more generally, an algorithm for the Decision Diffie-Hellman problem. The security of our scheme depends on a new intractability assumption we call Strong Diffie-Hellman (SDH), by analogy to the Strong RSA assumption with which it shares many properties. Signature generation in our system is fast and the resulting signatures are as short as DSA signatures for comparable security. We give a tight reduction proving that our scheme is secure in any group in which the SDH assumption holds, without relying on the random oracle model.
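As an illustration of the kind of scheme described above, the sketch below implements the basic (weakly secure) Boneh-Boyen-style signature sig = g1^(1/(x+m)) over the bn128 pairing group provided by the py_ecc library; the paper's full scheme adds a randomizer to obtain strong unforgeability, and the library choice and parameterization are assumptions of this sketch.

```python
# Minimal sketch (py_ecc's bn128 used as a stand-in pairing group) of the basic
# Boneh-Boyen-style signature: verify e(sig, pk * g2^m) == e(g1, g2).
import secrets
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order

def keygen():
    x = secrets.randbelow(curve_order - 1) + 1
    return x, multiply(G2, x)                # secret key x, public key g2^x

def sign(x, m):
    # sig = g1^(1/(x+m)), with the message reduced into the scalar field
    inv = pow((x + m) % curve_order, -1, curve_order)
    return multiply(G1, inv)

def verify(pk, m, sig):
    lhs = pairing(add(pk, multiply(G2, m % curve_order)), sig)
    rhs = pairing(G2, G1)
    return lhs == rhs

x, pk = keygen()
m = 123456789                                # message already hashed to an integer
sig = sign(x, m)
assert verify(pk, m, sig)
```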
Abstract:
For the past several decades, cryptographers have consistently provided us with stronger and more capable primitives and protocols that have found many applications in security systems in everyday life. One of the central tenets of cryptographic design is that, whereas a system’s architecture ought to be public and open to scrutiny, the keys on which it depends — long, utterly random, unique strings of bits — will be perfectly preserved by their owner, and yet nominally inaccessible to foes.
Abstract:
RC4(n, m) is a stream cipher based on RC4, designed by G. Gong et al. It can be seen as a generalization of the famous RC4 stream cipher designed by Ron Rivest. The authors of RC4(n, m) claim that the cipher resists all the attacks that are successful against the original RC4. This paper reveals cryptographic weaknesses of the RC4(n, m) stream cipher. We develop two attacks. The first is based on the non-randomness of the internal state and allows the cipher to be distinguished from a truly random one by an algorithm that has access to 2^(4n) bits of the keystream. The second attack exploits the low diffusion of bits in the KSA and PRGA algorithms and recovers all bytes of the secret key. This attack works only if the initial value of the cipher can be manipulated. Apart from the secret key, the cipher uses two other inputs, namely an initial value and an initial vector. Although these inputs are fixed in the cipher specification, some applications may allow them to be under the attacker's control. Assuming that the attacker can control the initial value, we show a distinguisher for the cipher and a secret-key recovery attack that, for an L-bit secret key, is able to recover it in about (L/n) · 2^n steps. The attack has been implemented on a standard PC and can reconstruct the secret key of RC4(8, 32) in less than a second.
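For reference, the sketch below shows the KSA and PRGA of the original RC4 (the n = 8 case); RC4(n, m) generalizes the state and word size but keeps the same structure whose low diffusion the key-recovery attack above exploits.

```python
# Reference sketch of the original RC4 (n = 8): key-scheduling algorithm (KSA)
# and pseudo-random generation algorithm (PRGA).
def rc4_ksa(key: bytes):
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S, length: int) -> bytes:
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

keystream = rc4_prga(rc4_ksa(b"Key"), 16)
print(keystream.hex())
```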
Abstract:
Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust feature-extraction algorithms and investigations into the effects of different types of occlusion on recognition performance to gain insight. Previous FER studies in this area have been limited: they have focused on recovery strategies for the loss of local texture information, and testing has been restricted to only a few types of occlusion, predominantly with a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template-match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes by clear and solid glasses. Both matched and mismatched train-test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The results on parameter sensitivity demonstrate a certain level of robustness of the approach to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with smaller reductions in accuracy from occlusion of the eyes or mouth.
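An illustrative sketch of the random part-face template idea (not the authors' code) is given below: images are Gabor-filtered, random templates are sampled from a gallery face, and the best template-match distances on a probe face are used as features. The OpenCV calls are standard, but all parameter values and names are assumptions of this sketch.

```python
# Illustrative sketch: random Gabor part-face templates converted into
# template-match distance features; occluded parts miss only some templates.
import numpy as np
import cv2

def gabor_filter(img, theta=0.0, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
    return cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)

def random_templates(gallery_img, n_templates=50, size=16, rng=None):
    rng = rng or np.random.default_rng(0)
    g = gabor_filter(gallery_img)
    h, w = g.shape
    tops = rng.integers(0, h - size, n_templates)
    lefts = rng.integers(0, w - size, n_templates)
    return [np.ascontiguousarray(g[t:t + size, l:l + size])
            for t, l in zip(tops, lefts)]

def template_distance_features(probe_img, templates):
    p = gabor_filter(probe_img)
    feats = []
    for tmpl in templates:
        dist_map = cv2.matchTemplate(p, tmpl, cv2.TM_SQDIFF_NORMED)
        feats.append(dist_map.min())      # best-match distance for this template
    return np.array(feats)

# Toy usage with random "images"; real use would load aligned face crops.
rng = np.random.default_rng(1)
gallery = rng.random((96, 96)).astype(np.float32)
probe = rng.random((96, 96)).astype(np.float32)
feats = template_distance_features(probe, random_templates(gallery, rng=rng))
```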
Abstract:
A key derivation function (KDF) is a function that transforms secret, non-uniformly random source material, together with some public strings, into one or more cryptographic keys. These cryptographic keys are used with a cryptographic algorithm for protecting electronic data both during transmission over insecure channels and in storage. In this thesis, we propose a new method for constructing a generic stream-cipher-based key derivation function. We show that our proposed key derivation function based on stream ciphers is secure if the underlying stream cipher is secure. We simulate instances of this stream-cipher-based key derivation function using three eSTREAM finalists: Trivium, Sosemanuk and Rabbit. The simulation results show that these stream-cipher-based key derivation functions offer efficiency advantages over the more commonly used key derivation functions based on block ciphers and hash functions.
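A hedged sketch of an extract-then-expand KDF whose expansion step is a stream cipher is shown below; ChaCha20 is used purely as a stand-in for the eSTREAM ciphers named above, and the construction is an illustration rather than the thesis's exact design.

```python
# Hedged sketch of an extract-then-expand KDF that uses a stream cipher for the
# expansion step (ChaCha20 as a stand-in for Trivium/Sosemanuk/Rabbit).
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def stream_cipher_kdf(source_material: bytes, salt: bytes, out_len: int) -> bytes:
    # Extract: condense the non-uniform source material into a uniform key.
    prk = hmac.new(salt, source_material, hashlib.sha256).digest()   # 32 bytes
    # Expand: key the stream cipher with the extracted key and take its
    # keystream as the derived key material.
    nonce = b"\x00" * 16              # fixed nonce is acceptable for a one-shot derivation
    cipher = Cipher(algorithms.ChaCha20(prk, nonce), mode=None)
    return cipher.encryptor().update(b"\x00" * out_len)

key = stream_cipher_kdf(b"shared Diffie-Hellman secret", b"context salt", 64)
print(key.hex())
```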
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
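A simplified stand-in for the proposed projection (not the authors' exact construction) is sketched below: each SPD matrix is represented by its log-Euclidean Gaussian-kernel responses to a set of random SPD "hyperplane" matrices, yielding a Euclidean feature vector; the kernel choice, bandwidth and dimensions are assumptions of this sketch.

```python
# Simplified stand-in for random projections of SPD matrices over an RKHS:
# each matrix becomes a vector of kernel responses to random SPD "hyperplanes".
import numpy as np
from scipy.linalg import logm

def random_spd(d, rng):
    a = rng.standard_normal((d, d))
    return a @ a.T + d * np.eye(d)            # well-conditioned random SPD matrix

def log_euclidean_kernel(X, Y, sigma=5.0):
    diff = logm(X) - logm(Y)
    return np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2 * sigma ** 2))

def project(spd_matrices, n_proj=32, sigma=5.0, seed=0):
    rng = np.random.default_rng(seed)
    d = spd_matrices[0].shape[0]
    hyperplanes = [random_spd(d, rng) for _ in range(n_proj)]
    # Each SPD matrix becomes a Euclidean vector of projection coefficients,
    # directly usable by standard Euclidean learning algorithms.
    return np.array([[log_euclidean_kernel(X, H, sigma) for H in hyperplanes]
                     for X in spd_matrices])

rng = np.random.default_rng(1)
covs = [random_spd(8, rng) for _ in range(5)]  # e.g. region covariance descriptors
features = project(covs)
print(features.shape)                          # (5, 32)
```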