976 results for Lattice-based cryptography
Abstract:
Continuum diffusion models are often used to represent the collective motion of cell populations. Most previous studies have simply used linear diffusion to represent collective cell spreading, while others found that degenerate nonlinear diffusion provides a better match to experimental cell density profiles. In the cell modeling literature there is no guidance available with regard to which approach is more appropriate for representing the spreading of cell populations. Furthermore, there is no knowledge of particular experimental measurements that can be made to distinguish between situations where these two models are appropriate. Here we provide a link between individual-based and continuum models using a multi-scale approach in which we analyze the collective motion of a population of interacting agents in a generalized lattice-based exclusion process. For round agents that occupy a single lattice site, we find that the relevant continuum description of the system is a linear diffusion equation, whereas for elongated rod-shaped agents that occupy L adjacent lattice sites we find that the relevant continuum description is connected to the porous media equation (PME). The exponent in the nonlinear diffusivity function is related to the aspect ratio of the agents. Our work provides a physical connection between modeling collective cell spreading and the use of either the linear diffusion equation or the PME to represent cell density profiles. Results suggest that when using continuum models to represent cell population spreading, we should take care to account for variations in the cell aspect ratio because different aspect ratios lead to different continuum models.
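The two continuum limits named in the abstract can be written side by side. The exponent is shown here generically as n, since its precise dependence on the aspect ratio L is the paper's own result:

```latex
% Round agents occupying one lattice site: linear diffusion
\frac{\partial C}{\partial t} = D \, \nabla^{2} C

% Rod-shaped agents occupying L adjacent sites: porous-media-type
% equation with a degenerate nonlinear diffusivity D C^{n}, where the
% exponent n > 0 is determined by the aspect ratio (n = 0 recovers
% the linear case)
\frac{\partial C}{\partial t} = D \, \nabla \cdot \left( C^{n} \, \nabla C \right)
```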
Abstract:
Secrecy of decryption keys is an important prerequisite for the security of any encryption scheme, and compromised private keys must be immediately replaced. Forward Security (FS), introduced to Public Key Encryption (PKE) by Canetti, Halevi, and Katz (Eurocrypt 2003), reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. The FS property was also shown to be achievable in (Hierarchical) Identity-Based Encryption (HIBE) by Yao, Fazio, Dodis, and Lysyanskaya (ACM CCS 2004). Yet, for emerging encryption techniques that offer flexible access control to encrypted data by means of functional relationships between ciphertexts and decryption keys, FS protection was not known to exist. In this paper we introduce FS to the powerful setting of Hierarchical Predicate Encryption (HPE), proposed by Okamoto and Takashima (Asiacrypt 2009). Anticipated applications of FS-HPE schemes can be found in searchable encryption and in fully private communication. Considering the dependencies amongst the concepts, our FS-HPE scheme implies forward-secure flavors of Predicate Encryption and (Hierarchical) Attribute-Based Encryption. Our FS-HPE scheme guarantees forward security for plaintexts and for attributes that are hidden in HPE ciphertexts. It further allows delegation of decryption abilities at any point in time, independent of FS time evolution. It realizes zero-inner-product predicates and is proven adaptively secure under standard assumptions. As the "cross-product" approach taken in FS-HIBE is not directly applicable to the HPE setting, our construction resorts to techniques that are specific to existing HPE schemes and extends them with what can be seen as reminiscent of binary tree encryption from FS-PKE.
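The forward-security principle the paper builds on can be illustrated with a far simpler primitive: a one-way key-evolution chain, in which exposing the current key reveals nothing about keys of earlier periods. This toy Python sketch is only an analogy (symmetric and non-hierarchical); the paper's FS-HPE construction instead evolves keys along a binary tree, and all names here are invented:

```python
import hashlib

class ForwardSecureKey:
    """Toy forward-secure key evolution: the key for period t+1 is
    derived from the key for period t by a one-way function, and the
    old key is overwritten, so a compromise at period t does not
    expose keys (and hence ciphertexts) of earlier periods."""

    def __init__(self, seed: bytes):
        self.period = 0
        self._key = hashlib.sha256(b"fs-key|" + seed).digest()

    def current_key(self) -> bytes:
        return self._key

    def evolve(self) -> None:
        # One-way update: the previous key becomes unrecoverable.
        self._key = hashlib.sha256(b"fs-evolve|" + self._key).digest()
        self.period += 1

k = ForwardSecureKey(b"demo seed")
k0 = k.current_key()
k.evolve()
k1 = k.current_key()
```

The same deletion-after-update idea, applied node by node along a binary tree of derived keys, is what lets FS-PKE-style schemes support many periods with logarithmic-size keys.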
Abstract:
Moving fronts of cells are essential features of embryonic development, wound repair and cancer metastasis. This paper describes a set of experiments to investigate the roles of random motility and proliferation in driving the spread of an initially confined cell population. The experiments include an analysis of cell spreading when proliferation was inhibited. Our data have been analysed using two mathematical models: a lattice-based discrete model and a related continuum partial differential equation model. We obtain independent estimates of the random motility parameter, D, and the intrinsic proliferation rate, λ, and we confirm that these estimates lead to accurate modelling predictions of the position of the leading edge of the moving front as well as the evolution of the cell density profiles. Previous work suggests that systems with a high λ/D ratio will be characterized by steep fronts, whereas systems with a low λ/D ratio will lead to shallow diffuse fronts and this is confirmed in the present study. Our results provide evidence that continuum models, based on the Fisher–Kolmogorov equation, are a reliable platform upon which we can interpret and predict such experimental observations.
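The continuum model referred to here is the Fisher–Kolmogorov equation; a standard one-dimensional form for the cell density C(x, t), with K the carrying-capacity density, is:

```latex
% Fisher--Kolmogorov model: random motility (D) plus logistic
% proliferation (rate \lambda, carrying capacity K)
\frac{\partial C}{\partial t}
  = D \, \frac{\partial^{2} C}{\partial x^{2}}
  + \lambda \, C \left( 1 - \frac{C}{K} \right)

% Its travelling-wave fronts move at the minimum wave speed
c_{\min} = 2 \sqrt{\lambda D}
```

The λ/D ratio discussed in the abstract controls the front shape: large λ/D gives steep fronts, small λ/D gives shallow, diffuse ones.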
Abstract:
Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: the ones based on the hash-and-sign construction of Gentry et al.; and Lyubashevsky's schemes, which are based on the Fiat-Shamir framework. In this paper we show for the first time how to adapt the schemes of Lyubashevsky to the ring signature setting. In particular we transform the scheme of ASIACRYPT 2009 into a ring signature scheme that provides strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion which is stronger than the classical full key exposure setting as even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible. Considering unforgeability, the best lattice-based ring signature schemes provide either unforgeability against arbitrary chosen subring attacks or insider corruption in log-sized rings. In this paper we present two variants of our scheme. In the basic one, unforgeability is ensured in those two settings. Increasing signature and key sizes by a factor k (typically 80–100), we provide a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique used is pretty general and can be adapted to other existing schemes.
Abstract:
We offer an exposition of Boneh, Boyen, and Goh’s “uber-assumption” family for analyzing the validity and strength of pairing assumptions in the generic-group model, and augment the original BBG framework with a few simple but useful extensions.
Abstract:
The cryptographic community has, of late, shown much inventiveness in the creation of powerful new IBE-like primitives that go beyond the basic IBE notion and extend it in many new directions. Virtually all of these “super-IBE” schemes rely on bilinear pairings for their implementation, which they tend to use in a surprisingly small number of different ways: three of them as of this writing. What is interesting is that, among the three main frameworks that we know of so far, one has acted as a veritable magnet for the construction of many of these “generalized IBE” primitives, whereas the other two have not been nearly as fruitful in that respect. This refers to the Commutative Blinding framework defined by the Boneh-Boyen BB1 IBE scheme from 2004. The aim of this chapter is to try to shed some light on this approach's popularity, first by comparing its key properties with those of the competing frameworks, and then by providing a number of examples that illustrate how those properties have been used.
Abstract:
The purpose of this chapter is to provide an abstraction for the class of Exponent-Inversion IBE exemplified by the BB2 and SK schemes, and, on the basis of that abstraction, to show that those schemes do support interesting and useful extensions such as HIBE and ABE. Our results narrow, if not entirely close, the “flexibility gap” between the Exponent-Inversion and Commutative-Blinding IBE concepts.
Abstract:
We show that the LASH-x hash function is vulnerable to attacks that trade time for memory, including collision attacks as fast as 2^(4x/11) and preimage attacks as fast as 2^(4x/7). Moreover, we briefly mention heuristic lattice-based collision attacks that use small memory but require very long messages; these are expected to find collisions much faster than 2^(x/2). All of these attacks exploit the designers' choice of an all-zero IV. We then consider whether LASH can be patched simply by changing the IV. In this case, we show that LASH is vulnerable to a 2^(7x/8) preimage attack. We also show that LASH is trivially not a PRF when any subset of input bytes is used as a secret key. None of our attacks depends upon the particular contents of the LASH matrix – we only assume that the distribution of elements is more or less uniform.
Abstract:
In this survey, we review a number of the many “expressive” encryption systems that have recently appeared from lattices, and explore the innovative techniques that underpin them.
Abstract:
This book constitutes the refereed proceedings of the 11th International Conference on Cryptology and Network Security, CANS 2012, held in Darmstadt, Germany, in December 2012. The 22 revised full papers presented were carefully reviewed and selected from 99 submissions. The papers are organized in topical sections on cryptanalysis; network security; cryptographic protocols; encryption; and s-box theory.
Abstract:
Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem, and we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today's commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB increase in handshake size and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
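The flavour of R-LWE key exchange can be sketched with toy parameters: each party publishes a noisy product of a public ring element with its secret, and each side's product of its own secret with the other's public value then agrees up to small noise. This is only an illustrative Python sketch (tiny dimension, ternary noise, no reconciliation step, no security); all parameter names are invented:

```python
import random

# Toy ring-LWE-style noisy key agreement in Z_q[x]/(x^N + 1).
# Real R-LWE TLS cipher suites use a much larger N and a
# reconciliation mechanism that turns the approximate shared value
# into identical key bits on both sides.
N, Q = 8, 12289
rng = random.Random(42)

def poly_mul(a, b):
    # Negacyclic convolution: reduction modulo x^N + 1 flips the sign
    # of coefficients that wrap past degree N - 1.
    res = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                res[k] = (res[k] + a[i] * b[j]) % Q
            else:
                res[k - N] = (res[k - N] - a[i] * b[j]) % Q
    return res

def small():
    # Ternary secret/error polynomial with coefficients in {-1, 0, 1}.
    return [rng.choice([-1, 0, 1]) for _ in range(N)]

def centered(x):
    # Representative of x mod Q in (-Q/2, Q/2].
    x %= Q
    return x - Q if x > Q // 2 else x

a = [rng.randrange(Q) for _ in range(N)]   # shared public ring element

s_alice, e_alice = small(), small()
s_bob, e_bob = small(), small()
b_alice = [(x + y) % Q for x, y in zip(poly_mul(a, s_alice), e_alice)]
b_bob = [(x + y) % Q for x, y in zip(poly_mul(a, s_bob), e_bob)]

k_alice = poly_mul(b_bob, s_alice)    # = a*s_alice*s_bob + e_bob*s_alice
k_bob = poly_mul(b_alice, s_bob)      # = a*s_alice*s_bob + e_alice*s_bob
gap = max(abs(centered(x - y)) for x, y in zip(k_alice, k_bob))
```

The two "keys" differ only by cross terms of the small noise polynomials, so `gap` is bounded by 2N here, far below Q; reconciliation exploits exactly this gap being small.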
Abstract:
This thesis investigates the use of fusion techniques and mathematical modelling to increase the robustness of iris recognition systems against iris image quality degradation, pupil size changes and partial occlusion. The proposed techniques improve recognition accuracy and enhance security. They can be further developed for better iris recognition in less constrained environments that do not require user cooperation. A framework to analyse the consistency of different regions of the iris is also developed. This can be applied to improve recognition systems using partial iris images, and cancelable biometric signatures or biometric based cryptography for privacy protection.
Abstract:
Enthused by the fascinating properties of graphene, we have prepared graphene analogues of BN by a chemical method with a control on the number of layers. The method involves the reaction of boric acid with urea, wherein the relative proportions of the two have been varied over a wide range. Synthesis with a high proportion of urea yields a product with a majority of 1-4 layers. The surface area of BN increases progressively with the decreasing number of layers, and the high surface area BN exhibits high CO₂ adsorption but negligible H₂ adsorption. Few-layer BN has been solubilized by interaction with Lewis bases. We have used first-principles simulations to determine structure, phonon dispersion, and elastic properties of BN with planar honeycomb lattice-based n-layer forms. We find that the mechanical stability of BN with respect to out-of-plane deformation is quite different from that of graphene, as evident in the dispersion of their flexural modes. BN is softer than graphene and exhibits signatures of long-range ionic interactions in its optical phonons. Finally, structures with different stacking sequences of BN have comparable energies, suggesting relative abundance of slip faults, stacking faults, and structural inhomogeneities in multilayer BN.
Abstract:
This paper discusses the Cambridge University HTK (CU-HTK) system for the automatic transcription of conversational telephone speech. A detailed discussion of the most important techniques in front-end processing, acoustic modeling and model training, and language and pronunciation modeling is presented. These include the use of conversation-side-based cepstral normalization, vocal tract length normalization, heteroscedastic linear discriminant analysis for feature projection, minimum phone error training and speaker adaptive training, lattice-based model adaptation, confusion-network-based decoding and confidence score estimation, pronunciation selection, language model interpolation, and class-based language models. The transcription system developed for participation in the 2002 NIST Rich Transcription evaluations of English conversational telephone speech data is presented in detail. In this evaluation the CU-HTK system gave an overall word error rate of 23.9%, which was the best performance by a statistically significant margin. Further details on the derivation of faster systems with moderate performance degradation are discussed in the context of the 2002 CU-HTK 10 × RT conversational speech transcription system. © 2005 IEEE.
Abstract:
Obtaining accurate confidence measures for automatic speech recognition (ASR) transcriptions is an important task which stands to benefit from the use of multiple information sources. This paper investigates the application of conditional random field (CRF) models as a principled technique for combining multiple features from such sources. A novel method for combining suitably defined features is presented, allowing for confidence annotation using lattice-based features of hypotheses other than the lattice 1-best. The resulting framework is applied to different stages of a state-of-the-art large vocabulary speech recognition pipeline, and consistent improvements are shown over a sophisticated baseline system. Copyright © 2011 ISCA.
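As a toy illustration of combining several lattice-derived features into one confidence score, the Python sketch below uses a plain logistic combination with invented feature names and weights; the paper's contribution is to perform this combination with a CRF, which additionally models dependencies between neighbouring hypothesis words:

```python
import math

# Combine per-word features extracted from an ASR lattice (here:
# word posterior, log duration, an acoustic score) into a single
# confidence score in (0, 1). The feature set and the fixed weights
# are invented for illustration; in practice such weights would be
# trained, and a CRF would replace this independent per-word model.
WEIGHTS = {"posterior": 4.0, "log_duration": 0.5, "acoustic": 0.3}
BIAS = -2.0

def confidence(features: dict) -> float:
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to (0, 1)

high = confidence({"posterior": 0.95, "log_duration": 1.0, "acoustic": 0.2})
low = confidence({"posterior": 0.10, "log_duration": 1.0, "acoustic": 0.2})
```

A word strongly supported by the lattice (high posterior) receives a confidence near 1, a weakly supported one a confidence near 0, which is the behaviour a downstream consumer of ASR transcriptions needs.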