15 results for EXPONENTIATION
Abstract:
Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a countermeasure against denial of service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial of service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
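For context, a minimal Python sketch of the classic construction this line of work starts from: Rivest et al.'s time-lock puzzle, where the generator exploits knowledge of phi(n) while the solver must perform t sequential squarings. All names and parameters here are illustrative; this is not the paper's new puzzle, whose generation and verification cost only a few modular multiplications.

```python
import random
from math import gcd

def generate_puzzle(p, q, t):
    # The generator knows phi(n) = (p-1)(q-1), so it can reduce the exponent
    # 2^t mod phi(n) and obtain the solution with one cheap exponentiation.
    n = p * q
    phi = (p - 1) * (q - 1)
    while True:
        x = random.randrange(2, n)
        if gcd(x, n) == 1:
            break
    solution = pow(x, pow(2, t, phi), n)
    return (n, x, t), solution

def solve_puzzle(puzzle):
    # Without phi(n), the solver must do t sequential squarings, x^(2^t) mod n,
    # which is exactly the non-parallelisable work the puzzle meters out.
    n, x, t = puzzle
    for _ in range(t):
        x = (x * x) % n
    return x

# Toy primes for illustration; real deployments use a 1024-bit RSA modulus.
puzzle, expected = generate_puzzle(p=999983, q=1000003, t=10000)
assert solve_puzzle(puzzle) == expected
```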
Abstract:
We study the scattering of hard external particles in a heat bath in a real-time formalism for finite temperature QED. We investigate the distribution of the 4-momentum difference of initial and final hard particles in a fully covariant manner when the scale of the process, Q, is much larger than the temperature, T. Our computations are valid for all T subject to this constraint. We exponentiate the leading infra-red term at one-loop order through a resummation of soft (thermal) photon emissions and absorptions. For T > 0, we find that tensor structures arise which are not present at T = 0. These carry thermal signatures. As a result, external particles can serve as thermometers introduced into the heat bath. We investigate the phase space origin of log (Q/M) and log (Q/T) terms.
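A schematic of the exponentiation step, in our own notation (the paper's exact kinematic factors are not reproduced here): the leading one-loop infra-red contribution K_1 re-exponentiates under soft-photon resummation, so that

```latex
\[
  \mathcal{P}(q) \;\sim\; \mathcal{P}_{\text{Born}}(q)\, e^{\,K_{1}(Q,\,T,\,M)},
  \qquad
  K_{1}\big|_{T=0} \;\sim\; -\frac{\alpha}{\pi}\,\log^{2}\!\frac{Q}{M}
  \ \ \text{(up to process-dependent constants)},
\]
```

while for T > 0, K_1 acquires the additional log(Q/T) terms and the new thermal tensor structures described above.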
Abstract:
The high performance and capacity of current FPGAs make them suitable as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
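As a software illustration of the decomposition just described (not the FloPoCo generator itself), the power function can be evaluated as exp(y * log x), with extra intermediate precision standing in for the extended-precision logarithm and the truncated multiplier; the guard digits are what make last-bit accuracy achievable. Function name and precision are our choices:

```python
from decimal import Decimal, getcontext

def fp_pow(x: float, y: float) -> float:
    # pow(x, y) = exp(y * log(x)); positive x only in this sketch
    # (the full x^y function also handles the IEEE special cases).
    getcontext().prec = 40                 # guard digits, so the final
    lx = Decimal(x).ln()                   # rounding to float is accurate
    return float((Decimal(y) * lx).exp())

print(fp_pow(2.0, 10.0))   # 1024.0
print(fp_pow(9.0, 0.5))    # 3.0
```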
Abstract:
To provide more efficient and flexible alternatives for the applications of secret sharing schemes, this paper describes a threshold sharing scheme based on the exponentiation of matrices over Galois fields. A significant characteristic of the proposed scheme is that each participant has to keep only one master secret share, which can be used to reconstruct different group secrets according to the number of threshold values.
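The abstract does not detail the scheme, so the sketch below shows only the primitive it builds on: square-and-multiply exponentiation of a matrix over a prime field (GF(p) here for brevity; the paper works over Galois fields generally). Names are ours:

```python
def mat_mul(A, B, p):
    # Multiply two square matrices with entries reduced mod p.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_pow(A, e, p):
    # Square-and-multiply: O(log e) matrix multiplications.
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, p)
        A = mat_mul(A, A, p)
        e >>= 1
    return R

# e.g. M^1234567 mod 7919 without 1234567 multiplications
M = [[1, 2], [3, 4]]
print(mat_pow(M, 1234567, 7919))
```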
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications such as electronic cash in order to improve the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes, exposing some weaknesses. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the existing schemes have limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography for improving the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
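As a concrete instance of the batch-verification idea the thesis builds on, here is a hedged Python sketch of the small-exponents test of Bellare, Garay and Rabin, which amortises n modular exponentiations into roughly one randomised check; all parameters are toy values of our choosing:

```python
import random

def batch_verify(g, p, q, pairs, sec_bits=20):
    # pairs = [(x_i, y_i)], claim: y_i = g^x_i mod p, where g has order q.
    # Check one randomised combination instead of n separate relations:
    #   g^(sum r_i x_i) == prod y_i^r_i  (mod p)
    rs = [random.getrandbits(sec_bits) | 1 for _ in pairs]
    lhs = pow(g, sum(r * x for r, (x, _) in zip(rs, pairs)) % q, p)
    rhs = 1
    for r, (_, y) in zip(rs, pairs):
        rhs = (rhs * pow(y, r, p)) % p
    return lhs == rhs

p, q, g = 2039, 1019, 4                      # tiny subgroup of prime order q
good = [(x, pow(g, x, p)) for x in (5, 9, 77)]
print(batch_verify(g, p, q, good))           # True
print(batch_verify(g, p, q, good + [(3, 12)]))  # almost surely False
```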
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations in which two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
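UCARP and ARPR themselves are not reproduced here; the sketch below only illustrates the trust-distribution principle they share, using standard Shamir secret sharing to split a revocation capability among n referees so that any t of them (but no fewer) can act. Names and parameters are illustrative:

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, n, t):
    # Random degree-(t-1) polynomial with f(0) = secret; share i is f(i).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) from any t shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 referees suffice
```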
Abstract:
Barreto-Lynn-Scott (BLS) curves are a stand-out candidate for implementing high-security pairings. This paper shows that particular choices of the pairing-friendly search parameter give rise to four subfamilies of BLS curves, all of which offer highly efficient and implementation-friendly pairing instantiations. Curves from these particular subfamilies are defined over prime fields that support very efficient towering options for the full extension field. The coefficients for a specific curve and its correct twist are automatically determined without any computational effort. The choice of an extremely sparse search parameter is immediately reflected by a highly efficient optimal ate Miller loop and final exponentiation. As a resource for implementors, we give a list of examples of implementation-friendly BLS curves across several high-security levels.
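For concreteness, the BLS12 subfamily is parameterised by the standard polynomials r(x) = x^4 - x^2 + 1 and p(x) = (x - 1)^2 * r(x)/3 + x; the snippet below plugs in the seed of BLS12-381, a widely deployed curve from this family (a later instance than the paper's own tables), to show how a sparse seed determines the parameters with no search effort:

```python
def bls12_params(x):
    # Standard BLS12 parameterisation in the search value x.
    r = x**4 - x**2 + 1                  # prime group order
    assert (x - 1)**2 * r % 3 == 0       # holds whenever x = 1 mod 3
    p = (x - 1)**2 * r // 3 + x          # field characteristic
    return p, r

x = -0xD201000000010000                  # very low Hamming weight seed
p, r = bls12_params(x)
print(p.bit_length(), r.bit_length())    # 381 255
```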
Abstract:
In this paper a novel scalable public-key processor architecture is presented that supports modular exponentiation and Elliptic Curve Cryptography over both prime fields GF(p) and binary extension fields GF(2^m). This is achieved by a high-performance instruction set that provides a comprehensive range of integer and polynomial-basis field arithmetic. The instruction set and associated hardware are generic in nature and do not specifically support any cryptographic algorithms or protocols. Firmware within the device is used to efficiently implement complex and data-intensive arithmetic. A firmware library has been developed in order to demonstrate support for numerous exponentiation and ECC approaches, such as different coordinate systems and integer recoding methods. The processor has been developed as a high-performance asymmetric cryptography platform in the form of a scalable Verilog RTL core. Various features of the processor may be scaled, such as the pipeline width and local memory subsystem, in order to suit area, speed and power requirements. The processor is evaluated and compares favourably with previous work in terms of performance while offering an unparalleled degree of flexibility.
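To make the polynomial-basis side concrete, here is what a GF(2^m) multiply-and-reduce primitive of the kind the instruction set exposes computes, shown in Python over GF(2^8) with the AES polynomial (the processor itself targets cryptographic field sizes; this small field is our choice for illustration):

```python
def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
    # Carry-less multiply: XOR replaces addition, no carries propagate.
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    # Reduce the degree <= 2m-2 product modulo the irreducible polynomial.
    for bit in range(2 * m - 2, m - 1, -1):
        if acc >> bit & 1:
            acc ^= poly << (bit - m)
    return acc

AES_POLY = 0b1_0001_1011     # x^8 + x^4 + x^3 + x + 1
print(hex(gf2m_mul(0x57, 0x83, AES_POLY, 8)))   # 0xc1, the FIPS-197 example
```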
Abstract:
The effect of non-uniform target illumination on the soft X-ray lasing output intensity of the J=2-1 Ne-like Ge transitions as a function of length was investigated. As the degree of non-uniformity increased with length, the Ne-like Ge 23.2 and 23.6 nm J=2-1 transitions did not show exponentiation of output intensity. Using an experimentally measured gain-intensity scaling relationship, these results were modelled and good qualitative agreement was obtained. The model indicates that for Ge targets which are non-uniformly illuminated, even with peak-to-valley ratios of up to 3, efficient operation can be achieved at between 2-3x threshold intensity. Further studies of the effect of increasing the separation between the two targets of a double-target geometry are also presented.
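For reference (our addition, standard in the gain-measurement literature): "exponentiation of output intensity" with length L is judged against the small-signal amplified-spontaneous-emission scaling, e.g. the Linford formula,

```latex
\[
  I(L) \;\propto\; \frac{\left(e^{gL}-1\right)^{3/2}}{\left(gL\,e^{gL}\right)^{1/2}},
\]
```

which grows essentially exponentially in gL for gL >> 1; a sub-exponential I(L), as observed here, therefore signals gain degraded by the non-uniform illumination.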
Abstract:
The overall aim of the work presented in this paper has been to develop Montgomery modular multiplication architectures suitable for implementation on modern reconfigurable hardware. Accordingly, novel high-radix systolic array Montgomery multiplier designs are presented, as we believe that their inherent regular structure and absence of global interconnect make them well suited to implementation on modern FPGAs. Unlike previous approaches, each processing element (PE) comprises both an adder and a multiplier. The inclusion of a multiplier in the PE means that the need to pre-compute or store any multiples of the operands is avoided. This also allows very high-radix implementations to be realised, further reducing the number of clock cycles per modular multiplication while still maintaining a competitive critical delay. For demonstrative purposes, 512-bit and 1024-bit FPGA implementations using radices of 2^8 and 2^16 are presented. The resulting throughput rates are the fastest reported to date.
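In plain Python, the operation each multiplier design implements is Montgomery multiplication with REDC reduction, which trades division by the modulus for shifts and masks; the sketch below uses toy operand sizes (the paper's designs work in radix 2^8 or 2^16 on 512- and 1024-bit operands):

```python
def montgomery_setup(n, r_bits):
    r = 1 << r_bits
    n_prime = -pow(n, -1, r) % r         # n * n' == -1 (mod r)
    return r, n_prime

def mont_mul(a, b, n, r_bits, n_prime):
    # Returns a*b*R^-1 mod n with no division by n.
    t = a * b
    m = (t * n_prime) & ((1 << r_bits) - 1)     # mod r
    u = (t + m * n) >> r_bits                   # exact division by r
    return u - n if u >= n else u

n = 2**61 - 1                             # odd modulus (toy size)
r_bits = 64
r, n_prime = montgomery_setup(n, r_bits)
a, b = 123456789, 987654321
aR, bR = a * r % n, b * r % n             # convert into Montgomery form
cR = mont_mul(aR, bR, n, r_bits, n_prime)
assert mont_mul(cR, 1, n, r_bits, n_prime) == a * b % n   # convert back
```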
Abstract:
Shadows are an important element for understanding a scene. They make it possible to resolve otherwise ambiguous situations, notably concerning motion or the relative positions of objects in the scene. There are mainly two types of shadows: hard shadows, with very sharp boundaries, which usually result from point or directional lights; and soft shadows, more diffuse, which contribute to the atmosphere and visual quality of the scene. Soft shadows result from large light sources, such as environment maps, and are difficult to sample efficiently in real time. When interactivity takes priority over quality, approximation methods can be used to improve the rendering of a scene at a lower computational cost. We interactively compute the soft shadows produced by environment light sources, for scenes composed of moving objects and a dynamic height field. Our method extends the spherical harmonic exponentiation method, previously limited to spherical blockers, to handle height fields. We also add a representation for diffuse and glossy BRDFs. We can thus combine visibilities and BRDFs in the same space, in order to efficiently compute soft shadows and reflections in complex scenes. A hybrid algorithm, which combines screen-space and object-space visibility, decouples the complexity of the shadows from the complexity of the scene.
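Schematically, in our notation, the identity that makes spherical-harmonic (SH) exponentiation attractive is that a product of per-blocker visibility functions becomes a sum of SH log-visibility vectors followed by a single exponentiation, so blockers can be accumulated cheaply and order-independently:

```latex
\[
  V(\omega) \;=\; \prod_{j} v_j(\omega)
  \quad\Longrightarrow\quad
  \mathbf{V} \;\approx\; \exp_{\mathrm{SH}}\!\Bigl(\sum_{j} \log_{\mathrm{SH}} \mathbf{v}_j\Bigr),
\]
```

where the bold symbols denote the SH coefficient vectors of the corresponding functions.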
Abstract:
In line with the process of financialization and globalization of capital, which has intensified in all latitudes of the globe, the world of work is permeated by the resulting determinations and has been reconfigured by numerous changes, expressed for example in the unbridled expansion of temporary, flexible, and outsourced forms of work and in the growth of informality, forming a new morphology of work. However, regardless of how these forms are expressed in concrete materiality, one thing unifies them: all are marked by an exponentiation of insecurity and hence by numerous negative effects on the lives of individuals who need to sell their labor power to survive. Given this premise, the present work studies, within the framework of the Brazilian particularities of the transition between Fordism and Toyotism, what we call the composite configurations of the conditions and relations of labor within the textile industry of Rio Grande do Norte. To this end, guided by historical and dialectical materialism, we made use of social research in its qualitative aspect, using semi-structured interviews in addition to a literature review, documentary research, and field notes. From our investigations, we note that over the time span stretching from the 1990s to the current year, the Natal textile industry has been undergoing a process of successive and intense changes in its modus operandi, directed specifically at the organization and management of labor and causing, concomitantly, several repercussions for the entire working class.