760 results for ATTACKS
Abstract:
Network coding is a method for achieving channel capacity in networks. The key idea is to allow network routers to linearly mix packets as they traverse the network, so that recipients receive linear combinations of packets. Network-coded systems are vulnerable to pollution attacks, where a single malicious node floods the network with bad packets and prevents the receiver from decoding correctly. Cryptographic defenses to these problems are based on homomorphic signatures and MACs. These proposals, however, cannot handle mixing of packets from multiple sources, which is needed to achieve the full benefits of network coding. In this paper we address integrity of multi-source mixing. We propose a security model for this setting and provide a generic construction.
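As a concrete illustration of the packet mixing described above, the following sketch implements random linear network coding over GF(2), where mixing is simply XOR of whole packets and decoding is Gaussian elimination on the attached coefficient vectors. Real deployments typically work over GF(2^8); all names and parameters here are illustrative only.

```python
import random

def mix(packets):
    """A router emits a random GF(2) linear combination of its inputs,
    tagged with the coefficient vector so the receiver can decode."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):                          # avoid the useless all-zero combination
        coeffs[random.randrange(len(packets))] = 1
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def decode(combos, n):
    """Gauss-Jordan elimination over GF(2); returns the n original packets,
    or None if the collected combinations do not yet have full rank."""
    rows = [(list(c), bytes(p)) for c, p in combos]
    for col in range(n):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           bytes(x ^ y for x, y in zip(rows[i][1], rows[col][1])))
    return [rows[i][1] for i in range(n)]

originals = [b"pkt-A...", b"pkt-B...", b"pkt-C..."]
combos, recovered = [], None
while recovered is None:                         # keep collecting until full rank
    combos.append(mix(originals))
    recovered = decode(combos, 3)
print(recovered == originals)                    # True
```

A pollution attack in this picture is a node injecting a (coefficients, payload) pair that is not a valid combination of the originals; without a homomorphic signature or MAC, the decoder above has no way to reject it.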
Abstract:
Multiple-time signatures are digital signature schemes in which the signer is able to sign a predetermined number of messages. They are interesting cryptographic primitives because they can be used to solve many important cryptographic problems while offering a substantial efficiency advantage over ordinary digital signature schemes such as RSA. Multiple-time signature schemes have found numerous applications in ordinary, on-line/off-line, and forward-secure signatures, as well as multicast/stream authentication. We propose a multiple-time signature scheme with very efficient signing and verifying. Our construction is based on a combination of one-way functions and cover-free families, and it is secure against adaptive chosen-message attacks.
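For intuition, here is a minimal HORS-style sketch in the same spirit: a one-way function is applied to secret values, and a message-dependent subset of those values is revealed as the signature. The parameters and helper names are illustrative, and this is not the paper's cover-free-family construction.

```python
import hashlib, os

T, K = 256, 16            # T secret values per key pair, K revealed per signature

def keygen():
    sk = [os.urandom(32) for _ in range(T)]
    pk = [hashlib.sha256(s).digest() for s in sk]     # one-way function = SHA-256
    return sk, pk

def indices(msg):
    """Derive K indices in [0, T) from the message digest."""
    d = hashlib.sha256(msg).digest()
    return [d[i] % T for i in range(K)]

def sign(sk, msg):
    return [(i, sk[i]) for i in indices(msg)]         # reveal the selected preimages

def verify(pk, msg, sig):
    want = indices(msg)
    return ([i for i, _ in sig] == want and
            all(hashlib.sha256(s).digest() == pk[i] for i, s in sig))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), verify(pk, b"tampered", sig))   # True False
```

Each signature leaks K of the T secret values, which is why such schemes can only be used for a predetermined (small) number of messages per key pair.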
Abstract:
We examine the security of the 64-bit lightweight block cipher PRESENT-80 against related-key differential attacks. With a computer search we are able to prove that for any related-key differential characteristic on full-round PRESENT-80, the probability of the characteristic, considering only the 64-bit state, is not higher than 2^-64. To overcome the computational complexity of the search, which is exponential in the state and key sizes, we use truncated differences; however, as the key schedule is not nibble-oriented, we switch to actual differences and apply early-abort techniques to prune the tree-based search. With a new method called the extended split approach we are able to make the whole search feasible, and we implement and run it in real time. Our approach targets the PRESENT-80 cipher; however, with small modifications it can be reused for other lightweight ciphers as well.
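The following toy sketch shows the general shape of such a search: a characteristic is extended round by round, its weight (the negative log2 of its probability) is accumulated, and a branch is abandoned as soon as it can no longer beat the target bound. The transition model and all numbers are invented stand-ins, not PRESENT-80's actual difference tables.

```python
def search(state, rnd, weight, best, transitions, rounds):
    """Depth-first search with early abort on the accumulated weight."""
    if weight >= best[0]:                 # early abort: cannot beat the bound any more
        return
    if rnd == rounds:
        best[0] = weight                  # found a characteristic below the bound
        return
    for nxt, w in transitions(state):
        search(nxt, rnd + 1, weight + w, best, transitions, rounds)

def toy_transitions(s):
    """Toy stand-in for a difference distribution table: stay (weight 2)
    or drift to a neighbouring truncated state (weight 3)."""
    yield s, 2.0
    yield (s + 1) % 16, 3.0

best = [20.0]                             # claim to prove: nothing below weight 20
search(0, 0, 0.0, best, toy_transitions, rounds=16)
print("no toy characteristic with probability above 2^-20" if best[0] == 20.0
      else "found one with weight %g" % best[0])
```

The real search additionally has to enumerate truncated state differences, track the (bit-level) key-schedule differences, and split the work as described above, but the pruning principle is the same.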
Abstract:
At NDSS 2012, Yan et al. analyzed the security of several challenge-response type user authentication protocols against passive observers, and proposed a generic counting-based statistical attack to recover the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects once the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they did not give details, barring a few suggestions, as to how this can be achieved. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method to theoretically estimate a lower bound on the number of sessions for which a protocol can be safely used against the attack. Our results include 1) two proposed fixes to make counting protocols practically safe against the attack at the cost of usability, 2) the observation that the attack can be used on non-counting-based protocols too, as long as challenge generation is contrived, and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to this attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
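To illustrate the counting principle, the toy example below uses a deliberately weak counting protocol (the secret is 5 of 30 objects, and the response is the parity of the number of secret objects shown). Conditioned on the responses, pass objects appear with a slightly different rate than decoys, and with enough observed sessions the top-scoring objects typically coincide with the secret. All parameters are illustrative; this is not the Foxtail protocol itself.

```python
import random

N, K, C, SESSIONS = 30, 5, 10, 50000      # objects, secret size, challenge size, observations
secret = set(random.sample(range(N), K))

shown = [0] * N                           # how often each object appeared in a challenge
hits  = [0] * N                           # ... in a challenge answered with 1

for _ in range(SESSIONS):
    challenge = random.sample(range(N), C)
    response = len(secret.intersection(challenge)) % 2    # honest user's parity answer
    for obj in challenge:
        shown[obj] += 1
        hits[obj] += response

# Pass objects bias the responses they appear with, so their conditional rate
# separates from the decoys' rate given enough observed sessions.
rate = [hits[o] / shown[o] for o in range(N)]
guess = set(sorted(range(N), key=lambda o: rate[o], reverse=True)[:K])
print("recovered:", sorted(guess), "actual:", sorted(secret))
```

The quantitative question addressed in the paper is exactly how many sessions such an attacker needs before this separation becomes reliable, and how protocol parameters can be chosen to push that number up.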
Abstract:
There has been tremendous interest in watermarking multimedia content during the past two decades, mainly for proving ownership and detecting tamper. Digital fingerprinting, which deals with identifying malicious user(s), has also received significant attention. While extensive work has been carried out on watermarking of images, other multimedia objects still have enormous research potential. Watermarking database relations is one of the several areas which demand research focus owing to the commercial implications of database theft. There has been relatively little progress in database watermarking, with most of the watermarking schemes modeled after the irreversible database watermarking scheme proposed by Agrawal and Kiernan. Reversibility is the ability to re-generate the original (unmarked) relation from the watermarked relation using a secret key. As explained in our paper, reversible watermarking schemes provide greater security against secondary watermarking attacks, where an attacker watermarks an already marked relation in an attempt to erase the original watermark. This paper proposes an improvement over the reversible and blind watermarking scheme presented in [5], identifying and eliminating a critical problem with the previous model. Experiments show that the average watermark detection rate is around 91%, even with the attacker distorting half of the attributes. The proposed scheme also provides security against secondary watermarking attacks.
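The reversibility property mentioned above can be illustrated with a deliberately tiny example: a key-derived pseudorandom bit is XORed into the least significant bit of a numeric attribute, so applying the same secret key regenerates the original relation exactly. This only demonstrates key-based restoration; it is not the scheme of [5], and real reversible schemes additionally embed a detectable watermark bit string.

```python
import hashlib

def keyed_bit(key, primary_key):
    """Pseudorandom bit derived from the secret key and the tuple's primary key."""
    return hashlib.sha256(key + str(primary_key).encode()).digest()[0] & 1

def embed(relation, key):
    """relation: dict mapping primary key -> integer attribute value."""
    return {pk: v ^ keyed_bit(key, pk) for pk, v in relation.items()}

def restore(marked, key):
    return {pk: v ^ keyed_bit(key, pk) for pk, v in marked.items()}   # XOR is its own inverse

original = {101: 5400, 102: 7321, 103: 4410}
marked = embed(original, b"secret-key")
print(restore(marked, b"secret-key") == original)   # True: the unmarked relation is regenerated
```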
Abstract:
In the developing digital economy, the notion of traditional attack on enterprises of national significance or interest has transcended into different modes of electronic attack, surpassing accepted traditional forms of physical attack upon a target. The terrorist attacks that took place in the United States on September 11, 2001 demonstrated the physical devastation that could occur if any nation were the target of a large-scale terrorist attack. Therefore, there is a need to protect critical national infrastructure and critical information infrastructure. In particular, this protection is crucial for the proper functioning of a modern society and for a government to fulfill one of its most important prerogatives – namely, the protection of its people. Computer networks have many benefits that governments, corporations, and individuals alike take advantage of in order to promote and perform their duties and roles. Today, there is almost complete dependence on private sector telecommunication infrastructures and the associated computer hardware and software systems. These infrastructures and systems even support government and defense activity. This Article discusses possible attacks on critical information infrastructures and the government reactions to these attacks.
Abstract:
This paper presents algebraic attacks on SOBER-t32 and SOBER-t16 without stuttering. For unstuttered SOBER-t32, two different attacks are implemented. In the first attack, we obtain multivariate equations of degree 10. Then, an algebraic attack is developed using a collection of output bits whose relation to the initial state of the LFSR can be described by low-degree equations. The resulting system of equations contains 2^69 equations and monomials, which can be solved using Gaussian elimination with a complexity of 2^196.5. For the second attack, we build a multivariate equation of degree 14. We focus on the property of the equation that the monomials combined with the output bit are linear. By applying the Berlekamp-Massey algorithm, we can obtain a system of linear equations from which the initial state of the LFSR can be recovered. The complexity of the attack is around O(2^100), with 2^92 keystream observations. The second algebraic attack is also applicable to SOBER-t16 without stuttering; there it takes around O(2^85) CPU clocks with 2^78 keystream observations.
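To make the linearisation step concrete, here is a generic Berlekamp-Massey implementation over GF(2) of the kind invoked above: it recovers the length and connection polynomial of the shortest LFSR generating an observed bit sequence. This is the textbook algorithm applied to a toy sequence, not the SOBER-specific equation system.

```python
def berlekamp_massey(bits):
    """Shortest LFSR over GF(2) generating 'bits': returns (L, connection polynomial)."""
    n = len(bits)
    C = [1] + [0] * n                     # current connection polynomial
    B = [1] + [0] * n                     # polynomial before the last length change
    L, m = 0, 1
    for i in range(n):
        d = bits[i]                       # discrepancy between prediction and bits[i]
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]

seq = [1, 0, 0, 1]
for _ in range(16):
    seq.append(seq[-1] ^ seq[-4])         # toy LFSR: s_i = s_{i-1} XOR s_{i-4}
print(berlekamp_massey(seq))              # (4, [1, 1, 0, 0, 1]), i.e. 1 + x + x^4
```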
Abstract:
Purpose: Ethnographic studies of cyber attacks typically aim to explain a particular profile of attackers in qualitative terms. The purpose of this paper is to formalise some of the approaches to build a Cyber Attacker Model Profile (CAMP) that can be used to characterise and predict cyber attacks. Design/methodology/approach: The paper builds a model using social and economic independent or predictive variables from several eastern European countries and benchmarks indicators of cybercrime within the Australian financial services system. Findings: The paper found a very strong link between perceived corruption and GDP in two distinct groups of countries – corruption in Russia was closely linked to the GDP of Belarus, Moldova and Russia, while corruption in Lithuania was linked to GDP in Estonia, Latvia, Lithuania and Ukraine. At the same time, corruption levels in Russia and Ukraine were also closely linked. These results support previous research that indicates a strong link between the legitimate economy and the black economy in many countries of Eastern Europe and the Baltic states. The results of the regression analysis suggest that a highly skilled, mobile workforce operating in an environment of high perceived corruption in the target countries is related to increases in cybercrime, even within Australia. It is important to note that the data used for the dependent and independent variables were gathered over a seven-year time period, which included large economic shocks such as the global financial crisis. Originality/value: This is the first paper to use a modelling approach to directly show the relationship between various social, economic and demographic factors in the Baltic states and Eastern Europe, and the level of card skimming and card-not-present fraud in Australia.
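A minimal sketch of the kind of regression analysis described above is given below: a fraud indicator regressed on corruption and GDP variables via ordinary least squares. The numbers and variable names are invented placeholders, not the paper's data or model specification.

```python
import numpy as np

corruption = np.array([2.1, 2.4, 2.8, 3.0, 3.3, 3.5, 3.9])   # perceived corruption index (made up)
gdp_growth = np.array([4.0, 5.2, 6.1, 1.0, -3.5, 2.2, 3.1])  # GDP growth in per cent (made up)
fraud      = np.array([1.8, 2.1, 2.6, 2.9, 3.4, 3.3, 3.8])   # card-fraud indicator (made up)

# Ordinary least squares with an intercept term
X = np.column_stack([np.ones_like(corruption), corruption, gdp_growth])
coeffs, *_ = np.linalg.lstsq(X, fraud, rcond=None)
print("intercept %.2f, corruption %.2f, gdp %.2f" % tuple(coeffs))
```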
Abstract:
In this paper we analyse properties of the message expansion algorithm of SHA-1 and describe a method of finding differential patterns that may be used to attack reduced versions of SHA-1. We show that the problem of finding optimal differential patterns for SHA-1 is equivalent to the problem of finding a minimal-weight codeword in a large linear code. Finally, we present a number of patterns of different lengths suitable for finding collisions and near-collisions, and discuss some bounds on their minimal weights.
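The message expansion in question is the recurrence W_t = ROTL1(W_{t-3} XOR W_{t-8} XOR W_{t-14} XOR W_{t-16}) for t = 16..79. The short check below confirms that the expansion is linear over GF(2), which is what allows XOR difference patterns through it to be treated as codewords of a linear code.

```python
import random

def rotl1(x):
    return ((x << 1) | (x >> 31)) & 0xFFFFFFFF

def expand(block16):
    """Expand 16 message words into the 80 words used by SHA-1."""
    w = list(block16)
    for t in range(16, 80):
        w.append(rotl1(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16]))
    return w

m1 = [random.getrandbits(32) for _ in range(16)]
m2 = [random.getrandbits(32) for _ in range(16)]
diff_of_expansions = [a ^ b for a, b in zip(expand(m1), expand(m2))]
expansion_of_diff = expand([a ^ b for a, b in zip(m1, m2)])
print(diff_of_expansions == expansion_of_diff)   # True: the expansion is GF(2)-linear
```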
Abstract:
Multi-party key agreement protocols indirectly assume that each principal equally contributes to the final form of the key. In this paper we consider three malleability attacks on multi-party key agreement protocols. The first attack, called strong key control, allows a dishonest principal (or a group of principals) to fix the key to a pre-set value. The second attack is weak key control, in which the key is still random, but the set from which the key is drawn is much smaller than expected. The third attack is named selective key control, in which a dishonest principal (or a group of dishonest principals) is able to remove a contribution of honest principals to the group key. The paper discusses the above three attacks on several key agreement protocols, including DH (Diffie-Hellman), BD (Burmester-Desmedt) and JV (Just-Vaudenay). We show that dishonest principals in all three protocols can weakly control the key, and that the only protocol which does not allow for strong key control is the DH protocol. The BD and JV protocols permit any pair of neighboring principals to modify the group key; this modification remains undetected by honest principals.
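To make the "equal contribution" structure concrete, here is a toy honest run of the Burmester-Desmedt (BD) protocol mentioned above. The small prime and the honest behaviour are illustrative only; the attacks discussed in the paper arise when principals deviate from these steps.

```python
import random

p = 2**127 - 1                                    # toy prime modulus (Mersenne prime)
g = 5
n = 4                                             # number of principals
r = [random.randrange(2, p - 1) for _ in range(n)]    # private exponents, one per principal

# Round 1: everyone broadcasts z_i = g^{r_i}
z = [pow(g, r_i, p) for r_i in r]

# Round 2: everyone broadcasts X_i = (z_{i+1} / z_{i-1})^{r_i}
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], p - 2, p), r[i], p) for i in range(n)]

def key(i):
    """K_i = z_{i-1}^{n r_i} * X_i^{n-1} * X_{i+1}^{n-2} * ... * X_{i+n-2}."""
    k = pow(z[(i - 1) % n], n * r[i], p)
    for j in range(1, n):
        k = k * pow(X[(i + j - 1) % n], n - j, p) % p
    return k

print(len({key(i) for i in range(n)}) == 1)       # True: all principals compute the same key
```

The shared key equals g raised to the sum of products of neighbouring contributions, which is exactly the structure a pair of colluding neighbours can manipulate in the key-control attacks analysed in the paper.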
Abstract:
In 2006, Gaurav Gupta and Josef Pieprzyk presented an attack on the branch-based software watermarking scheme proposed by Ginger Myles and Hongxia Jin in 2005. The software watermarking model is based on replacing jump instructions or unconditional branch statements (UBS) by calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and an integrity check. If the program is tampered with, the fingerprint and/or integrity check changes and the target address is not computed correctly. Gupta and Pieprzyk's attack uses debugger capabilities such as register and address lookup and breakpoints to minimize the requirement to manually inspect the software. Using these resources, the FBF and the calls to it are identified, correct displacement values are generated, and calls to the FBF are replaced by the original UBS, transferring control to the correct target instruction. In this paper, we propose a watermarking model that provides security against such debugging attacks. Two primary measures taken are shifting the stack-pointer modification operation from the FBF to the individual UBSs, and coding the stack-pointer modification in the same language as the rest of the code rather than in assembly language, to avoid conspicuous content. The complexity of the manual component of the attack increases from O(1) in the previous scheme to O(n) in our proposed scheme.
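The sketch below is a toy illustration of the fingerprint-branch-function idea: an unconditional branch is replaced by a call that derives the jump target from the fingerprint and an integrity check over the code, so tampering disturbs the computed target. The names, structure and Python-level "blocks" are purely illustrative; they are not the machine-code scheme of Myles and Jin or the proposal in this paper.

```python
import hashlib

FINGERPRINT = b"user-42"                               # buyer-specific fingerprint

def block_a(ctx): ctx.append("A"); return "branch"     # originally: unconditional jump to block_b
def block_b(ctx): ctx.append("B"); return "branch"     # originally: unconditional jump to block_c
def block_c(ctx): ctx.append("C"); return "halt"

BLOCKS = [block_a, block_b, block_c]

def integrity():
    """Integrity check over the bytecode of the protected blocks."""
    return hashlib.sha256(b"".join(f.__code__.co_code for f in BLOCKS)).digest()

def keyed(site):
    """Value derived from the fingerprint and the integrity check at a branch site."""
    return hashlib.sha256(FINGERPRINT + integrity() + bytes([site])).digest()[0]

# Embedding time: each removed jump is replaced by a stored displacement chosen
# so that the FBF recomputes the right target only while the code is untampered.
DISPLACEMENTS = {site: (target - keyed(site)) % len(BLOCKS)
                 for site, target in [(0, 1), (1, 2)]}

def fbf(site):
    """Fingerprint branch function: the jump target is computed, not hard-coded."""
    return (keyed(site) + DISPLACEMENTS[site]) % len(BLOCKS)

def run():
    ctx, i = [], 0
    while True:
        if BLOCKS[i](ctx) == "halt":
            return ctx
        i = fbf(i)

print(run())                                           # ['A', 'B', 'C'] while the code is intact
```

The debugging attack described above amounts to observing the values fbf() returns at runtime and patching the hard-coded targets back in, which is why the proposed countermeasures aim to make the FBF and its call sites harder to isolate.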
Abstract:
While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal and ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
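The kind of component functions such a generic model specifies, key generation, embedding and detection, can be sketched as below, instantiated here with a deliberately naive keyed LSB embedder on a grayscale image. The concrete functions are placeholders for illustration; the paper's model itself is scheme-agnostic.

```python
import hashlib, os

def generate_key():
    return os.urandom(16)

def _positions(key, shape, nbits):
    """Key-dependent pseudorandom pixel positions shared by embed and detect."""
    h, w = shape
    out, ctr = [], 0
    while len(out) < nbits:
        d = hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        pos = (int.from_bytes(d[:4], "big") % h, int.from_bytes(d[4:8], "big") % w)
        if pos not in out:
            out.append(pos)
        ctr += 1
    return out

def embed(image, watermark_bits, key):
    """image: list of lists of 0..255 pixel values; returns the marked image."""
    marked = [row[:] for row in image]
    sites = _positions(key, (len(image), len(image[0])), len(watermark_bits))
    for (r, c), bit in zip(sites, watermark_bits):
        marked[r][c] = (marked[r][c] & ~1) | bit       # overwrite the least significant bit
    return marked

def detect(image, key, nbits):
    return [image[r][c] & 1
            for r, c in _positions(key, (len(image), len(image[0])), nbits)]

key = generate_key()
cover = [[(r * 7 + c * 13) % 256 for c in range(8)] for r in range(8)]
bits = [1, 0, 1, 1, 0, 1, 0, 0]
print(detect(embed(cover, bits, key), key, len(bits)) == bits)   # True
```

An attack in the model's sense is then a game in which the adversary, given some subset of the image, watermark and key, tries to produce an output (for example a still-usable image from which detection fails) that counts as a win under the scheme's stated properties.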
Abstract:
Compression is desirable for network applications as it saves bandwidth; however, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to successful CRIME and BREACH attacks on web traffic protected by the Transport Layer Security (TLS) protocol. The general guidance in light of these attacks has been to disable compression, preserving confidentiality but sacrificing bandwidth. In this paper, we examine two techniques, heuristic separation of secrets and fixed-dictionary compression, for enabling compression while protecting high-value secrets, such as cookies, from attack. We model the security offered by these techniques and report on the amount of compressibility that they can achieve.
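The side channel itself is easy to demonstrate: when attacker-controlled data is compressed alongside a secret, a guess that matches the secret compresses better, and a length-preserving cipher exposes exactly that compressed length on the wire. The snippet below is purely illustrative (raw zlib on made-up strings); real CRIME/BREACH attacks refine this into byte-by-byte recovery against TLS-protected HTTP.

```python
import zlib

SECRET = b"sessionid=7f3ka9"

def leaked_length(attacker_data):
    """Compressed length of a message that mixes the secret with attacker data."""
    body = b"POST / HTTP/1.1\r\nCookie: " + SECRET + b"\r\n" + attacker_data
    return len(zlib.compress(body))

right = leaked_length(b"Cookie: sessionid=7f3ka9")   # guess echoes the real cookie
wrong = leaked_length(b"Cookie: sessionid=zqwxvn")   # same length, different bytes
print(right, "<", wrong, "->", right < wrong)        # the correct guess compresses better
```

Both countermeasures studied in the paper work by keeping the secret out of the compression context the attacker can influence, either by separating it heuristically or by compressing only against a fixed dictionary.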