172 results for Hash
Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services such as the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is the resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool for thwarting DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing them. We also revisit several key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce an improved security model for analysing them. We demonstrate the utility of our new security definitions by presenting two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. In other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework, and we also prove that the original security claim of JFK does not hold. We then apply an existing technique to reduce the server's cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and remains secure under JFK's original security assumptions. Finally, we introduce a novel cost-shifting technique that significantly reduces the server's computation cost and employ the technique in TLS, one of the most important network protocols, to analyse the security of the resulting protocol. We also observe that the cost-shifting technique can be incorporated into any Diffie-Hellman based key exchange protocol to reduce a party's Diffie-Hellman exponentiation cost by one multiplication and one addition.
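Hash-based client puzzles of the kind referred to above follow a simple pattern: the server issues a cheap random challenge, the client must do moderately expensive brute-force work, and the server verifies the answer with a single hash. The sketch below is only a generic illustration of that idea, not one of the specific puzzles constructed in the thesis; the use of SHA-256, the 20-bit default difficulty, and the 8-byte nonce encoding are illustrative assumptions.

```python
import hashlib
import os

def make_puzzle() -> bytes:
    """Server side: issue a fresh, unpredictable challenge."""
    return os.urandom(16)

def solve_puzzle(challenge: bytes, difficulty_bits: int = 20) -> int:
    """Client side: brute-force a nonce so that SHA-256(challenge || nonce)
    falls below a target, i.e. has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(challenge: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """Server side: a single hash evaluation checks the solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = make_puzzle()
assert verify_puzzle(challenge, solve_puzzle(challenge))
```

The asymmetry is what makes such puzzles useful against resource depletion: solving costs the client roughly 2^difficulty_bits hash evaluations on average, while issuing and verifying cost the server almost nothing, so the server can demand a solution before committing memory or expensive computation to a connection.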
Abstract:
A key derivation function is used to generate one or more cryptographic keys from a private (secret) input value. This paper proposes a new method for constructing a generic stream cipher based key derivation function. We show that our proposed key derivation function based on stream ciphers is secure if the underlying stream cipher is secure. We simulate instances of this stream cipher based key derivation function using three eSTREAM finalists: Trivium, Sosemanuk, and Rabbit. The simulation results show that these stream cipher based key derivation functions offer efficiency advantages over the more commonly used key derivation functions based on block ciphers and hash functions.
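The abstract does not spell out the construction, so the following is only a minimal sketch of the general idea behind a stream-cipher-based KDF: key the cipher with the secret and carve the resulting keystream into the required keys. ChaCha20 (via the `cryptography` package) stands in here for the eSTREAM ciphers named above; the fixed 32-byte secret size and the context-derived nonce are assumptions made purely for illustration.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def stream_kdf(secret: bytes, context: bytes, n_keys: int, key_len: int = 32) -> list:
    """Sketch of a stream-cipher-based KDF: encrypt zero bytes under the
    secret and slice the keystream into n_keys keys of key_len bytes."""
    if len(secret) != 32:
        raise ValueError("this sketch assumes a 32-byte, already-uniform secret")
    # ChaCha20 here takes a 16-byte nonce block; deriving it by padding the
    # context string is an illustrative choice, not part of any standard.
    nonce = context.ljust(16, b"\x00")[:16]
    cipher = Cipher(algorithms.ChaCha20(secret, nonce), mode=None)
    keystream = cipher.encryptor().update(b"\x00" * (n_keys * key_len))
    return [keystream[i * key_len:(i + 1) * key_len] for i in range(n_keys)]

# Example: derive separate encryption and MAC keys from one shared secret.
enc_key, mac_key = stream_kdf(b"\x11" * 32, b"session-42", n_keys=2)
```

A full KDF usually also includes an extraction step for non-uniform secrets; this sketch covers only the expansion step, which is where a stream cipher's raw keystream speed can pay off.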
Abstract:
Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: those based on the hash-and-sign construction of Gentry et al., and Lyubashevsky's schemes, which are based on the Fiat-Shamir framework. In this paper we show for the first time how to adapt the schemes of Lyubashevsky to the ring signature setting. In particular, we transform the scheme of ASIACRYPT 2009 into a ring signature scheme that provides strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion stronger than the classical full key exposure setting: even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible. Considering unforgeability, the best lattice-based ring signature schemes provide either unforgeability against arbitrary chosen-subring attacks or against insider corruption in log-sized rings. In this paper we present two variants of our scheme. In the basic one, unforgeability is ensured in those two settings. By increasing signature and key sizes by a factor k (typically 80–100), we obtain a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique used is quite general and can be adapted to other existing schemes.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing over time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities for, and methods of, extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and about how such approaches could be customized depending on the project stakeholders.
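As a purely illustrative sketch of the content-analysis and user-profiling filtering described in the first paper above, the following assigns a crude relevance score to incoming tweets; the keyword weights, account names, hashtag, and threshold are hypothetical placeholders, not values taken from the panel papers.

```python
# Hypothetical keyword/hashtag weights for an ongoing emergency event.
KEYWORD_WEIGHTS = {"flood": 3, "evacuate": 5, "road closed": 4, "#examplecrisis": 4}
AUTHORITATIVE_ACCOUNTS = {"example_police_media", "example_weather_bureau"}

def score_tweet(text: str, author: str) -> int:
    """Content analysis: keyword hits raise the score.
    User profiling: known authoritative accounts get a fixed boost."""
    lowered = text.lower()
    score = sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in lowered)
    if author in AUTHORITATIVE_ACCOUNTS:
        score += 5
    return score

def filter_for_responders(tweets: list, threshold: int = 6) -> list:
    """Keep only tweets scoring at or above the threshold, so the volume
    passed on roughly matches responders' manual-filtering capacity."""
    return [t for t in tweets if score_tweet(t["text"], t["author"]) >= threshold]

urgent = filter_for_responders([
    {"text": "Road closed at the bridge, evacuate now #examplecrisis",
     "author": "example_police_media"},
    {"text": "Lovely morning for a coffee", "author": "casual_user"},
])
```

In practice the keyword list and weights would themselves be refined iteratively as the event unfolds, which is exactly the feedback loop between filtering and future data collection that the panel emphasises.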
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered a primitive for multimedia types such as images and videos, since these are organized forms of numeric information. Thus, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and discourage information theft on social networking sites and the Internet in general. Unfortunately, very limited research has been done in the field of numeric set watermarking, due to underlying limitations in terms of the number of items in the set and the LSBs available in each item for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items' most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its least significant bits, and the replaced bit is inserted into the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as against secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack and propose potential safeguards that can provide resilience against this attack.
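The bucketing step at the heart of the attack can be illustrated with a short sketch. The item representation, the MSB width, and the choice to leave the per-bucket statistical test abstract are illustrative assumptions rather than details taken from the paper; the point is only that items sharing the same MSBs receive the same hash-based embedding decision, which is what the attack groups together and inspects.

```python
from collections import defaultdict

def bucket_by_msbs(items: list, msb_bits: int = 8) -> dict:
    """Group numeric items whose integer parts share the same top msb_bits bits.
    Each bucket can then be inspected for the LSB/fractional patterns that the
    watermark embedding leaves behind."""
    buckets = defaultdict(list)
    for value in items:
        integer_part = int(abs(value))
        shift = max(integer_part.bit_length() - msb_bits, 0)
        # Keying on (bit length, top bits) keeps values of different magnitude
        # apart; this is an illustrative convention only.
        buckets[(integer_part.bit_length(), integer_part >> shift)].append(value)
    return dict(buckets)

groups = bucket_by_msbs([1034.25, 1034.75, 1038.5, 2050.0, 2051.125])
```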
Abstract:
We show that the LASH-x hash function is vulnerable to attacks that trade time for memory, including collision attacks as fast as 2^(4x/11) and preimage attacks as fast as 2^(4x/7). Moreover, we briefly mention heuristic lattice-based collision attacks that use small memory but require very long messages, and that are expected to find collisions much faster than 2^(x/2). All of these attacks exploit the designers' choice of an all-zero IV. We then consider whether LASH can be patched simply by changing the IV. In this case, we show that LASH is vulnerable to a 2^(7x/8) preimage attack. We also show that LASH is trivially not a PRF when any subset of input bytes is used as a secret key. None of our attacks depend upon the particular contents of the LASH matrix; we only assume that the distribution of elements is more or less uniform.
Abstract:
In this paper we investigate the differential properties of block ciphers in hash function modes of operation. First, we show the impact of differential trails for block ciphers on collision attacks against various hash function constructions based on block ciphers. Further, we prove a lower bound on the complexity of finding a pair that follows some truncated differential in the case of a random permutation. Then we present open-key differential distinguishers for some well-known round-reduced block ciphers.
Abstract:
In this article, we study the security of the IDEA block cipher when it is used in various single-length or double-length hashing modes. Even though this cipher is still considered secure, we show that one should avoid its use as an internal primitive for block cipher based hashing. In particular, we are able to generate free-start collisions instantaneously for most modes, and even semi-free-start collisions, pseudo-preimages, or hash collisions in practical complexity. This work gives a practical example of the gap that exists between secret-key security and known- or chosen-key security for block ciphers. Moreover, we also settle the 20-year-old open question concerning the security of the Abreast-DM and Tandem-DM double-length compression functions, originally invented to be instantiated with IDEA. Our attacks have been verified experimentally and work even for strengthened versions of IDEA with any number of rounds.
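For readers unfamiliar with block-cipher-based hashing modes, two classic single-length constructions are recalled below (with E_K denoting encryption under key K, M_i a message block, and H_{i-1} the chaining value; key/block-size mismatches are glossed over). These are standard textbook modes, given here only to make the terminology concrete, not a restatement of the exact set of modes analysed in the article.

```latex
% Two classic single-length block-cipher hashing modes.
\begin{align*}
  \text{Davies--Meyer:}        \quad & H_i = E_{M_i}(H_{i-1}) \oplus H_{i-1} \\
  \text{Matyas--Meyer--Oseas:} \quad & H_i = E_{H_{i-1}}(M_i) \oplus M_i
\end{align*}
```

In this terminology, a free-start collision is a pair of distinct inputs (H_{i-1}, M_i) and (H'_{i-1}, M'_i) producing the same output H_i, i.e. the attacker may choose the chaining value as well as the message block; in a semi-free-start collision the chaining value is attacker-chosen but must be the same for both inputs.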
Abstract:
A key derivation function (KDF) is a function that transforms secret, non-uniformly random source material, together with some public strings, into one or more cryptographic keys. These cryptographic keys are used with a cryptographic algorithm for protecting electronic data both during transmission over insecure channels and in storage. In this thesis, we propose a new method for constructing a generic stream cipher based key derivation function. We show that our proposed key derivation function based on stream ciphers is secure if the underlying stream cipher is secure. We simulate instances of this stream cipher based key derivation function using three eSTREAM finalists: Trivium, Sosemanuk, and Rabbit. The simulation results show that these stream cipher based key derivation functions offer efficiency advantages over the more commonly used key derivation functions based on block ciphers and hash functions.
Abstract:
Security protocols are designed to provide security properties (goals). They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals or not. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation. There are also security goals that are defined by the user in systems with security requirements. Analysis of such properties is possible with general-purpose analysis tools such as Coloured Petri Nets (CPN). This research analyses two security properties defined in a protocol based on the Trusted Platform Module (TPM). The analysed protocol was proposed by Delaune to use TPM capabilities and secrets in order to open only one secret from two submitted secrets to a recipient.
Abstract:
Social media is playing an ever-increasing role both in viewers' engagement with television and in the television industry's evaluation of programming, in Australia (which is the focus of our study) and beyond. Twitter hashtags and viewer comments are increasingly incorporated into broadcasts, while Facebook fan pages provide a means of marketing upcoming shows and television personalities directly into the social media feeds of millions of users. Additionally, bespoke applications such as FanGo and ZeeBox, which interact with the mainstream social networks, are increasingly being utilized by broadcasters for interactive elements of programming (c.f. Harrington, Highfield and Bruns, 2012). However, both academic and industry study of these platforms has focused on measuring content during the specific broadcast of a show, or a period surrounding it (e.g. from 3 hours before until 3 am the next day, in the case of 2013 Nielsen SocialGuide reports). In this paper, we argue that this focus ignores a period that is significant for both television producers and advertisers: the lead-up to the program. If, as we argue elsewhere (Bruns, Woodford, Highfield & Prowd, forthcoming), users are persuaded to engage with content both by advertising of the Twitter hashtag or Facebook page and by observing their network connections engaging with such content, the period before and between shows may have a significant impact on a viewer's likelihood of watching a show. The significance of this period for broadcasters is clearly highlighted by the effort they devote to advertising forthcoming shows through several channels, including television and social media, but also more widely. Biltereyst (2004, p.123) has argued that reality television generates controversy to receive media attention, and our previous small-scale work on reality shows during 2013 and 2014 supports the theory that promoting controversial behavior is likely to lead to increased viewing (Woodford & Prowd, 2014a). It remains unclear, however, to what extent this applies to other television genres. Similarly, while networks' use of social media has been increasing, best practices remain unclear. Thus, by applying our telemetrics, that is, social media metrics for television based on sabermetric approaches (Woodford, Prowd & Bruns, forthcoming; c.f. Woodford & Prowd, 2014b), to the period between shows, we are able to better understand the period when key viewing decisions may be made, to establish the significance of observing discussions within one's network during the period between shows, and to identify best-practice examples of promoting a show using social media.
Abstract:
In this paper we tackle the problem of finding an efficient signature verification scheme when the number of signatures is significantly large and the verifier is relatively weak. In particular, we tackle the problem of message authentication in many-to-one communication networks known as concast communication. The paper presents three signature screening algorithms for a variant of ElGamal-type digital signatures. The cost for these schemes is n applications of hash functions, 2n modular multiplications, and n modular additions, plus the verification of one digital signature, where n is the number of signatures. The paper also presents a solution to the open problem of finding a fast screening algorithm for non-RSA digital signature schemes.
Abstract:
In a traditional anti-jamming system, a transmitter who wants to send a signal to a single receiver spreads the signal power over a wide frequency spectrum with the aim of stopping a jammer from blocking the transmission. In this paper, we consider the case where there are multiple receivers and the transmitter wants to broadcast a message to all receivers such that colluding groups of receivers cannot jam the reception of any other receiver. We propose efficient coding methods that achieve this goal and link this problem to well-known problems in combinatorics. We also link a generalisation of this problem to the Key Distribution Pattern problem studied in combinatorial cryptography.
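One well-known combinatorial connection of this kind (stated here only as a generic illustration, not as the paper's specific coding method): if each receiver listens on its own subset of frequencies, then a coalition of up to t colluders, jamming only the frequencies they were assigned, cannot block another receiver as long as the subsets form a t-cover-free family, i.e. every receiver keeps at least one frequency outside the union of any t other receivers' subsets. The naive check below, with a hypothetical frequency assignment, makes that condition concrete.

```python
from itertools import combinations

def is_cover_free(assignment: dict, t: int) -> bool:
    """Return True if, for every receiver r and every coalition of t other
    receivers, r keeps at least one frequency outside the coalition's union.
    Exhaustive check, for illustration only."""
    receivers = list(assignment)
    for r in receivers:
        others = [x for x in receivers if x != r]
        for coalition in combinations(others, min(t, len(others))):
            covered = set().union(*(assignment[c] for c in coalition))
            if assignment[r] <= covered:
                return False
    return True

# Hypothetical assignment of 6 frequencies to 4 receivers, resisting t = 1 colluder.
freqs = {"A": {0, 1, 2}, "B": {0, 3, 4}, "C": {1, 3, 5}, "D": {2, 4, 5}}
assert is_cover_free(freqs, t=1)
```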
Abstract:
In this paper we analyse the role of some of the building blocks of SHA-256. We show that the disturbance-correction strategy is applicable to the SHA-256 architecture, and we prove that the functions Σ and σ are vital for the security of SHA-256 by showing that for a variant without them it is possible to find collisions with complexity of 2^64 hash operations. As a step towards an analysis of the full function, we present the results of our experiments on Hamming weights of expanded messages for different variants of the message expansion, and show that low-weight expanded messages exist for XOR-linearised variants.
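For reference, the Σ and σ functions in question are the standard rotate-and-shift mixing functions of SHA-256 as specified in FIPS 180-4: Σ0 and Σ1 act on the working state in every round, while σ0 and σ1 drive the message expansion. The sketch below simply restates them, with rotr a right rotation on 32-bit words.

```python
def rotr(x: int, n: int) -> int:
    """Right-rotate a 32-bit word by n positions."""
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

# Round functions applied to the working variables (FIPS 180-4).
def big_sigma0(x: int) -> int:
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)

def big_sigma1(x: int) -> int:
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)

# Message-expansion functions used to stretch the 16 message words to 64.
def small_sigma0(x: int) -> int:
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def small_sigma1(x: int) -> int:
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)
```

The 2^64 collision result quoted above applies to a SHA-256 variant with these functions removed, which is what makes them "vital" in the sense of the paper.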
Abstract:
In this paper we present a cryptanalysis of a new 256-bit hash function, FORK-256, proposed by Hong et al. at FSE 2006. This cryptanalysis is based on some unexpected differentials existing for the step transformation. We show their possible uses in different attack scenarios by giving a 1-bit (resp. 2-bit) near collision attack against the full compression function of FORK-256 running with complexity of 2^125 (resp. 2^120) and with negligible memory, and by exhibiting a 22-bit near pseudo-collision. We also show that we can find collisions for the full compression function with a small amount of memory with complexity not exceeding 2^126.6 hash evaluations. We further show how to reduce this complexity to 2^109.6 hash computations by using 2^73 memory words. Finally, we show that this attack can be extended with no additional cost to find collisions for the full hash function, i.e. with the predefined IV.