901 results for Speculative attacks
Abstract:
Reputation and proof-of-work systems have been outlined as methods bot masters will soon use to defend their peer-to-peer botnets. These techniques are designed to prevent sybil attacks, such as those that led to the downfall of the Storm botnet. To evaluate the effectiveness of these techniques, a botnet that employed them was simulated, and the amount of resources required to stage a successful sybil attack against it was measured. While the proof-of-work system was found to increase the resources required for a successful sybil attack, the reputation system was found to lower the amount of resources required to disable the botnet.
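The proof-of-work mechanism mentioned above can be sketched as a hashcash-style puzzle: joining peers must find a nonce whose hash meets a difficulty target, so a sybil attacker pays roughly linearly per fake identity. The challenge string, difficulty, and SHA-256 choice below are illustrative assumptions, not details from the simulated botnet.

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(challenge || nonce) has the required
    number of leading zero bits (a hashcash-style proof of work)."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Solving takes about 2**12 hashes here; each extra sybil identity an
# attacker injects costs the same expected work again.
nonce = solve_pow(b"peer-join-request", 12)
assert verify_pow(b"peer-join-request", nonce, 12)
```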
Abstract:
RFID has been widely used in today's commercial and supply chain industry, due to the significant advantages it offers and the relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy. This calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, this new protocol is shown to enhance the security of RFID significantly, and protects the passive tags from almost all major attacks, including tag cloning, replay, full-disclosure, tracking, and eavesdropping. Furthermore, no significant changes to the tags are required to implement this protocol, and the low complexity level of the randomized hash-lock algorithm is retained.
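The randomized hash-lock the protocol builds on can be sketched as follows: the tag answers each query with a fresh nonce and a hash over its ID and that nonce, so responses are unlinkable, while the back-end identifies the tag by searching its ID database. This is a minimal sketch of the original 2003 idea as commonly described; the paper's two-step extensions are not shown, and SHA-256 stands in for whatever hash a constrained tag would actually use.

```python
import hashlib
import secrets

def tag_respond(tag_id: bytes) -> tuple[bytes, bytes]:
    """Tag side: pick a fresh random nonce and return (r, h(ID || r)),
    so every response looks different (thwarting tracking)."""
    r = secrets.token_bytes(16)
    return r, hashlib.sha256(tag_id + r).digest()

def reader_identify(response, known_ids):
    """Back-end side: search the ID database for the entry that matches
    the received hash. Cost is linear in the number of registered tags."""
    r, digest = response
    for tag_id in known_ids:
        if hashlib.sha256(tag_id + r).digest() == digest:
            return tag_id
    return None

ids = [b"tag-A", b"tag-B", b"tag-C"]
resp = tag_respond(b"tag-B")
assert reader_identify(resp, ids) == b"tag-B"
```

A replayed response is still valid under this basic scheme, which is one reason the paper layers additional measures on top of it.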
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis. The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent. The second contribution is the discovery of a very small system of equations that describe the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations. The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose.
It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. The measure represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation. The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese WLAN Authentication and Privacy Infrastructure (WAPI) and hence, the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28. The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
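The fixed-point idea behind the diffusion measure can be illustrated with a Monte Carlo estimate of how often an input word passes through a linear layer unchanged. The `rotate_words` layer below is an invented toy (an XOR-of-neighbours map), not one of the cipher transformations studied, and the sampling estimate is only the spirit of the thesis's measure, not its exact definition.

```python
import random

def rotate_words(state):
    """A toy word-based linear layer over GF(2): each output word is the
    XOR of an input word and its neighbour (purely illustrative)."""
    n = len(state)
    return [state[i] ^ state[(i + 1) % n] for i in range(n)]

def unchanged_fraction(transform, n_words=4, trials=10_000):
    """Estimate the average fraction of input words that the
    transformation leaves unchanged, by sampling random inputs."""
    total = 0.0
    rng = random.Random(0)
    for _ in range(trials):
        state = [rng.getrandbits(16) for _ in range(n_words)]
        out = transform(state)
        total += sum(a == b for a, b in zip(state, out)) / n_words
    return total / trials

# For a well-diffusing layer on 16-bit words this fraction should be
# tiny; a notably larger value would flag weak diffusion.
print(f"{unchanged_fraction(rotate_words):.4f}")
```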
Abstract:
The loosely-coupled and dynamic nature of web services architectures has many benefits, but also leads to an increased vulnerability to denial of service attacks. While many papers have surveyed and described these vulnerabilities, they are often theoretical and lack experimental data to validate them, and assume an obsolete state of web services technologies. This paper describes experiments involving several denial of service vulnerabilities in well-known web services platforms, including Java Metro, Apache Axis, and Microsoft .NET. The results confirm the presence of some of the most well-known vulnerabilities in web services technologies and refute others. Specifically, major web services platforms appear to cope well with attacks that target memory exhaustion. However, attacks targeting CPU-time exhaustion are still effective, regardless of the victim’s platform.
Abstract:
This thesis is a work of creative practice-led research comprising two components. The first component is a speculative thriller novel, entitled Diamond Eyes. (Contracted for publication in 2009 by Harper Collins: Voyager as the first in a trilogy, under the name AA Bell.) The second component is an exegesis exploring the notion of re-visioning a novel. Re-visioning, not to be confused with revision, refers to advanced editing strategies required when the original vision of a novel changes during development.
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications such as electronic cash for improving the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes as some weaknesses are exposed. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have only limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography for improving the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and batch verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures.
The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
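The amortisation idea behind batch verification can be illustrated with RSA's multiplicative homomorphism: since s1^e * s2^e = (s1 * s2)^e mod N, one exponentiation can screen a whole batch. The parameters below are toy values for illustration only, and the plain screening test shown, with no added randomisation, is exactly the kind of shortcut that the attacks exposed in such theses target.

```python
import math

# Toy RSA parameters for illustration only (real moduli are >= 2048 bits).
p, q = 61, 53
N = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(m: int) -> int:
    """Textbook RSA signature: s = m^d mod N (no padding; toy only)."""
    return pow(m, d, N)

msgs = [42, 123, 999]
sigs = [sign(m) for m in msgs]

# Individual verification: one modular exponentiation per signature.
assert all(pow(s, e, N) == m for s, m in zip(sigs, msgs))

# Batch "screening": a single exponentiation amortised over the batch.
# Without extra randomisation this check is strictly weaker than the
# individual checks -- e.g. swapping two signatures still passes.
assert pow(math.prod(sigs) % N, e, N) == math.prod(msgs) % N
```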
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. 
We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
Abstract:
With the increasing threat of cyber and other attacks on critical infrastructure, governments throughout the world have been organizing industry to share information on possible threats. In Australia, the Office of the Attorney General has formed Trusted Information Sharing Networks (TISN) for the various critical industries such as banking and electricity. Currently, the majority of information for a TISN is shared at physical meetings, which have clear limitations when it comes to meeting cyber threats. Many of these limitations can be overcome by the creation of a virtual information sharing network (VISN). However, there are many challenges to overcome in the design of a VISN, from both a policy and a technical viewpoint. We shall discuss some of these challenges in this talk.
Abstract:
The Denial of Service Testing Framework (dosTF) being developed as part of the joint India-Australia research project for ‘Protecting Critical Infrastructure from Denial of Service Attacks’ allows for the construction, monitoring and management of emulated Distributed Denial of Service attacks using modest hardware resources. The purpose of the testbed is to study the effectiveness of different DDoS mitigation strategies and to allow for the testing of defense appliances. Experiments are saved and edited in XML as abstract descriptions of an attack/defense strategy that is only mapped to real resources at run-time. It also provides a web-application portal interface that can start, stop and monitor an attack remotely. Rather than monitoring a service under attack indirectly, by observing traffic and general system parameters, monitoring of the target application is performed directly in real time via a customised SNMP agent.
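An abstract XML experiment description of the kind the testbed saves and edits might look as follows. The element and attribute names here are hypothetical, since the abstract does not give dosTF's actual schema; the point is only that the attack/defense strategy is described abstractly and bound to real resources at run-time.

```python
import xml.etree.ElementTree as ET

# Hypothetical experiment description in the spirit of dosTF's abstract
# XML format (the real schema is not given in the abstract).
experiment_xml = """
<experiment name="syn-flood-vs-rate-limiter">
  <attack type="syn-flood" rate="5000pps" duration="60s"/>
  <target service="http" monitor="snmp"/>
  <defense appliance="rate-limiter"/>
</experiment>
"""

root = ET.fromstring(experiment_xml)
attack = root.find("attack")

# A run-time mapper would resolve these abstract roles to concrete
# testbed hosts before launching the emulated attack.
print(root.get("name"), attack.get("type"), attack.get("rate"))
```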
Abstract:
Tracking/remote monitoring systems using GNSS are a proven method to enhance the safety and security of personnel and vehicles carrying precious or hazardous cargo. While GNSS tracking appears to mitigate some of these threats, if not adequately secured, it can be a double-edged sword allowing adversaries to obtain sensitive shipment and vehicle position data to better coordinate their attacks, and to provide a false sense of security to monitoring centers. Tracking systems must be designed with the ability to perform route-compliance checks and to thwart attacks ranging from low-level attacks such as the cutting of antenna cables to medium- and high-level attacks involving radio jamming and signal / data-level simulation, especially where the goods transported have a potentially high value to terrorists. This paper discusses the use of GNSS in critical tracking applications, addressing the mitigation of GNSS security issues, augmentation systems and communication systems in order to provide highly robust and survivable tracking systems.
Abstract:
This session is titled TRANSFORM! Opportunities and Challenges of Digital Content for Creative Economy. Some of the key concepts for this session include: 1. City / Economy 2. Creativity 3. Digital content 4. Transformation All of us would agree that these terms describe pertinent characteristics of the contemporary world, the epithet of which is the ‘network era.’ I was thinking about what I would like to discuss here and what you, leading experts in divergent fields, would be interested to hear about. As the keynote for this session and as one of the first speakers for the entire conference, I see my role as an initiator for imagination, the wilder the better, posing questions rather than answers. Also, given the session title Transform!, I wish to change this slightly to Transforming People, Place, and Technology: Towards Re-creative City, in an attempt to take us away a little from the usual image depicted by the given topic. Instead, I intend to sketch a more holistic picture by reflecting on and extrapolating the four key concepts from the urban informatics point of view. To do so, I use ‘city’ as the primary guiding concept for my talk rather than the perhaps more expected ‘digital media’ or ‘creative economy.’ You may wonder what I mean by re-creative city. I will explain this in time by looking at the key concepts from these four respective angles: 1. Living city 2. Creative city 3. Re-creative city 4. Opportunities and Challenges, to arrive at a speculative yet probable image of the city that we may aspire to transform our current cities into. First let us start by considering the ‘living city.’
Abstract:
This silent swarm of stylized crickets is downloading data from Internet and catalogue searches being undertaken by the public at the State Library of Queensland. These searches are being displayed on the screens on their backs. Each cricket downloads the searches and communicates this information with other crickets. Commonly found searches spread like a meme through the swarm. In this work memes replace the crickets’ song, washing like a wave through the swarm and changing on the whim of Internet users. When one cricket begins calling others, the swarm may respond to produce emergent patterns of text. When traffic is slow or of no interest to the crickets, they display onomatopoeia. The work is inspired by R. Murray Schafer’s research into acoustic ecologies. In the 1960s, Schafer proposed that many species develop calls that fit niches within their acoustic environment. An increasing background of white noise dominates the acoustic environment of urban human habitats, leaving few acoustic niches for other species to communicate. The popularity of headphones and portable music may be seen as an evolution of our acoustic ecology driven by our desire to hear expressive, meaningful sound above the din of our cities. Similarly, the crickets in this work are hypothetical creatures that have evolved to survive in a noisy human environment. This speculative species replaces auditory calls with onomatopoeia and information memes, communicating with the swarm via radio frequency chirps instead of sound. Whilst these crickets cannot make sound, each individual has been programmed to respond to sound generated by the audience by making onomatopoeic calls in text. Try talking to a cricket, blowing on its tail, or making other sounds to trigger a call.
Abstract:
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles for which this guarantee fails, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
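A hash-preimage puzzle is the standard concrete example of a client puzzle. The sketch below shows the per-puzzle difficulty parameter k and why fresh, independent random challenges make solving n puzzles cost roughly n times the work of solving one; this is a generic illustration of the setting, not one of the constructions or definitions from the paper.

```python
import hashlib
import itertools
import os

def solve(challenge: bytes, k: int) -> int:
    """Client side: find x such that SHA-256(challenge || x) starts with
    k zero bits. Expected cost is about 2**k hash evaluations."""
    for x in itertools.count():
        h = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - k) == 0:
            return x

def check(challenge: bytes, x: int, k: int) -> bool:
    """Server side: verification is a single cheap hash."""
    h = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - k) == 0

# Each challenge is fresh and independent, so work on one puzzle gives
# no head start on another -- the scaling property ("n puzzles cost
# about n times one") that the stronger definitions aim to capture.
k = 10
challenges = [os.urandom(16) for _ in range(3)]
assert all(check(c, solve(c, k), k) for c in challenges)
```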
Abstract:
In May 2008, xenophobic violence erupted in South Africa. The targets were individuals who had migrated from the north in search of asylum. Emerging first in township communities around Johannesburg, the aggression spread to other provinces. Sixty-two people died, and 100,000 (20,000 in the Western Cape alone) were displaced. As the attacks escalated across the country, thousands of migrants searched for refuge in police stations and churches. Chilling stories spread about mobs armed with axes, metal bars, and clubs. The mobs stormed from shack to shack, assaulted migrants, locked them in their homes, and set the homes on fire. The public reaction was one of shock and horror. The Los Angeles Times declared, “Migrants Burned Alive in S. Africa.” The South African president at the time, Thabo Mbeki, called for an end to “shameful and criminal attacks.” Commentators were stunned by the signs of hatred of foreigners (xenophobia) that emerged in the young South African democracy. The tragedy of the violence in South Africa was magnified by the fact that many of the victims had fled from violence and persecution in their countries of origin. Amid genocidal violations of human rights that had recently occurred in some countries in sub-Saharan Africa, the new South Africa stood as a beacon of democracy and respect for human dignity. With this openness in mind, many immigrants to South Africa sought safety and refuge from the conflicts in their homelands. More than 43,500 refugees and 227,000 asylum seekers now live in South Africa. The majority of people accorded refugee status came from Burundi, Democratic Republic of Congo, and Somalia. South Africa also hosts thousands of other migrants who remain undocumented.