Abstract:
A Flash Event (FE) represents a period of time when a web server experiences a dramatic increase in incoming traffic, either following a newsworthy event that has prompted users to locate and access it, or as a result of redirection from other popular web or social media sites. This usually leads to network congestion and Quality-of-Service (QoS) degradation. These events can be mistaken for Distributed Denial-of-Service (DDoS) attacks aimed at disrupting the server. Accurate detection of FEs and their distinction from DDoS attacks is important, since different actions need to be undertaken by network administrators in these two cases. However, the lack of public-domain FE datasets hinders research in this area. In this paper we present a detailed study of flash events and classify them into three broad categories. In addition, the paper describes FEs in terms of three key components: the volume of incoming traffic, the related source IP addresses, and the resources being accessed. We present an FE model with minimal parameters and use publicly available datasets to analyse and validate it. The model can be used to generate different types of FE traffic, closely approximating real-world scenarios, in order to facilitate research into distinguishing FEs from DDoS attacks.
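To make the three-component model concrete, the sketch below generates a synthetic FE trace from a handful of parameters; the parameter names, the ramp/sustain/decay shape and the Zipf-like resource popularity are illustrative assumptions rather than the exact model proposed in the paper.

# Illustrative flash-event trace generator built around the three components
# named in the abstract: traffic volume, source IPs and resources accessed.
# All parameter names and distributions are assumptions, not the paper's model.
import random

def generate_fe_trace(baseline=50, peak=2000, ramp_up=60, sustain=300, decay=240,
                      new_sources_per_sec=5, n_resources=100, zipf_s=1.2):
    # Zipf-like popularity: a few resources attract most of the requests.
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, n_resources + 1)]
    sources, trace = [], []
    for t in range(ramp_up + sustain + decay):
        if t < ramp_up:                       # volume ramps up towards the peak
            rate = baseline + (peak - baseline) * t / ramp_up
        elif t < ramp_up + sustain:           # sustained peak
            rate = peak
        else:                                 # gradual decay back to baseline
            frac = (t - ramp_up - sustain) / decay
            rate = peak - (peak - baseline) * frac
        # the pool of distinct source IPs keeps growing as new legitimate users arrive
        sources += ["10.%d.%d.%d" % (random.randrange(256), random.randrange(256),
                                     random.randrange(256))
                    for _ in range(new_sources_per_sec)]
        requests = [(random.choice(sources),
                     random.choices(range(n_resources), weights=weights)[0])
                    for _ in range(int(rate))]
        trace.append((t, requests))
    return trace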
Abstract:
This work-in-progress paper presents an ensemble-based model for detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, and its partial implementation. The model utilises network traffic analysis and MIB (Management Information Base) server load analysis features to detect a wide range of network- and application-layer DDoS attacks and distinguish them from Flash Events. The proposed model will be evaluated against realistic synthetic network traffic generated using a software-based traffic generator that we have developed as part of this research. In this paper, we summarise our previous work, describe the work currently being undertaken along with preliminary results, and outline the future directions of this research.
Abstract:
The suitability of Role Based Access Control (RBAC) is being challenged in dynamic environments such as healthcare. In an RBAC system, a user's legitimate access may be denied if their need has not been anticipated by the security administrator at the time of policy specification. Conversely, even when the policy is correctly specified, an authorised user may accidentally or intentionally misuse the granted permission. The heart of the challenge is the intrinsic unpredictability of users' operational needs, as well as their incentives to misuse permissions. In this paper we propose a novel Budget-aware Role Based Access Control (B-RBAC) model that extends RBAC with the explicit notions of budget and cost, where users are assigned a limited budget through which they pay for the cost of the permissions they need. We propose a model in which the values of resources are explicitly defined and an RBAC policy is used as a reference point to discriminate the prices of access permissions, as opposed to representing hard and fast rules for making access decisions. This approach has several desirable properties. It enables users to acquire unassigned permissions if they deem them necessary. However, users' misuse capability is always bounded by their allocated budget, and is further adjustable through the discrimination of permission prices. Finally, it provides a uniform mechanism for the detection and prevention of misuse.
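A minimal sketch of the budget mechanism, assuming an invented BudgetRBAC class: role-assigned permissions are cheaper, unassigned ones carry a surcharge, and every grant is paid for out of the user's remaining budget. The names, prices and surcharge rule are illustrative, not the B-RBAC specification.

# Toy budget-aware access control: the RBAC policy only discriminates prices.
# Class, attribute names and the pricing rule are illustrative assumptions.
class BudgetRBAC:
    def __init__(self, role_permissions, base_prices, budgets, surcharge=3.0):
        self.role_permissions = role_permissions   # role -> set of permissions the policy assigns
        self.base_prices = base_prices             # permission -> cost derived from resource value
        self.budgets = dict(budgets)               # user -> remaining budget
        self.surcharge = surcharge                 # price multiplier for unassigned permissions

    def price(self, role, permission):
        base = self.base_prices[permission]
        # permissions outside the user's role remain obtainable, just more expensive
        return base if permission in self.role_permissions.get(role, set()) else base * self.surcharge

    def request(self, user, role, permission):
        cost = self.price(role, permission)
        if self.budgets.get(user, 0.0) < cost:
            return False                           # misuse capability is bounded by the budget
        self.budgets[user] -= cost                 # the user pays for the granted access
        return True

rbac = BudgetRBAC(role_permissions={"nurse": {"read_vitals"}},
                  base_prices={"read_vitals": 1.0, "read_full_record": 5.0},
                  budgets={"alice": 20.0})
print(rbac.request("alice", "nurse", "read_vitals"))        # cheap, role-assigned permission
print(rbac.request("alice", "nurse", "read_full_record"))   # unassigned but affordable at a surcharge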
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed using the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of this approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom toolchains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
Security indicators in web browsers alert users to the presence of a secure connection between their computer and a web server; many studies have shown that such indicators are largely ignored by users in general. In other areas of computer security, research has shown that technical expertise can decrease user susceptibility to attacks. In this work, we examine whether computer or security expertise affects the use of web browser security indicators. Our study takes place in the context of web-based single sign-on, in which a user can use credentials from a single identity provider to log in to many relying websites; single sign-on is a more complex, and hence more difficult, security task for users. In our study, we used eye trackers and surveyed participants to examine, respectively, the cues individuals actually use and those they report using. Our results show that users with security expertise are more likely to self-report looking at security indicators, and eye-tracking data shows they have longer gaze duration at security indicators than those without security expertise. However, computer expertise alone is not correlated with recorded use of security indicators. In survey questions, neither experts nor novices demonstrate a good understanding of the security consequences of web-based single sign-on.
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA is used to assess the security of a given program based on its design or source code with regard to data flow-based metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled and the resulting Java bytecode program can be used by the tool to assess the program's overall security based on our security metrics.
Abstract:
Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using security metrics that are capable of measuring security from the viewpoint of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs which were refactored according to each rule. New values of the metrics for the refactored programs then confirmed that the code changes had a measurable effect on information security.
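As an illustration of how such metrics can respond to refactoring, the toy below measures the fraction of confidential fields exposed through public accessors over a hand-written class summary; the metric and data model are simplified stand-ins for the paper's bytecode-level analysis, not its actual metrics.

# Simplified stand-in for a potential-information-flow metric: the fraction of
# fields marked confidential that are returned by some public method. The real
# metrics are computed from compiled Java bytecode; this toy only shows how a
# refactoring can shift the measured value.
def exposure_ratio(classes):
    confidential = exposed = 0
    for cls in classes:
        for field in cls["confidential_fields"]:
            confidential += 1
            if field in cls["public_return_fields"]:
                exposed += 1
    return exposed / confidential if confidential else 0.0

before = [{"confidential_fields": {"ssn", "salary"},
           "public_return_fields": {"ssn", "salary", "name"}}]
after = [{"confidential_fields": {"ssn", "salary"},     # after an "encapsulate field"-style refactoring
          "public_return_fields": {"name"}}]
print(exposure_ratio(before), exposure_ratio(after))    # 1.0 -> 0.0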
Abstract:
In this paper we demonstrate how to monitor a smartphone running the Symbian operating system or Windows Mobile in order to extract features for anomaly detection. These features are sent to a remote server, because running a complex intrusion detection system on this kind of mobile device is still not feasible due to capability and hardware limitations. We give examples of how to compute relevant features and introduce the top ten applications used by mobile phone users, based on a study from 2005. The usage of these applications is recorded by a monitoring client and visualized. Additionally, monitoring results of public and self-written malware are shown. To improve monitoring client performance, Principal Component Analysis was applied, which led to a reduction of about 80% in the number of monitored features.
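A brief sketch of the feature-reduction step, assuming scikit-learn's PCA and synthetic monitoring data; neither the library choice nor the data is the paper's actual setup.

# Sketch of PCA-based reduction of monitored smartphone features. The synthetic
# data (50 raw features driven by a few latent usage patterns) and the use of
# scikit-learn are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 8))                  # a few underlying usage patterns
mixing = rng.normal(size=(8, 50))
features = latent @ mixing + 0.05 * rng.normal(size=(500, 50))

pca = PCA(n_components=0.95)                        # keep components explaining 95% of the variance
reduced = pca.fit_transform(features)
print(features.shape[1], "->", reduced.shape[1], "features transmitted to the remote server")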
Abstract:
NeSSi (network security simulator) is a novel network simulation tool which incorporates a variety of features relevant to network security distinguishing it from general-purpose network simulators. Its capabilities such as profile-based automated attack generation, traffic analysis and support for detection algorithm plug-ins allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture as well as its distinguishing features and briefly demonstrate its application to current security research projects.
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often used by malicious people to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provides recommendations in the form of patching strategies, topology changes, and configuration updates.
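The multi-step attack identification can be pictured as a reachability search over exploitability edges; the graph encoding, the made-up CVE labels and the "patch the most-traversed vulnerability" recommendation heuristic below are assumptions for illustration, not the framework's actual design.

# Illustrative multi-step attack search: hosts are nodes, an edge means a
# vulnerability on the target is exploitable from the source host. Encoding,
# CVE labels and the patching heuristic are assumptions, not the paper's.
from collections import deque, Counter

edges = {  # (source host, target host) -> vulnerability enabling the hop (made-up values)
    ("internet", "web01"): "CVE-A",
    ("web01", "app01"): "CVE-B",
    ("app01", "db01"): "CVE-C",
    ("web01", "db01"): "CVE-D",
}

def attack_paths(start, goal):
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        for (src, dst) in edges:
            if src == path[-1] and dst not in path:
                queue.append(path + [dst])
    return paths

paths = attack_paths("internet", "db01")
hops = Counter(edges[(a, b)] for p in paths for a, b in zip(p, p[1:]))
print(paths)                       # all multi-step paths from the internet to the database
print(hops.most_common(1))         # the vulnerability whose patching cuts the most attack paths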
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
Collaborative methods are promising tools for solving complex security tasks. In this context, the authors present the security overlay framework CIMD (Collaborative Intrusion and Malware Detection), enabling participants to state objectives and interests for joint intrusion detection and to find groups for the exchange of security-related data, such as monitoring or detection results, accordingly; the authors refer to these groups as detection groups. First, the authors present and discuss a tree-oriented taxonomy for the representation of nodes within the collaboration model. Second, they introduce and evaluate an algorithm for the formation of detection groups. After conducting a vulnerability analysis of the system, the authors demonstrate the validity of CIMD by examining two different scenarios inspired by sociology in which the collaboration is advantageous compared to the non-collaborative approach. They evaluate the benefit of CIMD by simulation in a novel packet-level simulation environment called NeSSi (Network Security Simulator) and give a probabilistic analysis for the scenarios.
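A toy sketch of detection-group formation by interest overlap follows; the taxonomy attributes and the overlap threshold are assumptions chosen for illustration, not the authors' grouping algorithm.

# Toy detection-group formation: nodes advertise interests drawn from a taxonomy
# and join a group once their interests overlap enough. The attribute names and
# the threshold rule are assumptions, not the CIMD algorithm itself.
def form_groups(nodes, min_overlap=2):
    groups = []
    for name, interests in nodes.items():
        for group in groups:
            if len(group["interests"] & interests) >= min_overlap:
                group["members"].add(name)
                group["interests"] |= interests
                break
        else:
            groups.append({"members": {name}, "interests": set(interests)})
    return groups

nodes = {
    "n1": {"os:linux", "role:webserver", "detect:portscan"},
    "n2": {"os:linux", "detect:portscan", "detect:worm"},
    "n3": {"os:windows", "role:desktop", "detect:malware"},
}
for g in form_groups(nodes):
    print(sorted(g["members"]), sorted(g["interests"]))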
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighboring nodes on this path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key, directly or through a path, with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel, deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
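For the random key-chain baseline against which such deterministic designs are usually compared, the probability that two nodes share at least one key follows directly from the pool and chain sizes; the short calculation below shows that relationship with illustrative numbers (the sizes are not taken from the paper).

# Probability that two nodes with random key-chains of size k drawn from a pool
# of size P share at least one key: 1 - C(P-k, k) / C(P, k). The sizes below
# are illustrative; the deterministic designs in the paper guarantee sharing.
from math import comb

def share_probability(pool_size, chain_size):
    return 1 - comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)

for P, k in [(1000, 30), (1000, 50), (10000, 100)]:
    print(f"pool={P:5d} chain={k:3d} -> P(share) = {share_probability(P, k):.3f}")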
Abstract:
A5/1 is a shift-register-based stream cipher which uses a majority clocking rule to update its registers. It is designed to provide privacy for the GSM system. In this paper, we analyse the initialisation process of A5/1. We demonstrate a sliding property of the A5/1 cipher, whereby every valid internal state is also a legitimate loaded state and multiple key-IV pairs produce phase-shifted keystream sequences. We describe a possible ciphertext-only attack based on this property.
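The majority clocking rule that the sliding property exploits can be sketched directly; the register lengths, feedback taps and clocking-bit positions below follow the commonly published description of A5/1, so treat this as a reference sketch rather than a verified implementation of the cipher.

# Sketch of A5/1's majority clocking, using the commonly published parameters
# (R1: 19 bits, R2: 22 bits, R3: 23 bits). Reference sketch only.
import random

REGS = [  # (register length, feedback tap positions, clocking-bit index)
    (19, (13, 16, 17, 18), 8),
    (22, (20, 21), 10),
    (23, (7, 20, 21, 22), 10),
]

def step(state):
    """Clock the three registers once under the majority rule and return one keystream bit."""
    clock_bits = [state[i][REGS[i][2]] for i in range(3)]
    majority = 1 if sum(clock_bits) >= 2 else 0
    for i, (length, taps, cbit) in enumerate(REGS):
        reg = state[i]
        if reg[cbit] == majority:             # only registers agreeing with the majority are clocked
            feedback = 0
            for t in taps:
                feedback ^= reg[t]
            state[i] = [feedback] + reg[:-1]  # feedback enters at index 0, bits shift towards the MSB
    # keystream bit: XOR of the most significant bits of the three registers
    return state[0][18] ^ state[1][21] ^ state[2][22]

state = [[random.randint(0, 1) for _ in range(length)] for length, _, _ in REGS]
print([step(state) for _ in range(16)])       # a few keystream bits from a random loaded state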
Abstract:
Secure communication between a large number of sensor nodes that are randomly scattered over a hostile territory necessitates efficient key distribution schemes. However, due to the limited resources at sensor nodes, such schemes cannot be based on post-deployment computations. Instead, pairwise (symmetric) keys are required to be pre-distributed by assigning a list of keys (a.k.a. a key-chain) to each sensor node. If a pair of nodes does not have a common key after deployment, then they must find a key-path with secured links. The objective is to minimize the key-chain size while (i) maximizing pairwise key sharing probability and resilience, and (ii) minimizing the average key-path length. This paper presents a deterministic key distribution scheme based on Expander Graphs. It shows how to map the parameters (e.g., degree, expansion, and diameter) of a Ramanujan Expander Graph to the desired properties of a key distribution scheme for a physical network topology.