920 results for Computer System Management
Abstract:
Delegation, from a technical point of view, is widely considered a promising approach to the problem of providing dynamic access control decisions in highly collaborative activities, whether within a single security domain or across multiple security domains. Although delegation continues to attract significant attention from the research community, no published work to date presents a taxonomy of delegation concepts and models. This article addresses that gap by presenting a set of taxonomic criteria relevant to the concept of delegation, and applies the taxonomy to a selection of significant delegation models published in the literature.
Abstract:
Delegation is a powerful mechanism for providing flexible and dynamic access control decisions. It is particularly useful in federated environments where multiple systems, each with its own security autonomy, are connected under one common federation. Although many delegation schemes have been studied, current models do not seriously take into account the delegation commitments of the involved parties. To address this issue, this paper introduces a new mechanism that helps the parties involved in a delegation to express commitment constraints, perform the commitments and track the committed actions. The mechanism covers two aspects: pre-delegation commitment, in which the involved parties express and address the delegation constraints, and post-delegation commitment, in which those parties inform the delegator and service providers how the commitments were carried out. The mechanism utilises a modified SAML assertion structure to support the proposed delegation and constraint approach.
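As a rough illustration of the final point, the sketch below assembles a constraint-carrying SAML assertion using Python's standard xml.etree.ElementTree. The constraint attribute names (e.g. commitment:duration) and the use of a plain AttributeStatement are illustrative assumptions, not the paper's actual modified assertion schema.

    import xml.etree.ElementTree as ET

    SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

    def delegation_assertion(delegator, delegatee, constraints):
        # Issuer identifies the delegator; the Subject is the delegatee.
        a = ET.Element(ET.QName(SAML, "Assertion"))
        ET.SubElement(a, ET.QName(SAML, "Issuer")).text = delegator
        subject = ET.SubElement(a, ET.QName(SAML, "Subject"))
        ET.SubElement(subject, ET.QName(SAML, "NameID")).text = delegatee
        # Pre-delegation commitment constraints, carried as attributes.
        stmt = ET.SubElement(a, ET.QName(SAML, "AttributeStatement"))
        for name, value in constraints.items():
            attr = ET.SubElement(stmt, ET.QName(SAML, "Attribute"), Name=name)
            ET.SubElement(attr, ET.QName(SAML, "AttributeValue")).text = value
        return ET.tostring(a, encoding="unicode")

A delegatee could return an assertion of the same shape in the post-delegation phase, reporting to the delegator and service providers how each committed constraint was discharged.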
Abstract:
Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles for which solving n instances is indeed easier than n times the cost of solving one, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner, so we describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
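For concreteness, here is a minimal sketch of a hash-preimage puzzle of the general kind such definitions cover; the SHA-256 construction, the 8-byte solution encoding and the shrinking-target rule are illustrative choices, not a construction from the paper.

    import hashlib
    import os

    def new_challenge():
        # Server-side: a fresh random challenge per request.
        return os.urandom(16)

    def solve(challenge, bits):
        # Client-side: find x with SHA-256(challenge || x) below a target
        # that shrinks as `bits` grows; expected cost is about 2**bits hashes.
        target = 1 << (256 - bits)
        x = 0
        while True:
            h = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
            if int.from_bytes(h, "big") < target:
                return x
            x += 1

    def verify(challenge, bits, x):
        # Server-side: checking a solution costs a single hash.
        h = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
        return int.from_bytes(h, "big") < (1 << (256 - bits))

The stronger definitions ask, in effect, whether solving n independent challenges of this form really costs an adversary about n times the expected work of solving one.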
Abstract:
We define a semantic model for purpose, based on which purpose-based privacy policies can be meaningfully expressed and enforced in a business system. The model is based on the intuition that the purpose of an action is determined by its situation among other inter-related actions. Actions and their relationships can be modeled as an action graph derived from the business processes in a system. Accordingly, a modal logic and a corresponding model checking algorithm are developed for formally expressing purpose-based policies and verifying whether a particular system complies with them. It is also shown, through examples, how typical purpose-based policies as well as some new policy types can be expressed and checked using our model.
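The underlying intuition can be pictured with a toy action graph in which an action serves a purpose when it can lead to an action belonging to that purpose; the reachability rule and all names below are deliberate simplifications, standing in for the paper's modal logic and model checking algorithm.

    from collections import defaultdict

    class ActionGraph:
        def __init__(self):
            self.succ = defaultdict(set)  # edges between inter-related actions

        def add_edge(self, a, b):
            self.succ[a].add(b)

        def serves_purpose(self, action, purpose_actions):
            # Simplified rule: an action serves a purpose if some action
            # associated with that purpose is reachable from it.
            seen, stack = set(), [action]
            while stack:
                a = stack.pop()
                if a in purpose_actions:
                    return True
                if a not in seen:
                    seen.add(a)
                    stack.extend(self.succ[a])
            return False

On a graph built from a hypothetical billing process, serves_purpose("collect_address", {"ship_order"}) would accept collecting an address for shipping while rejecting the same action for an unrelated purpose.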
Abstract:
The Katz and Kahn (1978) motivational framework is an open-systems management theory that underscores the importance of self-regulation while stressing the significance of using continuous feedback to adapt in a rapidly changing environment. This study examines Katz and Kahn's propositions that implementing a system of rule compliance, external rewards and internalized motivation can decrease employee turnover, increase quantitative and qualitative standards of performance, and enhance cooperation and creativity. The results from 233 Chinese employees (a 96.6% response rate) indicated partial support for Katz and Kahn's motivational framework. The implications for improving the Chinese workforce, in particular blue-collar occupations, are discussed.
Abstract:
In today's technological age, fraud has become more complicated and increasingly difficult to detect, especially when it is collusive in nature. Fraud surveys show that the median loss from collusive fraud is much greater than that from fraud perpetrated by a single person. Despite its prevalence and potentially devastating effects, collusion is commonly overlooked as an organizational risk, and internal auditors often fail to proactively consider it in their fraud assessment and detection efforts. In this paper, we consider fraud scenarios involving collusion. We present six potentially collusive fraudulent behaviors and show how they are detected in an ERP system. We have enhanced our fraud detection framework to aggregate different sources of logs in order to detect communication between colluding parties, and have further made it system-agnostic, achieving portability and general applicability across ERP systems.
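As a sketch of one such detection, the toy rule below flags a classic two-person scheme, where one user creates a vendor and a different user approves a payment to it soon afterwards, over an aggregated log. The record layout, action names and time window are illustrative assumptions, not the framework's actual scenarios.

    def flag_vendor_collusion(logs, window=86400):
        # logs: iterable of (timestamp, user, action, object_id) tuples,
        # already normalised from the ERP system's different log sources.
        created = {}   # vendor_id -> (creation time, creating user)
        alerts = []
        for t, user, action, obj in sorted(logs):
            if action == "create_vendor":
                created[obj] = (t, user)
            elif action == "approve_payment" and obj in created:
                t0, creator = created[obj]
                if user != creator and t - t0 <= window:
                    # Two distinct users acting in concert within the
                    # window is a collusion candidate for review.
                    alerts.append((creator, user, obj, t))
        return alerts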
Abstract:
As organizations reach higher levels of business process management maturity, they often find themselves maintaining repositories of hundreds or even thousands of process models, representing valuable knowledge about their operations. Over time, process model repositories tend to accumulate duplicate fragments (also called clones) as new process models are created or extended by copying and merging fragments from other models. This calls for methods to detect clones in process models, so that these clones can be refactored into separate subprocesses to improve maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. The proposed index is based on a novel combination of a method for process model decomposition (specifically, the Refined Process Structure Tree) with established graph canonization and string matching techniques. Experiments show that the algorithm scales to repositories with hundreds of models, and that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.
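The core indexing idea can be sketched as hashing every fragment under a canonical form so that duplicated fragments collide in the index; the toy canonization below (sorting labelled edges) stands in for the paper's combination of the Refined Process Structure Tree with graph canonization and string matching.

    from collections import defaultdict

    def canonical_form(fragment):
        # Toy canonization: treat a fragment as a set of labelled edges and
        # sort them, so structurally identical copies map to one string.
        return "|".join(sorted(f"{a}->{b}" for a, b in fragment))

    class CloneIndex:
        def __init__(self):
            self.occurrences = defaultdict(list)

        def add(self, model_id, fragments):
            # fragments: the decomposed regions of one process model.
            for frag in fragments:
                self.occurrences[canonical_form(frag)].append(model_id)

        def clones(self):
            # Any canonical form occurring more than once is a clone
            # candidate for refactoring into a shared subprocess.
            return {k: v for k, v in self.occurrences.items() if len(v) > 1}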
Abstract:
The ultimate goal of an authorisation system is to allocate each user the level of access they need to complete their job - no more and no less. This proves challenging in an organisational setting: on one hand, employees need enough access to perform their tasks; on the other, more access brings an increased risk of misuse - either intentional, where an employee uses the access for personal benefit, or unintentional, through carelessness, losing the information or being socially engineered into giving access to an adversary. With the goal of developing a more dynamic authorisation model, we adopt a game theoretic framework to reason about the factors that may affect a user's likelihood of misusing a permission at the time of an access decision. Game theory provides a useful but previously ignored perspective in authorisation theory: the notion of the user as a self-interested player who selects among a range of possible actions depending on their pay-offs.
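A bare-bones rendering of that perspective is an expected-utility check at decision time; the linear utility and its parameters are assumptions for exposition, not the authors' actual game.

    def misuse_is_rational(benefit, penalty, detection_prob):
        # Expected payoff to a self-interested user of misusing a
        # permission: the benefit if undetected, minus the penalty if
        # caught. Honest behaviour is normalised to a payoff of zero.
        return (1 - detection_prob) * benefit - detection_prob * penalty > 0

    def access_decision(benefit, penalty, detection_prob):
        # A dynamic model could deny, or grant with extra auditing,
        # whenever the incentives favour misuse.
        if misuse_is_rational(benefit, penalty, detection_prob):
            return "deny"
        return "grant"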
Abstract:
Digital forensic examiners often need to identify the type of a file or file fragment based only on its content. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time consuming. This paper proposes two techniques for reducing the classification time. The first selects a subset of features based on their frequency of occurrence. The second speeds classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in analysis time can be achieved with limited impact on accuracy.
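The sampling technique can be pictured in a few lines: build the 256-bin byte frequency distribution from a handful of randomly chosen blocks rather than the whole file, then hand the resulting vector to whatever classifier is in use. The block size and count below are illustrative parameters, not the paper's tuned values.

    import os
    import random

    def sampled_byte_frequency(path, block_size=4096, num_blocks=8):
        counts = [0] * 256
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            for _ in range(num_blocks):
                # Read a block at a random offset; files smaller than one
                # block are simply read from the start.
                f.seek(random.randrange(max(1, size - block_size + 1)))
                for byte in f.read(block_size):
                    counts[byte] += 1
        total = sum(counts) or 1
        return [c / total for c in counts]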
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program, which are in turn mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
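One of the low-level metrics might look like the sketch below, which scores how widely 'classified' attributes are readable (lower exposure being better). The input layout and the ratio itself are illustrative assumptions; the paper's metrics are defined over compiled Java bytecode.

    def classified_attribute_exposure(classes, classified_names):
        # Fraction of classified attributes readable outside their own
        # class, i.e. declared public or exposed through an accessor.
        exposed = total = 0
        for cls in classes:
            for attr in cls["attributes"]:
                if attr["name"] in classified_names:
                    total += 1
                    if attr["visibility"] == "public" or attr.get("has_getter"):
                        exposed += 1
        return exposed / total if total else 0.0

Ratios of this kind would then be aggregated through the readability and writability properties and the design principles into the single program-level index described above.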
Abstract:
The analysis of security protocols has been an active area of research in recent years, and different types of tools have been developed to make the analysis process more precise, fast and easy. However, these tools treat security protocols as black boxes that cannot easily be composed, making it difficult or impossible to perform low-level analysis or to combine different tools with one another. This research uses Coloured Petri Nets (CPN) to analyze the OSAP trusted computing protocol. The protocol is modeled at different levels and analyzed using the state space method. The resulting model can be combined with models of other trusted computing protocols in future work.
Abstract:
We investigate known security flaws in the context of security ceremonies to gain an understanding of the ceremony analysis process. The term security ceremony describes a system of protocols and humans which interact for a specific purpose. Security ceremonies and ceremony analysis form an area of research in its infancy, and we explore the basic principles involved to better understand the issues. We analyse three ceremonies (HTTPS, EMV and Opera Mini) and use the information gained from the experience to establish a list of typical flaws in ceremonies. Finally, we use that list to analyse a protocol proven secure for human use. This leads to a realisation of the strengths and weaknesses of ceremony analysis.
Abstract:
We present an automated method for verifying the security of Diffie–Hellman–based key exchange protocols. The method combines a Hoare-style logic with syntactic checking, and is applied to protocols in a simplified version of the Bellare–Rogaway–Pointcheval model (2000). The security of a protocol in the complete model can then be established automatically by the modular proof technique of Kudla and Paterson (2005).
Abstract:
Bana et al. proposed the formal indistinguishability relation (FIR), an equivalence between two terms built from an abstract algebra, which Ene et al. later extended to cover active adversaries and random oracles. This notion enables a framework for verifying computational indistinguishability while still offering the simplicity and formality of symbolic methods. We are building an automated tool for checking FIR between two terms. First, we extend the work of Ene et al. further, covering ordered sorts and simplifying the treatment of random oracles. Second, we investigate the possibility of combining algebras, which makes the tool scalable and able to cover a wide class of cryptographic schemes; in particular, we show that a combined algebra remains computationally sound as long as each component algebra is sound. Third, we design proving strategies and implement the tool. Essentially, the strategies find a sequence of intermediate terms, each formally indistinguishable from the next, between two given terms; FIR between the two given terms is then guaranteed by the transitivity of FIR. Finally, we show applications of the work, for example to key exchanges and encryption schemes, and the tool should be easily extensible to cover many more schemes. This work continues our previous research on the use of compilers to aid automated proofs for key exchange.
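The chain-finding strategy can be pictured as a search over soundness-preserving rewrite steps; the breadth-first search below and its opaque rewrite_steps functions are an illustrative framing, not the tool's actual strategies.

    from collections import deque

    def indistinguishability_chain(start, goal, rewrite_steps, max_depth=6):
        # Search for a sequence of intermediate terms, each one rewrite
        # away from the previous; transitivity of FIR then relates
        # `start` and `goal`. Each step maps a term to candidate terms
        # it is formally indistinguishable from.
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            term, chain = frontier.popleft()
            if term == goal:
                return chain
            if len(chain) > max_depth:
                continue
            for step in rewrite_steps:
                for nxt in step(term):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, chain + [nxt]))
        return None  # no chain found within the depth bound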