832 results for role-based access control
Abstract:
This article presents a survey of authorisation models and considers their ‘fitness-for-purpose’ in facilitating information sharing. Network-supported information sharing is an important technical capability that underpins collaboration in support of dynamic and unpredictable activities such as emergency response, national security, infrastructure protection, supply chain integration and emerging business models based on the concept of a ‘virtual organisation’. The article argues that present authorisation models are inflexible and poorly scalable in such dynamic environments due to their assumption that the future needs of the system can be predicted, which in turn justifies the use of persistent authorisation policies. The article outlines the motivation and requirement for a new flexible authorisation model that addresses the needs of information sharing. It proposes that a flexible and scalable authorisation model must allow an explicit specification of the objectives of the system and access decisions must be made based on a late trade-off analysis between these explicit objectives. A research agenda for the proposed Objective-based Access Control concept is presented.
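The "late trade-off analysis between explicit objectives" can be pictured with a small sketch. This is an illustrative Python toy, not the article's model: the objective functions, weights and zero threshold are all invented here to show the shape of an objective-based decision made at request time rather than from a persistent policy.

```python
# Illustrative sketch (not from the article): a "late" access decision that
# weighs explicitly declared system objectives against the live request
# context, instead of evaluating a pre-authored persistent policy.

def decide(request, objectives, weights):
    """Score each objective against the request context and grant access
    when the weighted benefits outweigh the weighted risks."""
    score = sum(weights[name] * fn(request) for name, fn in objectives.items())
    return score > 0

# Two toy objectives: the benefit of sharing vs. the risk of exposure.
objectives = {
    "enable_sharing": lambda r: 1.0 if r["emergency"] else 0.2,
    "limit_exposure": lambda r: -0.8 if r["sensitivity"] == "high" else -0.1,
}
weights = {"enable_sharing": 1.0, "limit_exposure": 1.0}

# In an emergency, the sharing objective dominates even for sensitive data.
assert decide({"emergency": True, "sensitivity": "high"}, objectives, weights)
assert not decide({"emergency": False, "sensitivity": "high"}, objectives, weights)
```

The point of the sketch is that the same request can be granted or denied depending on the current balance of objectives, which is exactly what a persistent authorisation policy cannot express.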
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved for the `best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.
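The abstract does not give the SER formula, so the following Python sketch only illustrates the surrounding idea: instead of the exponential "best combination" search, pick for each stage of the sequential chain the classifier most likely to decide correctly, using per-stage error estimates as a hypothetical stand-in for the SER ranking.

```python
# Hypothetical stand-in for SER-style selection: rank classifiers per stage
# by an error estimate and assign them greedily, avoiding the exponential
# search over all orderings of the pool.

def select_order(stage_errors):
    """stage_errors[c][s] = estimated error rate of classifier c at stage s.
    Greedily assign to each stage the unused classifier with the lowest
    error estimate for that stage."""
    n_stages = len(next(iter(stage_errors.values())))
    used, order = set(), []
    for s in range(n_stages):
        best = min((c for c in stage_errors if c not in used),
                   key=lambda c: stage_errors[c][s])
        used.add(best)
        order.append(best)
    return order

# Toy pool of two speaker models: hmm_a is stronger early, hmm_b later.
errors = {"hmm_a": [0.10, 0.30], "hmm_b": [0.20, 0.05]}
assert select_order(errors) == ["hmm_a", "hmm_b"]
```

The greedy pass is linear in the number of stages, which mirrors the motivation the abstract gives for replacing the exhaustive combination search.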
Abstract:
Using constraint databases as the theoretical foundation, this paper gives an abstract description of access requests, attribute authorities, policies and the decision procedure, and presents an attribute-based access control model. The relationships among access requests, attribute authorities, policies and the decision procedure within the model are discussed, and a specific condition is given under which the access control decision procedure is guaranteed to terminate.
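A minimal sketch of an attribute-based decision procedure of the kind described above; this is an illustrative toy, not the paper's formal model. Termination here is trivial because the rule list and attribute sets are finite.

```python
# Minimal ABAC sketch (illustrative): a policy is a list of attribute sets;
# the decision procedure grants a request iff some rule's required
# attributes are a subset of the attributes certified for the subject.

def abac_decide(subject_attrs, rules):
    """Grant iff any rule is fully satisfied by the subject's attributes."""
    return any(required <= subject_attrs for required in rules)

rules = [{"role:nurse", "ward:icu"},   # nurses may access, but only in ICU
         {"role:doctor"}]              # doctors may access unconditionally

assert abac_decide({"role:doctor", "dept:cardiology"}, rules)
assert not abac_decide({"role:nurse"}, rules)   # nurse lacks ward:icu
```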
Abstract:
Nowadays, due to the security vulnerabilities of distributed systems, mechanisms are needed to guarantee the security requirements of distributed object communications. Middleware platforms (component integration platforms) provide security functions that typically offer services for auditing, message protection, authentication, and access control. In order to support these functions, middleware platforms use digital certificates that are provided and managed by external entities. However, most middleware platforms do not define requirements for obtaining, maintaining, validating and delegating digital certificates. In addition, most digital certification systems use X.509 certificates, which are complex and have a lot of attributes. In order to address these problems, this work proposes a generic digital certification service for middleware platforms. This service provides flexibility via the joint use of public key certificates, to implement the authentication function, and attribute certificates, to implement the authorization function. It also supports delegation. Certificate-based access control is transparent for objects. The proposed service defines the digital certificate format, the storage and retrieval system, certificate validation, and support for delegation. In order to validate the proposed architecture, this work presents an implementation of the digital certification service for the CORBA middleware platform and a case study that illustrates the service's functionalities.
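The split the service makes, identity from a public-key certificate and rights from attribute certificates with delegation chains, can be sketched as follows. All class and field names here are hypothetical, chosen only to illustrate the idea.

```python
# Toy illustration (names hypothetical): a public-key certificate answers
# "who is this?", while attribute certificates answer "what may they do?",
# and delegation is a chain of attribute certificates back to a trust root.

class Cert:
    def __init__(self, subject, issuer, attrs=None):
        self.subject, self.issuer, self.attrs = subject, issuer, attrs or set()

def authorize(identity_cert, attr_certs, needed, trust_root):
    """Walk the delegation chain: allow iff every link grants the needed
    attribute and the chain ends at the trust root."""
    cert = next((c for c in attr_certs if c.subject == identity_cert.subject), None)
    seen = set()
    while cert is not None and cert.subject not in seen:
        seen.add(cert.subject)              # guard against cyclic chains
        if needed not in cert.attrs:
            return False
        if cert.issuer == trust_root:
            return True
        cert = next((c for c in attr_certs if c.subject == cert.issuer), None)
    return False

ca = "CA"
id_cert = Cert("alice", ca)                            # public-key cert
chain = [Cert("alice", "manager", {"read"}),           # manager delegates read
         Cert("manager", ca, {"read", "write"})]       # root-issued rights

assert authorize(id_cert, chain, "read", ca)
assert not authorize(id_cert, chain, "delete", ca)
```

Keeping the two certificate kinds separate is what lets rights be delegated or revoked without touching the identity certificate, which is the flexibility the abstract emphasizes.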
Abstract:
Networks are the substance of human communities and societies; they constitute the structural framework on which we relate to each other and determine the way we do it, the way information is disseminated or even the way people get things done. But network prominence goes beyond the importance it acquires in social networks. Networks are found within numerous known structures, from protein interactions inside a cell to router connections on the internet. Social networks have been present on the internet since its beginnings, in email for example. Inside every email client there are contact lists that, added together, constitute a social network. However, it was with the emergence of social network sites (SNS) that these kinds of web applications reached general awareness. SNS are now among the most popular sites on the web and those with the highest traffic. Sites such as Facebook and Twitter hold astonishing figures for active users, traffic and time invested in the sites. Nevertheless, SNS functionalities are not restricted to contact-oriented social networks, those that are focused on building your own list of contacts and interacting with them.
There are other examples of sites that leverage social networking to foster user activity and engagement around other types of content. Examples go from early SNS such as Flickr, the photography-related networking site, to Github, the most popular social code repository nowadays. It is not an accident that the popularity of these websites comes hand-in-hand with their social network capabilities. The scenario is even richer, due to the fact that SNS interact with each other, sharing and exporting contact lists and authentication as well as providing a valuable channel to publicize user activity on other sites. These interactions are very recent, and SNS are still finding their way to the point where they overcome their condition of data silos and reach a stage of full interoperability between sites, in the same way email and instant messaging networks work today. This work introduces a technology that allows any kind of distributed social network website to be built rapidly. It first introduces a new technique for creating middleware that can provide any kind of content management feature to a popular model-view-controller (MVC) web development framework, Ruby on Rails. It provides developers with tools that allow them to abstract away the complexities of content management and focus on the development of the specific content itself. The same technique is also used to provide the framework with social network features. Additionally, it describes a new metric of code reuse to demonstrate the validity of the kind of middleware that is emerging in MVC frameworks. Secondly, the characteristics of the most popular SNS are analysed in order to find the common patterns they exhibit. This analysis is the ground for defining the requirements of a framework for building social network websites. Next, a reference architecture supporting the features found in the analysis is proposed.
This architecture has been implemented in a software component, called Social Stream, and tested in several social networks, both contact- and content-oriented, in local neighbourhood associations and EU-funded research projects. It has also been the ground for several Master's theses. It has been released as free and open source software, has obtained a growing community, and is now being used beyond the scope of this work. The social architecture has enabled the definition of a new social-based access control model that overcomes some of the limitations currently present in access control models for social networks. Furthermore, paradigms and case studies in distributed SNS have been analysed, gathering a set of features for distributed social networking. Finally, the architecture of the framework has been extended to support distributed SNS capabilities. Its implementation has also been validated in EU-funded research projects.
Abstract:
Secure Access For Everyone (SAFE), is an integrated system for managing trust
using a logic-based declarative language. Logical trust systems authorize each
request by constructing a proof from a context---a set of authenticated logic
statements representing credentials and policies issued by various principals
in a networked system. A key barrier to practical use of logical trust systems
is the problem of managing proof contexts: identifying, validating, and
assembling the credentials and policies that are relevant to each trust
decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data
repository for storing the credentials and policies; (ii) introducing a
programmable credential discovery and assembly layer that generates the
appropriate tailored context for a given request. The authenticated data
repository is built upon a scalable key-value store with its contents named by
secure identifiers and certified by the issuing principal. The SAFE language
provides scripting primitives to generate and organize logic sets representing
credentials and policies, materialize the logic sets as certificates, and link
them to reflect delegation patterns in the application. The authorizer fetches
the logic sets on demand, then validates and caches them locally for further
use. Upon each request, the authorizer constructs the tailored proof context
and provides it to the SAFE inference for certified validation.
Delegation-driven credential linking with certified data distribution provides
flexible and dynamic policy control enabling security and trust infrastructure
to be agile, while addressing the perennial problems related to today's
certificate infrastructure: automated credential discovery, scalable
revocation, and issuing credentials without relying on centralized authority.
We envision SAFE as a new foundation for building secure network systems. We
used SAFE to build secure services based on case studies drawn from practice:
(i) a secure name service resolver similar to DNS that resolves a name across
multi-domain federated systems; (ii) a secure proxy shim to delegate access
control decisions in a key-value store; (iii) an authorization module for a
networked infrastructure-as-a-service system with a federated trust structure
(NSF GENI initiative); and (iv) a secure cooperative data analytics service
that adheres to individual secrecy constraints while disclosing the data. We
present empirical evaluation based on these case studies and demonstrate that
SAFE supports a wide range of applications with low overhead.
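The context-assembly layer described above can be pictured with a small sketch. This is an illustrative toy, not SAFE's implementation: the store layout, token names and fact tuples are invented here, but they mirror the stated design of certified logic sets named by secure identifiers, linked to reflect delegation, and fetched on demand to build a tailored proof context.

```python
# Illustrative sketch (identifiers hypothetical): certified logic sets live
# in a key-value store under secure identifiers; each set links to others to
# mirror delegation, and the authorizer fetches the transitive closure to
# assemble the tailored proof context for one request.

store = {
    "hash(ca)/geni-root": {"facts": {("trusts", "ca", "campus")},
                           "links": []},
    "hash(campus)/alice": {"facts": {("member", "alice", "campus")},
                           "links": ["hash(ca)/geni-root"]},
}

def assemble_context(token, store, seen=None):
    """Fetch a logic set and, recursively, every set it links to."""
    seen = seen if seen is not None else set()
    if token in seen:                     # tolerate cyclic delegation links
        return set()
    seen.add(token)
    entry = store[token]
    facts = set(entry["facts"])
    for link in entry["links"]:
        facts |= assemble_context(link, store, seen)
    return facts

ctx = assemble_context("hash(campus)/alice", store)
assert ("member", "alice", "campus") in ctx
assert ("trusts", "ca", "campus") in ctx
```

In the real system the fetched sets are certificates validated and cached by the authorizer before the inference step; here the "proof" is reduced to collecting the facts a prover would consume.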
Abstract:
9 p.
Abstract:
Both learning and basic biological mechanisms have been shown to play a role in the control of protein intake. It has previously been shown that rats can adapt their dietary selection patterns successfully in the face of changing macronutrient requirements and availability. In particular, it has been demonstrated that when access to dietary protein is restricted for a period of time, rats selectively increase their consumption of a protein-containing diet when it becomes available. Furthermore, it has been shown that animals are able to associate various orosensory cues with a food's nutrient content. In addition to the role that learning plays in food intake, there are also various biological mechanisms that have been shown to be involved in the control of feeding behaviour. Numerous studies have documented that various hormones and neurotransmitter substances mediate food intake. One such hormone is growth hormone-releasing factor (GRF), a peptide that induces the release of growth hormone (GH) from the anterior pituitary gland. Recent research by Vaccarino and Dickson (1994) suggests that GRF may stimulate food intake by acting as a neurotransmitter in the suprachiasmatic nucleus (SCN) and the adjacent medial preoptic area (MPOA). In particular, when GRF is injected directly into the SCN/MPOA, it has been shown to selectively enhance the intake of protein in both food-deprived and sated rats. Thus, GRF may play a role in activating protein consumption generally, and when animals have a need for protein, GRF may serve to trigger protein-seeking behaviour. Although researchers have separately examined the role of learning and the central mechanisms involved in the control of protein selection, no one has yet attempted to bring together these two lines of study. Thus, the purpose of this study is to join these two parallel lines of research in order to further our understanding of the mechanisms controlling protein selection.
In order to ascertain the combined effects that GRF and learning have on protein intake, several hypotheses were examined. One major hypothesis was that rats would successfully alter their dietary selection patterns in response to protein restriction. It was speculated that rats kept on a nutritionally complete maintenance diet (NCMD) would consume equal amounts of the intermittently presented high-protein conditioning diet (HPCD) and protein-free conditioning diet (PFCD). However, it was hypothesized that rats kept on a protein-free maintenance diet (PFMD) would selectively increase their intake of the HPCD. Another hypothesis was that rats would learn to associate a distinct marker flavour with the nutritional content of the diets. If an animal is able to make the association between a marker flavour and the nutrient content of the food, then it was hypothesized that it would consume more of a mixed diet (equal portions HPCD and PFCD) with the marker flavour that was previously paired with the HPCD (MixedHP-F) when kept on the PFMD. In addition, it was hypothesized that intracranial injection of GRF into the SCN/MPOA would result in a selective increase in HPCD as well as MixedHP-F consumption. Results demonstrated that the rats did in fact selectively increase their consumption of the flavoured HPCD and MixedHP-F when kept on the NCMD. These findings indicate that the rats successfully learned about the nutrient content of the conditioning diets and were able to associate a distinct marker flavour with the nutrient content of the diets. However, the results failed to support previous findings that GRF increases protein intake. In contrast, the administration of GRF significantly reduced consumption of the HPCD during the first hour of testing as compared to the no-injection condition. In addition, no differences in the intake of the HPCD were found between the GRF and vehicle conditions.
Because GRF did not selectively increase HPCD consumption, it was not surprising that GRF also did not increase MixedHP-F intake. What was interesting was that administration of GRF and vehicle did not reduce MixedHP-F consumption as it had decreased HPCD consumption.
Abstract:
The BlackEnergy malware targeting critical infrastructures has a long history. It evolved over time from a simple DDoS platform into quite sophisticated plug-in-based malware. The plug-in architecture has a persistent malware core with easily installable attack-specific modules for DDoS, spamming, info-stealing, remote access, boot-sector formatting, etc. BlackEnergy has been involved in several high-profile cyber-physical attacks, including the recent Ukraine power grid attack in December 2015. This paper investigates the evolution of BlackEnergy and its cyber attack capabilities. It presents a basic cyber attack model used by BlackEnergy for targeting industrial control systems. In particular, the paper analyzes the cyber threats of BlackEnergy for synchrophasor-based systems, which are used for real-time control and monitoring functionalities in the smart grid. Several BlackEnergy-based attack scenarios have been investigated by exploiting the vulnerabilities in two widely used synchrophasor communication standards: (i) IEEE C37.118 and (ii) IEC 61850-90-5. Specifically, the paper addresses reconnaissance, DDoS, man-in-the-middle and replay/reflection attacks on IEEE C37.118 and IEC 61850-90-5. Further, the paper also investigates protection strategies for the detection and prevention of BlackEnergy-based cyber-physical attacks.
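One of the attack classes mentioned above, replay, lends itself to a simple illustration. IEEE C37.118 data frames carry a second-of-century (SOC) and fraction-of-second (FRACSEC) timestamp, so a cheap first-line guard is to reject frames whose timestamp does not advance. This is a hedged sketch, not a protection strategy from the paper, and it assumes frames are already parsed into dictionaries.

```python
# Toy replay filter for synchrophasor data frames: C37.118 frames are
# timestamped with SOC/FRACSEC, so a replayed frame reuses an old timestamp
# and can be dropped by requiring strictly increasing time.

def filter_replays(frames):
    """Keep only frames whose (soc, fracsec) strictly increases."""
    accepted, last = [], (-1, -1)
    for frame in frames:
        ts = (frame["soc"], frame["fracsec"])
        if ts > last:
            accepted.append(frame)
            last = ts
    return accepted

frames = [{"soc": 100, "fracsec": 0},
          {"soc": 100, "fracsec": 1},
          {"soc": 100, "fracsec": 0}]   # third frame replays the first
assert filter_replays(frames) == frames[:2]
```

A real defence would also authenticate the frames (e.g. the keyed signatures of IEC 61850-90-5), since an attacker who can forge timestamps defeats a monotonicity check alone.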
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and Intranet. Many employees are working from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" may indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and in the analysis of network traffic.
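The n-gram idea described above can be sketched in a few lines. This is an illustrative toy, not Intruder Detector itself: the action names, the bigram order and the deviation score are invented here, but the shape, building a per-user model of action n-grams from web logs and scoring new sessions against it, is the one the abstract describes.

```python
# Sketch of n-gram continuous authentication: profile a user's action
# bigrams from historical web logs, then score a live session by the
# fraction of its bigrams never seen in the profile.

from collections import Counter

def profile(actions, n=2):
    """Relative frequencies of the n-grams in a user's action history."""
    grams = [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def anomaly_score(actions, model, n=2):
    """Fraction of the session's n-grams absent from the user's profile."""
    grams = [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]
    unseen = sum(1 for g in grams if g not in model)
    return unseen / max(len(grams), 1)

normal = ["login", "search", "view", "search", "view", "logout"]
model = profile(normal)

assert anomaly_score(["login", "search", "view", "logout"], model) < 0.5
assert anomaly_score(["login", "admin", "export", "export"], model) > 0.5
```

A deployed system would smooth unseen n-grams and threshold the score per role rather than globally, but the core signal, how far a session drifts from the user's own model, is the same.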
Abstract:
Call Level Interfaces (CLI) are low-level APIs that play a key role in database applications whenever finely tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI were not designed to address organizational requirements or contextual runtime requirements. Among the examples, we emphasize the need to decouple (or not) the development process of business tiers from the development process of application tiers, and also the need to adapt automatically to new business and/or security needs at runtime. To tackle these CLI drawbacks, while simultaneously keeping their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). This paper presents the reference architecture for those components and a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
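The paper's proof of concept is Java/JDBC; the following Python/sqlite3 sketch (all identifiers hypothetical) only illustrates the core ABTC idea: a business-tier component wrapping a CLI whose set of permitted statements can be swapped at runtime to follow new business or security needs, without redeploying the application tier.

```python
# Illustrative ABTC-style component: the business tier exposes named
# operations backed by CLI statements, and the permitted set can be
# replaced at runtime (e.g. when a security policy changes).

import sqlite3

class AdaptableBusinessTier:
    def __init__(self, conn, allowed):
        self.conn, self.allowed = conn, dict(allowed)

    def adapt(self, allowed):
        """Swap the permitted-statement set at runtime."""
        self.allowed = dict(allowed)

    def run(self, name, params=()):
        if name not in self.allowed:
            raise PermissionError(name)
        return self.conn.execute(self.allowed[name], params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")

tier = AdaptableBusinessTier(conn, {"list": "SELECT x FROM t"})
assert tier.run("list") == [(1,)]

tier.adapt({})                       # policy tightened at runtime
try:
    tier.run("list")
    raise AssertionError("should have been blocked")
except PermissionError:
    pass
```

The application tier only ever sees operation names, which is the decoupling the abstract argues raw CLI cannot provide on their own.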
Abstract:
We define a semantic model for purpose, based on which purpose-based privacy policies can be meaningfully expressed and enforced in a business system. The model is based on the intuition that the purpose of an action is determined by its situation among other inter-related actions. Actions and their relationships can be modeled in the form of an action graph, which is based on the business processes in a system. Accordingly, a modal logic and a corresponding model checking algorithm are developed for the formal expression of purpose-based policies and for verifying whether a particular system complies with them. It is also shown, through various examples, how typical purpose-based policies as well as some new policy types can be expressed and checked using our model.
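The intuition that "the purpose of an action is determined by its situation among other inter-related actions" can be reduced, very roughly, to reachability over the action graph. This sketch is an illustration of that intuition only, not the paper's modal logic or its model checking algorithm; the action names are invented.

```python
# Toy purpose check over an action graph: an action serves a purpose if
# some path from it reaches an action that realizes that purpose.

def serves_purpose(graph, action, purpose_actions):
    """Depth-first search from `action` through the action graph."""
    stack, seen = [action], set()
    while stack:
        a = stack.pop()
        if a in purpose_actions:
            return True
        if a not in seen:
            seen.add(a)
            stack.extend(graph.get(a, []))
    return False

graph = {"collect_email": ["send_invoice", "send_ads"],
         "send_invoice": [],
         "send_ads": []}

# Collecting an email address can serve the billing purpose...
assert serves_purpose(graph, "collect_email", {"send_invoice"})
# ...but sending ads never leads to billing, so it cannot claim it.
assert not serves_purpose(graph, "send_ads", {"send_invoice"})
```

A policy such as "data collected for billing may only feed actions whose purpose is billing" then becomes a set of such reachability obligations, which is the flavour of property the paper's model checker verifies formally.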
Abstract:
Knowledge has been recognised as a powerful yet intangible asset, which is difficult to manage. This is especially true in a project environment where there is the potential to repeat mistakes, rather than learn from previous experiences. The literature in the project management field has recognised the importance of knowledge sharing (KS) within and between projects. However, studies in that field focus primarily on KS mechanisms including lessons learned (LL) and post project reviews as the source of knowledge for future projects, and only some preliminary research has been carried out on the aspects of project management offices (PMOs) and organisational culture (OC) in KS. This study undertook to investigate KS behaviours in an inter-project context, with a particular emphasis on the role of trust, OC and a range of knowledge sharing mechanisms (KSM) in achieving successful inter-project knowledge sharing (I-PKS). An extensive literature search resulted in the development of an I-PKS Framework, which defined the scope of the research and shaped its initial design. The literature review indicated that existing research relating to the three factors of OC, trust and KSM remains inadequate in its ability to fully explain the role of these contextual factors. In particular, the literature review identified these areas of interest: (1) the conflicting answers to some of the major questions related to KSM, (2) the limited empirical research on the role of different trust dimensions, (3) limited empirical evidence of the role of OC in KS, and (4) the insufficient research on KS in an inter-project context. The resulting Framework comprised the three main factors including: OC, trust and KSM, demonstrating a more integrated view of KS in the inter-project context. Accordingly, the aim of this research was to examine the relationships between these three factors and KS by investigating behaviours related to KS from the project managers' (PMs') perspective.
In order to achieve the aim, this research sought to answer the following research questions: 1. How does organisational culture influence inter-project knowledge sharing? 2. How does the existence of three forms of trust — (i) ability, (ii) benevolence and (iii) integrity — influence inter-project knowledge sharing? 3. How can different knowledge sharing mechanisms (relational, project management tools and process, and technology) improve inter-project knowledge sharing behaviours? 4. How do the relationships between these three factors of organisational culture, trust and knowledge sharing mechanisms improve inter-project knowledge sharing? a. What are the relationships between the factors? b. What is the best fit for given cases to ensure more effective inter-project knowledge sharing? Using multiple case studies, this research was designed to build propositions emerging from cross-case data analysis. The four cases were chosen on the basis of theoretical sampling. All cases were large project-based organisations (PBOs), with a strong matrix-type structure, as per the typology proposed by the Project Management Body of Knowledge (PMBoK) (2008). Data were collected from project management departments of the respective organisations. A range of analytical techniques were used to deal with the data including pattern matching logic and explanation building analysis, complemented by the use of NVivo for data coding and management. Propositions generated at the end of the analyses were further compared with the extant literature, and practical implications based on the data and literature were suggested in order to improve I-PKS. Findings from this research conclude that OC, trust, and KSM contribute to inter-project knowledge sharing, and suggest the existence of relationships between these factors. 
In view of that, this research identified the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and knowledge sharing. Furthermore, this research demonstrated that characteristics of culture and trust interact to reinforce preferences for mechanisms of knowledge sharing. This means that cultures that facilitate characteristics of the Clan type are more likely to result in trusting relationships, and hence are more likely to use organic sources of knowledge for both tacit and explicit knowledge exchange. In contrast, cultures that are empirically driven, based on control, efficiency, and measures (characteristics of the Hierarchy and Market types), display a tendency to develop trust primarily in the ability of non-organic sources, and therefore use these sources to share mainly explicit knowledge. This thesis contributes to the project management literature by providing a more integrative view of I-PKS, bringing the factors of OC, trust and KSM into the picture. A further contribution is related to the use of collaborative tools as a substitute for static LL databases and as a facilitator of tacit KS between geographically dispersed projects. This research adds to the literature on OC by providing rich empirical evidence of the relationships between OC and the willingness to share knowledge, and by providing empirical evidence that OC has an effect on trust; in doing so this research extends the theoretical propositions outlined by previous research. This study also extends the research on trust by identifying the relationships between different trust dimensions, suggesting that integrity trust reinforces the relationship between ability trust and KS. Finally, this research provides some directions for future studies.
Abstract:
IEEE 802.11 based wireless local area networks (WLANs) are being increasingly deployed for soft real-time control applications. However, they do not provide quality-of-service (QoS) differentiation to meet the requirements of periodic real-time traffic flows, a unique feature of real-time control systems. This problem becomes evident particularly when the network is under congested conditions. Addressing this problem, a media access control (MAC) scheme, QoS-dif, is proposed in this paper to enable QoS differentiation in IEEE 802.11 networks for different types of periodic real-time traffic flows. It extends the IEEE 802.11e Enhanced Distributed Channel Access (EDCA) by introducing a QoS differentiation method to deal with different types of periodic traffic that have different QoS requirements for real-time control applications. The effectiveness of the proposed QoS-dif scheme is demonstrated through comparisons with the IEEE 802.11e EDCA mechanism.
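The flavour of EDCA-style differentiation for periodic flows can be illustrated with a toy mapping. All parameter values below are invented, not QoS-dif's: the sketch only shows the mechanism EDCA-like schemes rely on, where traffic with tighter deadlines is assigned an access category with a smaller contention window and therefore wins channel access earlier on average.

```python
# Hedged sketch (values invented): map a periodic flow's period/deadline to
# an EDCA-like access category; tighter deadlines get smaller contention
# windows, i.e. statistically earlier channel access.

def assign_category(period_ms):
    """Shorter-period (tighter-deadline) traffic gets higher priority."""
    if period_ms <= 10:
        return {"ac": "AC_VO-like", "cw_min": 3, "cw_max": 7}
    if period_ms <= 100:
        return {"ac": "AC_VI-like", "cw_min": 7, "cw_max": 15}
    return {"ac": "best-effort", "cw_min": 15, "cw_max": 1023}

# A 5 ms control loop contends with a smaller window than a 50 ms one.
assert assign_category(5)["cw_min"] < assign_category(50)["cw_min"]
assert assign_category(500)["ac"] == "best-effort"
```

QoS-dif's contribution, per the abstract, is differentiating among several classes of periodic real-time traffic rather than among EDCA's four generic categories; this sketch only conveys the underlying contention-window lever.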