873 results for Computer Security, Access Control, Distributed Computing, Object Oriented Systems
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within process control applications. They therefore constitute the foundation upon which real-time distributed computer-controlled systems can be implemented. P-NET is a fieldbus communication standard, which uses a virtual token-passing medium-access-control mechanism. In this paper, pre-run-time schedulability conditions for supporting real-time traffic with P-NET networks are established. Essentially, formulae to evaluate the upper bound of the end-to-end communication delay of P-NET messages are provided. Using this upper bound, a feasibility test is then provided to check the timing requirements for accessing remote process variables. This paper also shows how P-NET network segmentation can significantly reduce the end-to-end communication delays for messages with stringent timing requirements.
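As a rough illustration of the shape such a schedulability test takes (the notation below is assumed for this sketch and does not reproduce the paper's actual formulae), the end-to-end delay bound of a message stream can be written as the sum of its queuing, medium-access and transmission delays, and feasibility requires that this bound not exceed the deadline:

```latex
% Assumed notation: R_i = end-to-end delay bound of message stream i,
% Q_i = worst-case queuing delay, V_i = worst-case virtual-token access delay,
% C_i = frame transmission time, D_i = deadline on accessing the remote variable.
R_i \le Q_i + V_i + C_i, \qquad
\text{feasibility: } R_i \le D_i \quad \text{for every stream } i.
```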
Abstract:
In this paper, we analyse the ability of the P-NET [1] fieldbus to cope with the timing requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events should be made available within a bounded maximum time. The main objective of this work is to analyse how the network access and queueing delays, imposed by P-NET's virtual token Medium Access Control (MAC) mechanism, affect the real-time behaviour of the supported DCCS.
Abstract:
Ubiquitous computing is an attractive research field of the past and present decade. It is concerned with the unobtrusive support of people in their everyday tasks by computers. This support is made possible by the ubiquity of computers, which spontaneously join into distributed communication networks in order to exchange and process information. Ambient intelligence is an application of ubiquitous computing and a strategic research direction of the European Union's Information Society Technology programme. The goal of ambient intelligence is a more comfortable and safer life. Distributed communication networks for ubiquitous computing are characterized by the heterogeneity of the computers involved, which range from tiny devices embedded in objects of everyday use to powerful mainframes. The computers connect spontaneously via wireless network technologies such as wireless local area networks (WLAN), Bluetooth, or UMTS. This heterogeneity complicates the development and construction of distributed communication networks. Middleware is a software technology for reducing complexity by abstracting it into a homogeneous layer. Middleware offers a uniform view of the resources, functionalities, and computers it abstracts. Distributed communication networks for ubiquitous computing are characterized by spontaneous connections between computers. Classical middleware assumes that computers are permanently connected to one another. The concept of service-oriented architecture enables the development of middleware that also supports spontaneous connections between computers. The functionality of the middleware is realized by services, which are independent software units. The Wireless World Research Forum describes services that future middleware should contain. These services are hosted by an execution environment. However, there are as yet no definitions of how such an execution environment should be structured and what range of functions it must provide. This thesis contributes to aspects of middleware development for distributed communication networks in ubiquitous computing. The focus is on middleware and foundational technologies. The contributions are presented as concepts and ideas for middleware development. They cover service discovery, service updating, and contracts between services. They are provided within a framework optimized for middleware development. This framework, called Framework for Applications in Mobile Environments (FAME²), includes guidelines, a definition of an execution environment, and support for various access control mechanisms to protect middleware against unauthorized use. The capabilities of the FAME² execution environment include:
• minimal resource usage, so that it can be used even on resource-constrained devices such as mobile phones and tiny embedded computers
• support for adapting the middleware by changing the contained services while the middleware is running
• an open interface for using practically any existing service discovery solution
• the ability to update services at runtime, enabling corrective, optimizing, and adaptive maintenance of services
An accompanying contribution is the Extensible Constraint Framework (ECF), which makes Design by Contract (DbC) usable within FAME². DbC is a technique for formulating contracts between services and thereby increasing software quality. ECF allows such contracts to be negotiated and optimized.
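To illustrate the Design by Contract idea that ECF builds on, here is a minimal sketch in Python; the decorator, the example discovery operation, and the registry contents are illustrative assumptions, not part of FAME² or ECF:

```python
# Minimal Design-by-Contract sketch: a contract attaches a precondition and a
# postcondition to a service operation and checks them at call time.
def contract(pre, post):
    def decorate(operation):
        def wrapped(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            result = operation(*args, **kwargs)
            assert post(result), "postcondition violated"
            return result
        return wrapped
    return decorate

# Hypothetical discovery service: callers must pass a non-empty service name,
# and the operation promises to return a (possibly empty) list of endpoints.
@contract(pre=lambda name: isinstance(name, str) and name != "",
          post=lambda endpoints: isinstance(endpoints, list))
def find_service(name):
    registry = {"printing": ["tcp://10.0.0.5:9100"]}
    return registry.get(name, [])

print(find_service("printing"))   # ['tcp://10.0.0.5:9100']
```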
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems among tools, data flow and management. The approach is intended to assist both control and process engineers in their tasks. The use of AI technologies to diagnose and perform control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems (CASSD) framework. In this sense, some basic facilities are required to be available in this proposed framework: ·
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories, an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
Abstract:
This paper presents a technique to share the data stored in an object-oriented database intended for design environments. The technique shares data between two related databases, called the Original and Product databases, and is composed of three processes: data separation, evolution and integration. Whenever a block of data needs to be shared, it is spread into both databases, resulting in one block in the Original database and another in the Product database, with special links between them controlled by the Object Manager. These blocks do not need to be kept identical during the evolution phase of the sharing process. Six types of links were defined, and by choosing one the designer controls the evolution and reintegration of the block in both databases. The process uses the composite object concept as the unit of control. The concepts presented can be applied to any data model with support for composite objects.
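A minimal sketch of the data structure this technique suggests (the class and the link-type names below are placeholders; the abstract does not name the six link types defined in the paper):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class LinkType(Enum):
    # Placeholder names: the paper defines six link types that govern how a
    # shared block may evolve and be reintegrated; they are not named here.
    TYPE_1 = auto(); TYPE_2 = auto(); TYPE_3 = auto()
    TYPE_4 = auto(); TYPE_5 = auto(); TYPE_6 = auto()

@dataclass
class SharedBlock:
    """A composite object spread into the Original and Product databases."""
    block_id: str
    link: LinkType                      # chosen by the designer at separation time
    original_version: dict = field(default_factory=dict)
    product_version: dict = field(default_factory=dict)

    def separate(self, data: dict) -> None:
        # Data separation: the same block is copied into both databases.
        self.original_version = dict(data)
        self.product_version = dict(data)

    def evolve_product(self, changes: dict) -> None:
        # Evolution: the two copies need not stay identical.
        self.product_version.update(changes)
```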
Abstract:
This article presents the implementation of a distributed virtual reality system through the integration of services offered by the CORBA platform (Common Object Request Broker Architecture) with WorldToolkit, Sense8's environment for developing real-time 3D graphics applications. The application developed to validate this integration is a virtual city, with an emphasis on its roadways, vehicles (movable objects) and buildings (immovable objects). In this virtual world several users can interact, each one controlling his or her own car. Since the modelling of the application took into account the criteria and principles of Transport Engineering, the aim is to use it in the planning, design and construction of vehicle roadways. The system was structured according to the client/server approach, using multicast communication among the participating nodes. The chosen CORBA implementation was Iona's ORBIX software. The performance results obtained are presented and discussed at the end.
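The CORBA/WorldToolkit services themselves are not shown here, but the multicast update pattern mentioned above can be sketched as follows (a minimal Python illustration; the group address, port, and message format are assumptions, not the system's actual protocol):

```python
import json
import socket
import struct

# Hypothetical multicast group/port for car state updates.
GROUP, PORT = "239.1.1.1", 5000

def send_update(car_id, x, y, heading):
    """Broadcast one car's state to every participating node via UDP multicast."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    message = json.dumps({"car": car_id, "x": x, "y": y, "heading": heading})
    sock.sendto(message.encode(), (GROUP, PORT))

def listen_for_updates():
    """Receive state updates from the other nodes and apply them to local objects."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, _ = sock.recvfrom(1024)
        yield json.loads(data)
```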
Abstract:
Current scientific applications have been producing large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have been considering techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. The approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
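A toy sketch of the series-classification-then-prediction idea (the property test, the two models, and the access trace below are illustrative assumptions, not the classification criteria or modeling techniques evaluated in the paper):

```python
import statistics

def classify(series):
    """Crude property check: is the access pattern roughly stationary or trending?"""
    half = len(series) // 2
    drift = statistics.mean(series[half:]) - statistics.mean(series[:half])
    return "trending" if abs(drift) > statistics.stdev(series) else "stationary"

def predict_next(series):
    """Select a model according to the detected property and predict the next value."""
    if classify(series) == "stationary":
        return statistics.mean(series)          # mean model
    step = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + step                    # linear-drift model

# Observed series of block offsets read by the application (hypothetical trace).
offsets = [0, 4, 8, 12, 16, 20, 24, 28]
print(predict_next(offsets))   # 32.0 -> prefetch the block at this offset
```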
Abstract:
The development of cloud computing services is speeding up the rate at which organizations outsource their computational services or sell their idle computational resources. Even though migrating to the cloud remains a tempting trend from a financial perspective, there are several other aspects that must be taken into account by companies before they decide to do so. One of the most important aspects is security: while some cloud computing security issues are inherited from the solutions adopted to create such services, many new security questions that are particular to these solutions also arise, including those related to how the services are organized and which kind of service/data can be placed in the cloud. Aiming to give a better understanding of this complex scenario, in this article we identify and classify the main security concerns and solutions in cloud computing, and propose a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology.
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been undergoing an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers have to face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point to provide a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective from the development of intelligent software systems toward general-purpose software development. Using the expertise gained while constructing this background, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect the system files and other sensitive user files from unauthorized accesses, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. The security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly. The integrity of the data is protected indirectly, by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, together with the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system. The file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected. This enables a relatively smooth adoption. This fact, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this. Beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
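A minimal sketch of a situation-based rule check in the spirit of the file system firewall (the rule syntax, attribute names, and example rules are assumptions made for illustration; they are not the prototype's actual interface):

```python
from datetime import datetime

# Hypothetical rule set: each rule is a predicate (a "situation") over attributes
# of the subject, the object and the system, plus an action, checked in order.
RULES = [
    # Deny writes to files under /etc outside working hours, whoever asks.
    (lambda s, o, env: o["path"].startswith("/etc")
        and s["operation"] == "write"
        and not (9 <= env["hour"] < 18), "deny"),
    # Allow owners to access their own files.
    (lambda s, o, env: s["uid"] == o["owner_uid"], "allow"),
]

def check_access(subject, obj, system=None, default="deny"):
    """Return the action of the first rule whose situation (predicate) matches."""
    system = system or {"hour": datetime.now().hour}
    for predicate, action in RULES:
        if predicate(subject, obj, system):
            return action
    return default

print(check_access({"uid": 1000, "operation": "write"},
                   {"path": "/home/alice/notes", "owner_uid": 1000}))   # allow
```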
Abstract:
As a common reference for many in-development standards and execution frameworks, special attention is being paid to Service-Oriented Architectures. SOA modeling, however, is an area in which consensus has not been achieved. Currently, standardization organizations are defining proposals to offer a solution to this problem. Nevertheless, until very recently, non-functional aspects of services have not been considered in standardization processes. In particular, there is a lack of a design solution that permits independent development of the functional and non-functional concerns of SOAs, allowing each concern to be addressed in a convenient manner in the early stages of development, in a way that could guarantee the quality of this type of system. This paper, leveraging previous work, presents an approach to integrate security-related non-functional aspects (such as confidentiality, integrity, and access control) in the development of services.
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution, while at the same time decreasing size and power consumption. The use of Field Programmable Gate Arrays (FPGAs) provides specific reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability and their superb performance in the implementation of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. An application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over the existing interfaces. It is optimizable and parallelizable, and it supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. The analysis and high-level design of a system is presented that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use to allow uniform access to heterogeneous data sources, such as semantic databases, relational databases, web sites, ASCII files and others, via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.