7 results for Arquitectura de computadores
in Repositório Institucional da Universidade de Aveiro - Portugal
Abstract:
Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution for dynamically implementing evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To address this gap, we propose an architecture, herein referred to as the Dynamic Access Control Architecture (DACA). DACA allows FGACM to be built and updated dynamically at runtime in accordance with the established fine-grained access control policies (FGACP). DACA explores and leverages Call Level Interface (CLI) features to implement FGACM on business tiers; among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different CLI access modes are wrapped by typed objects driven by FGACM, which are built and updated at runtime. Programmers forgo the traditional CLI access modes and instead use the ones implemented and updated dynamically. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC) (the business-tier component responsible for implementing FGACM), and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or access control policy, thereby promoting its applicability to a wide range of different situations. To validate DACA, a solution based on Java, Java Database Connectivity (JDBC) and SQL Server was devised and implemented. Two evaluations were carried out: the first evaluates DACA's capability to implement and update FGACM dynamically at runtime, and the second assesses DACA's performance against a standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
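As a hedged illustration of the wrapping idea described above, the sketch below shows how a typed business-tier object could expose only the data access a fine-grained policy permits, delegating to JDBC internally. The class, table and policy names are hypothetical, not taken from the dissertation.

```java
import java.sql.*;
import java.util.Set;

// Hypothetical sketch: a typed business-tier object that wraps a JDBC
// access mode and exposes only what the current fine-grained policy allows.
public class PolicyDrivenOrderView {
    private final Connection conn;
    private final Set<String> readableColumns; // supplied by a policy server

    public PolicyDrivenOrderView(Connection conn, Set<String> readableColumns) {
        this.conn = conn;
        this.readableColumns = readableColumns;
    }

    // Reads a column value only if the active policy grants access to it.
    public Object read(int orderId, String column) throws SQLException {
        if (!readableColumns.contains(column)) {
            throw new SecurityException("Column not permitted by current policy: " + column);
        }
        // The column name was validated against the policy whitelist above,
        // so it is safe to interpolate into the statement text.
        String sql = "SELECT " + column + " FROM Orders WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getObject(1) : null;
            }
        }
    }
}
```

When the policy server pushes a new policy, the business tier would rebuild such objects with a different column whitelist, so callers never touch raw JDBC access modes directly.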
Abstract:
The main objective of the work presented in this dissertation was the design, modelling and development of a middleware platform allowing the integration of information systems at all their levels (data, logic and presentation), forming a federation of distributed and eclectic digital libraries. To this end, the various approaches to modelling and organizing digital libraries were studied, as were the support systems and technologies available when the work began. Recognizing that many gaps remained in this domain, namely in the interoperation of heterogeneous systems and in the integration of metadata semantics, a research and development effort was undertaken to propose solutions for filling those gaps. Two technologies, XML and Dublin Core, thus serve as the basis for all the remaining technologies used for interoperability and integration. Building on these base technologies, simple yet efficient means of storing, indexing and querying information were studied and developed, while preserving independence from the major database vendors, which by themselves do not solve some of the most critical problems in digital libraries research.
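As a hedged sketch of the kind of Dublin Core metadata record such a federation could exchange, the example below builds a minimal record with Java's built-in DOM API; the field values are invented for illustration.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.*;

// Hypothetical sketch: building a minimal Dublin Core record as XML,
// the kind of metadata a digital library federation might exchange.
public class DublinCoreRecord {
    private static final String DC_NS = "http://purl.org/dc/elements/1.1/";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element record = doc.createElement("record");
        doc.appendChild(record);

        addElement(doc, record, "title", "Sample dissertation title");
        addElement(doc, record, "creator", "Jane Doe");
        addElement(doc, record, "date", "2004");
        addElement(doc, record, "format", "application/pdf");

        // Serialize the DOM tree to XML text on standard output.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(System.out));
    }

    private static void addElement(Document doc, Element parent,
                                   String name, String value) {
        Element e = doc.createElementNS(DC_NS, "dc:" + name);
        e.setTextContent(value);
        parent.appendChild(e);
    }
}
```

Because Dublin Core fixes a small, well-known element set, records like this can be indexed and queried without depending on any particular database vendor's schema tooling.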
Abstract:
In an information-driven society, where the volume and value of produced and consumed data assume growing importance, digital libraries play a particularly important role. This work analyzes the limitations of current digital library management systems and the opportunities brought by recent distributed computing models. The result of this work is the implementation of the University of Aveiro's integrated system for digital libraries and archives. It concludes by analyzing the system in production and proposing a new service-oriented digital library architecture supported by a peer-to-peer infrastructure.
Abstract:
The electronic storage of medical patient data is becoming a daily experience in most practices and hospitals worldwide. However, much of the available data is in free-form text, a convenient way of expressing concepts and events, but especially challenging if one wants to perform automatic searches, summarization or statistical analysis. Information Extraction can relieve some of these problems by offering a semantically informed interpretation and abstraction of the texts. MedInX, the Medical Information eXtraction system presented in this document, is the first information extraction system developed to process textual clinical discharge records written in Portuguese. The main goal of the system is to improve access to the information locked up in unstructured text, and, consequently, the efficiency of the health care process, by allowing faster and more reliable access to quality health information for both patients and health professionals. MedInX components are based on Natural Language Processing principles and provide several mechanisms to read, process and utilize external resources, such as terminologies and ontologies, in the process of automatically mapping free-text reports onto a structured representation. The flexible and scalable architecture of the system also allowed its application to the task of Named Entity Recognition in a shared evaluation contest focused on Portuguese general-domain free-form texts. The evaluation of the system on a set of authentic hospital discharge letters indicates that it performs with 95% F-measure on the task of entity recognition and 95% precision on the task of relation extraction. Example applications demonstrating the use of MedInX capabilities in real hospital settings are also presented in this document. These applications were designed to answer common clinical problems related to the automatic coding of diagnoses and other health-related conditions described in the documents, according to the international classification systems ICD-9-CM and ICF. The automatic review of the content and completeness of the documents is an example of another developed application, named the MedInX Clinical Audit system.
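The abstract does not detail MedInX's internal pipeline; as a hedged sketch of the general technique of dictionary-driven entity recognition against a terminology, the example below maps text spans to classification codes. The terms and codes shown are illustrative only.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: dictionary-driven entity recognition, the general
// technique behind mapping free-text clinical reports onto a terminology.
// The terminology entries and codes below are invented for illustration.
public class TerminologyMatcher {
    private final Map<String, String> termToCode = new HashMap<>();

    public TerminologyMatcher() {
        termToCode.put("pneumonia", "ICD-9-CM 486");
        termToCode.put("diabetes mellitus", "ICD-9-CM 250");
    }

    // Returns the code of every terminology entry found in the text.
    public List<String> annotate(String text) {
        List<String> found = new ArrayList<>();
        String lower = text.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> entry : termToCode.entrySet()) {
            // Word-boundary match so a term does not match inside other words.
            Pattern p = Pattern.compile("\\b" + Pattern.quote(entry.getKey()) + "\\b");
            if (p.matcher(lower).find()) {
                found.add(entry.getValue());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        String letter = "Patient admitted with pneumonia; history of diabetes mellitus.";
        System.out.println(new TerminologyMatcher().annotate(letter));
    }
}
```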
Abstract:
In modern society, communications and digital transactions are becoming the norm rather than the exception. As we allow networked computing devices into our everyday actions, we build a digital lifestyle where networks and devices enrich our interactions. However, as we move our information towards a connected digital environment, privacy becomes extremely important, as most of our personal information can be found in the network. This is especially relevant as we design and adopt next-generation networks that provide ubiquitous access to services and content, increasing the impact and pervasiveness of existing networks. The environments that provide widespread connectivity and services usually rely on network protocols with few privacy considerations, compromising user privacy. The presented work focuses on the network aspects of privacy, considering how network protocols threaten user privacy, especially in next-generation network scenarios. We target the identifiers that are present in each network protocol and support its designed function. By studying how network identifiers can compromise user privacy, we explore how these threats can stem both from the identifier itself and from relationships established between several protocol identifiers. Following the study focused on identifiers, we show that privacy in the network can be explored along two dimensions: a vertical dimension that establishes privacy relationships across several layers and protocols, reaching the user, and a horizontal dimension that highlights the threats exposed by individual protocols, usually confined to a single layer. With these concepts, we outline an integrated perspective on privacy in the network, embracing both vertical and horizontal interactions of privacy. This approach enables the discussion of several mechanisms to address privacy threats at individual layers, leading to architectural instantiations focused on user privacy. We also show how the different dimensions of privacy can provide insight into the relationships that exist in a layered network stack, providing a potential path towards designing and implementing future privacy-aware network architectures.
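As a hedged illustration of the vertical dimension described above, the sketch below models how an observer who correlates identifiers seen at different protocol layers in the same traffic flow can link them into a single user profile. All identifier values and class names are invented for illustration.

```java
import java.util.*;

// Hypothetical sketch of the "vertical" privacy threat: correlating
// identifiers observed at different protocol layers until they reach
// a user-level identity. All values below are invented.
public class IdentifierLinker {
    // layer name -> identifier observed in the same traffic flow
    private final Map<String, String> observed = new LinkedHashMap<>();

    public void observe(String layer, String identifier) {
        observed.put(layer, identifier);
    }

    // Once identifiers co-occur in one flow, they become linkable:
    // learning any one of them reveals the whole chain.
    public Map<String, String> linkedProfile() {
        return Collections.unmodifiableMap(observed);
    }

    public static void main(String[] args) {
        IdentifierLinker linker = new IdentifierLinker();
        linker.observe("link (MAC)", "aa:bb:cc:dd:ee:ff");
        linker.observe("network (IP)", "192.0.2.17");
        linker.observe("application (login)", "alice@example.com");
        System.out.println(linker.linkedProfile());
    }
}
```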
Abstract:
When developing software for autonomous mobile robots, one inevitably has to tackle some kind of perception. Moreover, when dealing with agents that possess some level of reasoning for executing their actions, there is the need to model the environment and the robot's internal state in a way that represents the scenario in which the robot operates. Carried out within the ATRI group, part of the IEETA research unit at Aveiro University, this work uses two of the group's projects as test beds, particularly in the scenario of robotic soccer with real robots. With the main objective of developing algorithms for sensor and information fusion that could be used effectively on these teams, several state-of-the-art approaches were studied, implemented and adapted to each of the robot types. Within the MSL RoboCup team CAMBADA, the main focus was the perception of the ball and obstacles, with the creation of models capable of providing extended information so that the reasoning of the robot can be ever more effective. To achieve this, several methodologies were analyzed, implemented, compared and improved. Concerning the ball, an analysis of filtering methodologies for stabilizing its position and estimating its velocity was performed. Also, with the goalkeeper in mind, work was done to provide it with information about aerial balls. As for obstacles, a new definition of the way they are perceived by the vision system and of the type of information provided was created, as well as a methodology for identifying which of the obstacles are teammates. A tracking algorithm was also developed, which ultimately assigns each obstacle a unique identifier. Associated with the improvement of obstacle perception, a new reactive obstacle avoidance algorithm was created. In the context of the SPL RoboCup team Portuguese Team, besides the inevitable adaptation of many of the algorithms already developed for sensor and information fusion, and considering that the team was recently created, the objective was to create a sustainable software architecture that could be the base for future modular development. The software architecture created is based on a series of different processes and the means of communication among them. All processes were created or adapted for the new architecture, and a base set of roles and behaviors was defined during this work to achieve a functional base framework. In terms of perception, the main focus was to define a projection model and camera pose extraction that could provide information in metric coordinates. The second main objective was to adapt the CAMBADA localization algorithm to work on the NAO robots, considering all the limitations they present when compared to the MSL team, especially in terms of computational resources. A set of support tools was developed or improved in order to support testing and development in both teams. In general, the work developed during this thesis improved the performance of the teams during play, as well as the effectiveness of the development team during the development and testing phases.
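The abstract does not name the specific filter used; as a hedged sketch of the general technique for stabilizing the ball's position and estimating its velocity, the example below implements a one-dimensional constant-velocity Kalman filter. All noise parameters are invented for illustration.

```java
// Hypothetical sketch: a one-dimensional constant-velocity Kalman filter,
// the general technique for stabilizing a ball's position estimate and
// deriving its velocity from noisy vision observations.
public class BallFilter1D {
    private double pos = 0.0, vel = 0.0;            // state estimate
    private double pPP = 1.0, pPV = 0.0, pVV = 1.0; // covariance entries
    private final double q = 0.01;  // process noise (acceleration uncertainty)
    private final double r = 0.05;  // measurement noise (vision jitter)

    // Predict the state forward by dt seconds under constant velocity.
    public void predict(double dt) {
        pos += vel * dt;
        // P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        double nPP = pPP + dt * (2 * pPV + dt * pVV) + q;
        double nPV = pPV + dt * pVV;
        double nVV = pVV + q;
        pPP = nPP; pPV = nPV; pVV = nVV;
    }

    // Fuse one position measurement from the vision system.
    public void update(double measuredPos) {
        double s = pPP + r;                 // innovation covariance
        double kP = pPP / s, kV = pPV / s;  // Kalman gains
        double innovation = measuredPos - pos;
        pos += kP * innovation;
        vel += kV * innovation;
        // Covariance update: P = (I - K H) P, with H = [1, 0]
        double nPP = (1 - kP) * pPP;
        double nPV = (1 - kP) * pPV;
        double nVV = pVV - kV * pPV;
        pPP = nPP; pPV = nPV; pVV = nVV;
    }

    public double position() { return pos; }
    public double velocity() { return vel; }
}
```

Calling predict() every control cycle and update() whenever the vision system reports the ball yields a smoothed position and a velocity estimate as a by-product, which is the kind of extended ball information the abstract refers to.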
Abstract:
The main motivation for the work presented here began with previously conducted experiments with a programming concept at the time named "Macro". These experiments led to the conviction that it would be possible to build an engine control system from scratch that could eliminate many of the current problems of engine management systems in a direct and intrinsic way. It was also hoped that it would minimize the full range of software and hardware needed to build a final and fully functional system. Initially, this work makes a comprehensive survey of the state of the art in the specific area of automotive tool software and automotive ECUs and their corresponding hardware. Problems arising from such software are identified, and it becomes clear that practically all of them stem directly or indirectly from the continued, comprehensive use of extremely long and complex "tool chains". Similarly, on the hardware side, it is argued that the problems stem from the extreme complexity and inter-dependencies inside processor architectures. The conclusions are presented through an extensive list of "pitfalls", thoroughly enumerated, identified and characterized. Solutions are also proposed for the various current issues, together with strategies for their implementation. All of this culminates in a "proof-of-concept" system called "ECU2010". The central element of this system is the aforementioned "Macro" concept: a graphical block representing one of the many operations required in an automotive system, providing arithmetic, logic, filtering, integration and multiplexing functions, among others. The end result of the proposed work is a single, fully integrated tool enabling the development and management of the entire system in one simple visual interface. Part of the presented result relies on a hardware platform fully adapted to the software, enabling high flexibility and scalability, and using exactly the same technology for the ECU, data logger and peripherals alike. Current systems follow a mostly evolutionary path, allowing only the online calibration of parameters, but never the online alteration of the automotive functionality algorithms themselves. By contrast, the system developed and described in this thesis had the advantage of following a "clean-slate" approach, whereby everything could be rethought globally. In the end, out of all the system characteristics, "LIVE-Prototyping" is the most relevant feature, allowing the adjustment of automotive algorithms (e.g., injection, ignition, lambda control) 100% online, keeping the engine constantly running, without ever having to stop or reboot to make such changes. This consequently eliminates the "turnaround delay" typically present in current automotive systems, thereby enhancing their efficiency and handling.
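The dissertation's actual Macro implementation is not described in this abstract; as a hedged sketch of the idea, the example below reduces a Macro to an operation block whose behavior can be hot-swapped while the control loop keeps running, mirroring the "LIVE-Prototyping" feature. All names are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.DoubleBinaryOperator;

// Hypothetical sketch of the "Macro" idea: a graphical block reduced to
// its essence, i.e. an operation over signal inputs whose implementation
// can be swapped at runtime without stopping the control loop
// ("LIVE-Prototyping").
public class Macro {
    private final AtomicReference<DoubleBinaryOperator> operation;

    public Macro(DoubleBinaryOperator initial) {
        this.operation = new AtomicReference<>(initial);
    }

    // Executed every control cycle while the engine keeps running.
    public double evaluate(double inputA, double inputB) {
        return operation.get().applyAsDouble(inputA, inputB);
    }

    // Hot-swaps the block's behavior online, with no stop or reboot.
    public void replaceOnline(DoubleBinaryOperator newOperation) {
        operation.set(newOperation);
    }

    public static void main(String[] args) {
        // Start with a simple sum block combining two sensor signals.
        Macro block = new Macro((a, b) -> a + b);
        System.out.println(block.evaluate(2.0, 3.0)); // 5.0

        // Live adjustment: swap in a weighted blend while the loop runs.
        block.replaceOnline((a, b) -> 0.7 * a + 0.3 * b);
        System.out.println(block.evaluate(2.0, 3.0)); // 2.3
    }
}
```

Using an atomic reference means a running control loop always sees either the old or the new operation, never a half-updated one, which is the property an online-alterable ECU would need.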