854 results for Distributed system architecture
Abstract:
This paper explores the role of information and communication technologies in managing risk and early-discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or of special conditions such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in patients' homes with follow-up from a primary care centre. The evolution of this model follows a clear trend: reducing the duration and the number of patient visits to health centres and shifting tasks, as far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, thus permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system in which software agents execute and interact from their remote locations; second, the use of distributed decision making in multiagent systems as a means to integrate remote evidence and knowledge obtained from data collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow the patient's location and some information about his/her illness to be transmitted to the hospital or care centre.
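The abstract does not detail how remote evidence is combined, so the following is only a minimal sketch of the idea of fusing reports from distributed device agents into one decision at the care centre; all names (Evidence, fuse_risk, the thresholds) are hypothetical.

```python
# Minimal sketch (not the paper's implementation): fusing evidence reported by
# distributed device agents into a single risk estimate at the care centre.
from dataclasses import dataclass

@dataclass
class Evidence:
    device_id: str      # e.g. a wearable monitor carried by the patient
    risk_score: float   # local estimate in [0, 1] computed on the device
    confidence: float   # how much the agent trusts its own estimate

def fuse_risk(reports: list[Evidence]) -> float:
    """Confidence-weighted average of the risk scores sent by remote agents."""
    total = sum(r.confidence for r in reports)
    if total == 0:
        return 0.0
    return sum(r.risk_score * r.confidence for r in reports) / total

reports = [Evidence("ecg-01", 0.7, 0.9), Evidence("spo2-02", 0.4, 0.6)]
if fuse_risk(reports) > 0.6:          # threshold chosen for illustration only
    print("notify care centre with patient location and alert")
```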
Abstract:
The General Data Protection Regulation (GDPR) has been designed to help promote a view in favor of the interests of individuals instead of large corporations. However, there is a need for more dedicated technologies that can help companies comply with the GDPR while enabling people to exercise their rights. We argue that such a dedicated solution must address two main issues: the need for more transparency towards individuals regarding the management of their personal information, and their often hindered ability to access personal data and make it interoperable, so that exercising one's rights becomes straightforward. We aim to provide a system that helps push personal data management towards the individual's control, i.e., a personal information management system (PIMS). By using distributed storage and decentralized computing networks to control online services, the management of users' personal information can be shifted towards those directly concerned, i.e., the data subjects. The use of Distributed Ledger Technologies (DLTs) and Decentralized File Storage (DFS) as an implementation of decentralized systems is of paramount importance in this case. The structure of this dissertation follows an incremental approach to describing a set of decentralized systems and models that revolve around personal data and their subjects. Each chapter builds upon the previous one and discusses the technical implementation of a system and its relation to the corresponding regulations. We refer to the EU regulatory framework, including the GDPR, eIDAS, and the Data Governance Act, to derive the functional and non-functional drivers of our final system architecture. In our PIMS design, personal data are kept in a Personal Data Space (PDS), consisting of encrypted personal data referring to the subject and stored in a DFS. On top of that, a network of authorization servers acts as a data intermediary, providing access to potential data recipients through smart contracts.
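A minimal sketch of the authorization flow described above, assuming hypothetical names: the PDS holds only pointers to ciphertext in a DFS, and an authorization server consults a ledger (standing in for the smart contract state) before resolving a pointer for a recipient. The real system would use an actual DLT and DFS; this only illustrates the control flow.

```python
# Sketch only, not the dissertation's implementation. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PersonalDataSpace:
    subject: str
    cid_by_label: dict[str, str]           # label -> DFS content identifier (ciphertext)

@dataclass
class AccessLedger:                         # stands in for the smart contract state
    grants: set = field(default_factory=set)

    def grant(self, subject: str, recipient: str, label: str) -> None:
        self.grants.add((subject, recipient, label))

    def is_allowed(self, subject: str, recipient: str, label: str) -> bool:
        return (subject, recipient, label) in self.grants

def resolve(pds: PersonalDataSpace, ledger: AccessLedger, recipient: str, label: str):
    """Authorization server: return the DFS pointer only if the ledger records a grant."""
    if ledger.is_allowed(pds.subject, recipient, label):
        return pds.cid_by_label[label]      # recipient still needs the decryption key
    return None

pds = PersonalDataSpace("alice", {"health-record": "cid-health-record"})  # placeholder CID
ledger = AccessLedger()
ledger.grant("alice", "clinic-x", "health-record")
print(resolve(pds, ledger, "clinic-x", "health-record"))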
Abstract:
The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the economy of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
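The thesis targets native backends such as RDMA, DPDK or XDP behind a uniform API; the sketch below only illustrates the decoupling idea in Python, with hypothetical class names and an in-process fallback backend standing in for a real accelerated transport.

```python
# Illustrative sketch of an acceleration-agnostic I/O API; names are hypothetical.
import asyncio
from abc import ABC, abstractmethod

class AcceleratedChannel(ABC):
    """What the application sees: buffers in, completions out, backend hidden."""
    @abstractmethod
    async def send(self, buf: bytes) -> None: ...
    @abstractmethod
    async def recv(self) -> bytes: ...

class LoopbackChannel(AcceleratedChannel):
    """Fallback backend; an RDMA or DPDK backend would expose the same interface."""
    def __init__(self) -> None:
        self._queue = asyncio.Queue()
    async def send(self, buf: bytes) -> None:
        await self._queue.put(buf)
    async def recv(self) -> bytes:
        return await self._queue.get()

async def main() -> None:
    chan: AcceleratedChannel = LoopbackChannel()   # chosen by the platform, not the app
    await chan.send(b"sensor reading")
    print(await chan.recv())

asyncio.run(main())
```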
Abstract:
Nowadays, there is a trend towards reorganizing industry into geographically dispersed systems that carry out their activities autonomously. These systems must maintain coordinated relationships among themselves in order to assure the expected performance of the overall system. Thus, a manufacturing system based on web services is proposed to assure an effective orchestration of services in order to produce final products. In addition, it considers special functions, such as teleoperation and remote monitoring, users' online requests, among others. Considering the proposed system as a discrete event system (DES), techniques derived from Petri nets (PN), including the Production Flow Schema (PFS), can be used in a PFS/PN approach for modeling. The system is approached at different levels of abstraction: a conceptual model, obtained by applying the PFS technique, and a functional model, obtained by applying PN. Finally, a particular example of the proposed system is presented.
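The sketch below illustrates the basic place/transition firing rule that such PFS/PN functional models rely on; the places, marking and the orchestration step are hypothetical examples, not the paper's models.

```python
# Minimal Petri net sketch: a transition fires when every input place has enough tokens.
def enabled(marking: dict, pre: dict) -> bool:
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places and produce tokens in output places."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical orchestration step: a machining web service consumes a raw part and a
# free machine, and produces a finished part while releasing the machine.
marking = {"raw_part": 1, "machine_free": 1}
marking = fire(marking,
               pre={"raw_part": 1, "machine_free": 1},
               post={"finished_part": 1, "machine_free": 1})
print(marking)
```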
Abstract:
In this paper we present a mobile recommendation and planning system, named PSiS Mobile. It is designed to provide effective support during a tourist visit through context-aware information and recommendations about points of interest, exploiting tourist preferences and context. Designing a tool like this brings several challenges that must be addressed. We discuss how these challenges have been overcome, and present both the overall system architecture, since this mobile application extends the PSiS project website, and the architecture of the mobile application itself.
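The abstract does not give PSiS Mobile's recommendation formula, so the following is only a hypothetical illustration of context-aware scoring of points of interest, blending tourist preferences, proximity and an opening-hours context signal; all names and weights are assumptions.

```python
# Illustrative context-aware POI scoring sketch; not the PSiS Mobile algorithm.
from math import hypot

def score_poi(poi, prefs, user_pos, open_now: bool) -> float:
    """Blend preference match with distance and opening-hours context."""
    pref_match = prefs.get(poi["category"], 0.0)                # 0..1
    dist = hypot(poi["x"] - user_pos[0], poi["y"] - user_pos[1])
    proximity = 1.0 / (1.0 + dist)                              # closer is better
    return (0.6 * pref_match + 0.4 * proximity) * (1.0 if open_now else 0.2)

pois = [{"name": "museum", "category": "culture", "x": 1.0, "y": 2.0},
        {"name": "park", "category": "outdoors", "x": 0.2, "y": 0.1}]
prefs = {"culture": 0.9, "outdoors": 0.5}
ranked = sorted(pois, key=lambda p: score_poi(p, prefs, (0.0, 0.0), True), reverse=True)
print([p["name"] for p in ranked])
```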
Abstract:
Project submitted for the degree of Master in Informatics and Computer Engineering.
Abstract:
Model-driven software development advocates the use of models as artifacts that participate actively in the development process. The model occupies a position at the same level as the code. This is an important approach that has received growing attention in recent times. The Object Management Group (OMG) is responsible for one of the main specifications used to define the architecture of systems developed in a model-driven fashion: the Model Driven Architecture (MDA). The projects that have emerged around modelling and domain-specific languages for the Eclipse platform are a good example of the attention given to these areas. They are projects fully open to the community, which seek to respect the standards and constitute an excellent opportunity to test and put into practice new ideas and approaches. In this dissertation, tools created within the scope of the Amalgamation Project, developed for the Eclipse platform, were used. Exploring UML and using the QVT language, an automatic process was developed to extract elements of the system architecture from the requirements definition. The requirements are represented by UML models, which are transformed in order to obtain elements for an initial approximation of the system architecture. In the end, a UML model is obtained that aggregates the components, interfaces and data types extracted from the requirements models. It is a model-driven approach that proved to be feasible, capable of offering practical results, and promising with regard to future work.
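The dissertation performs this mapping with QVT transformations over UML models; the sketch below only illustrates the mapping idea (requirement elements transformed into an initial set of components, interfaces and data types) in plain Python, with hypothetical names and a deliberately simple rule.

```python
# Illustrative requirements-to-architecture mapping sketch; not the QVT transformation.
def requirements_to_architecture(use_cases):
    """Each use case yields a candidate component exposing one provided interface."""
    components = []
    for uc in use_cases:
        components.append({
            "component": f"{uc['name']}Manager",
            "provides": f"I{uc['name']}",
            "data_types": sorted(set(uc.get("entities", []))),
        })
    return components

use_cases = [{"name": "PlaceOrder", "entities": ["Order", "Customer"]},
             {"name": "IssueInvoice", "entities": ["Invoice"]}]
for c in requirements_to_architecture(use_cases):
    print(c)
```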
Abstract:
This dissertation addresses the development of an active stereo vision system for the robotic soccer robots of the ISePorto team at ISEP, so that they can take full advantage of the rotating cameras they are equipped with. This work arises from the need to improve the robots' perception of the environment, in particular the perception of the ball when it is not on the field plane and of opposing robots. This need stems from the increase in dynamics recently observed in competitions. To this end, related work on stereo vision systems with variable baselines and rotation axes on both cameras was studied, as well as the fundamentals of stereo vision. An architecture for the active vision system was proposed so that it can be applied to any robot of the MSL (Middle Size League) team. To make the implementation of this architecture possible, a procedure was developed for the calibration and real-time determination of the extrinsic parameters of the stereo pair as a function of the angular position of the robot's rotating axes. The vision system was also given synchronization capabilities, and software-level functionalities were implemented that allow the detection of objects in the image, the matching of objects present in the images of both cameras and, consequently, the determination of the three-dimensional positions of those objects relative to the robot. The developed system was tested and validated in an MSL scenario with respect to the perception of the ball, opposing robots and field lines. The results obtained show a significant improvement, compared with the robots' current implementation, in the three-dimensional perception of the ball when it is not on the field plane and of opposing robots.
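For a rectified stereo pair, this kind of 3D perception reduces to a simple triangulation relation; the equations below are shown only for illustration and do not reproduce the dissertation's calibration model. Here $f$ is the focal length in pixels, $u_l - u_r$ the disparity of a matched object, and both the baseline $B$ and the extrinsic pose $(\mathbf{R}, \mathbf{t})$ are taken to be functions of the measured axis angles $(\theta_l, \theta_r)$:
\[
Z = \frac{f\,B(\theta_l,\theta_r)}{u_l - u_r},
\qquad
\mathbf{X}_{\text{robot}} = \mathbf{R}(\theta_l,\theta_r)\,\mathbf{X}_{\text{cam}} + \mathbf{t}(\theta_l,\theta_r).
\]
Re-estimating $B$, $\mathbf{R}$ and $\mathbf{t}$ in real time from the axis encoders is what allows the depth $Z$ and the object position in the robot frame to remain valid while the cameras rotate.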
Abstract:
Navigation and interpretation of the surrounding environment by autonomous vehicles in unstructured environments remains a major challenge today. Sebastian Thrun describes in [Thr02] that the mapping problem in robotic systems is that of acquiring a spatial model of the robot's surroundings. In this context, the integration of sensing systems into robotic platforms that allow the construction of maps of the surrounding world is extremely important. The information collected from these data can be interpreted and applied to localization, navigation and object manipulation tasks. Until very recently, most robotic systems performing mapping or Simultaneous Localization And Mapping (SLAM) tasks used devices such as laser rangefinders and stereo cameras. Besides being expensive, these devices provide only two-dimensional information, collected through 2D cross-sections in the case of rangefinders. The paradigm of this type of technology changed considerably with the market launch of RGB-D cameras, such as the one developed by PrimeSense, and the subsequent release of the Kinect by Microsoft for the Xbox 360 at the end of 2010. The quality of the depth sensor, given its low cost and its real-time data acquisition capability, is remarkable, making the sensor instantly popular among researchers and enthusiasts. This technological advance gave rise to several tools for development and human interaction with this type of sensor, such as the Point Cloud Library (PCL) [RC11]. This library aims to provide support for all the common building blocks that a 3D application needs, with special emphasis on the processing of n-dimensional point clouds acquired from RGB-D cameras, as well as laser scanners, Time-of-Flight cameras or stereo cameras. In this context, this dissertation evaluates and compares some of the modules and methods of the PCL library for solving problems inherent to the construction and interpretation of maps in unstructured indoor environments, using data from the Kinect. Based on this evaluation, a system architecture is proposed that systematizes the registration of point clouds, corresponding to partial views of the world, into a consistent global model. The results of the evaluation of the PCL library attest to its viability for solving the proposed problems. Proof of this viability are the practical results obtained from the implementation of the proposed system architecture, which shows interesting performance, as well as good prospects for integrating this type of concept and technology into robotic platforms developed within projects of the Laboratório de Sistemas Autónomos (LSA).
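The dissertation builds on PCL's registration methods; the sketch below only illustrates how pairwise registrations of partial views can be chained into a consistent global model, assuming the pairwise 4x4 transforms have already been estimated (e.g., by an ICP-style method). It is not the proposed architecture itself.

```python
# Sketch of chaining pairwise registrations into a global map; transforms are given.
import numpy as np

def to_homogeneous(points: np.ndarray) -> np.ndarray:
    return np.hstack([points, np.ones((points.shape[0], 1))])

def build_global_map(clouds, pairwise_transforms):
    """clouds[i] is an (N, 3) array; pairwise_transforms[i] maps cloud i+1 into frame i."""
    global_map = [clouds[0]]
    T_acc = np.eye(4)                        # pose of the current cloud in the first frame
    for cloud, T in zip(clouds[1:], pairwise_transforms):
        T_acc = T_acc @ T                    # chain pairwise registrations
        aligned = (to_homogeneous(cloud) @ T_acc.T)[:, :3]
        global_map.append(aligned)
    return np.vstack(global_map)

# Two toy "views" offset by 1 m along x; the pairwise transform undoes the offset.
view0 = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
view1 = view0 - np.array([1.0, 0.0, 0.0])
T01 = np.eye(4); T01[0, 3] = 1.0
print(build_global_map([view0, view1], [T01]).round(2))
```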
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within distributed computer-controlled systems. They therefore constitute the foundation upon which real-time applications are to be implemented. A potential leap towards the use of fieldbuses in such time-critical applications lies in the evaluation of their temporal behaviour. In the past few years several research works have addressed a number of fieldbuses. However, these have mostly focused on the message-passing mechanisms, without taking into account the communicating application tasks running in those distributed systems. The main contribution of this paper is to provide an approach for engineering real-time fieldbus systems in which the schedulability analysis of the distributed system integrates both the characteristics of the application tasks and the characteristics of the message transactions performed by these tasks. In particular, we address the case of systems where the Process-Pascal multitasking language is used to develop P-NET based distributed applications.
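As an illustration of the kind of holistic analysis involved (the paper's P-NET-specific equations are not reproduced here), a standard fixed-priority response-time recurrence for a task $\tau_i$ with worst-case execution time $C_i$, blocking term $B_i$ and higher-priority set $hp(i)$ can be extended with the worst-case latency $R^{\mathrm{msg}}_i$ of the message transactions the task performs:
\[
R_i^{(k+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j,
\qquad
R_i^{\mathrm{e2e}} = R_i + R_i^{\mathrm{msg}},
\]
where $T_j$ is the period of higher-priority task $\tau_j$; the iteration stops when $R_i^{(k+1)} = R_i^{(k)}$, and the system is deemed schedulable when $R_i^{\mathrm{e2e}} \le D_i$ for every task.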
Abstract:
Despite the steady increase in experimental deployments, most research work on WSNs has focused only on communication protocols and algorithms, with a clear lack of effective, feasible and usable system architectures integrated in a modular platform able to address both functional and non-functional requirements. In this paper, we outline EMMON [1], a full WSN-based system architecture for large-scale, dense and real-time embedded monitoring [3] applications. EMMON provides a hierarchical communication architecture together with integrated middleware and command-and-control software. We then present EM-Set, the EMMON engineering toolset, which includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was crucial for the development of EMMON, which was designed to use standard commercially available technologies while maintaining as much flexibility as possible to meet specific application requirements. Finally, the EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node testbed.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
This paper proposes a one-step decentralised coordination model based on an effective feedback mechanism that reduces the complexity of the interactions needed among interdependent nodes of a cooperative distributed system until a collective adaptation behaviour is determined. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, as this adversely impacts the service levels provided at neighbouring nodes. The reduced complexity and overhead of the proposed decentralised coordination model are validated through extensive evaluations.
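The paper's exact coordination model is not reproduced here; the sketch below is only a hypothetical illustration of feedback-driven decentralised adaptation, in which each node moves towards its own target service level (positive feedback) while being pushed back by overloaded neighbours (negative feedback). The parameters and update rule are assumptions.

```python
# Illustrative feedback-driven adaptation sketch; not the paper's model.
def adapt(levels: dict, targets: dict, neighbours: dict, alpha=0.5, beta=0.3) -> dict:
    new_levels = {}
    for node, level in levels.items():
        positive = targets[node] - level                        # pull towards desired level
        # neighbours report how far above their target they are (greedy behaviour hurts them)
        negative = sum(max(levels[n] - targets[n], 0.0) for n in neighbours[node])
        new_levels[node] = level + alpha * positive - beta * negative
    return new_levels

levels = {"a": 0.2, "b": 0.9}
targets = {"a": 0.6, "b": 0.6}
neighbours = {"a": ["b"], "b": ["a"]}
for _ in range(10):                                             # iterate until roughly settled
    levels = adapt(levels, targets, neighbours)
print({k: round(v, 2) for k, v in levels.items()})
```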
Abstract:
In this work, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of three robots that must transport a large object while avoiding collisions with either static or dynamic obstacles. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavioral variables: heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of the behavioral dynamics. For each robot, these attractors are combined into a vector field that governs its behavior. By design, the parameters are tuned so that the behavioral variables remain very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.
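A typical form of such behavioral dynamics for the heading direction $\phi$ is shown below for illustration only; the paper's exact terms and parameter values are not reproduced here. An attractor is erected at the target direction $\psi_{\mathrm{tar}}$ and repellers at the obstacle directions $\psi_{\mathrm{obs},i}$:
\[
\dot{\phi} = -\lambda_{\mathrm{tar}} \sin\!\bigl(\phi - \psi_{\mathrm{tar}}\bigr)
+ \sum_{i} \lambda_{\mathrm{obs},i}\,\bigl(\phi - \psi_{\mathrm{obs},i}\bigr)\,
\exp\!\left[-\frac{(\phi - \psi_{\mathrm{obs},i})^{2}}{2\sigma_i^{2}}\right].
\]
Choosing the relaxation rates $\lambda$ large relative to the rate at which $\psi_{\mathrm{tar}}$ and $\psi_{\mathrm{obs},i}$ change makes $\phi$ track the resulting attractor closely, which is what keeps the behavioral variables near asymptotically stable states at all times.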
Abstract:
In this paper, dynamical systems theory is used as a theoretical language and tool to design a distributed control architecture for a team of two robots that must transport a large object while avoiding collisions with obstacles (either static or dynamic). This work extends the previous work with two robots (see [1] and [5]). Here, however, we demonstrate that it is possible to simplify the architecture presented in [1] and [5] and reach an equally stable global behavior. The robots have no prior knowledge of the environment. The dynamics of behavior is defined over a state space of behavioral variables: heading direction and path velocity. Task constraints are modeled as attractors (i.e. asymptotically stable states) of a behavioral dynamics. For each robot, these attractors are combined into a vector field that governs its behavior. By design, the parameters are tuned so that the behavioral variables remain very close to the corresponding attractors. Thus the behavior of each robot is controlled by a time series of asymptotically stable states. Computer simulations support the validity of the dynamical model architecture.