18 results for Replication banding
at Instituto Politécnico do Porto, Portugal
Abstract:
Building reliable real-time applications on top of commercial off-the-shelf (COTS) components is not a straightforward task. It is thus essential to provide a simple and transparent programming model that abstracts programmers from the low-level implementation details of distribution and replication. However, the recent trend towards incorporating pre-emptive multitasking applications into reliable real-time systems inherently increases their complexity. It is therefore important to provide a transparent programming model that enables pre-emptive multitasking applications to be implemented without having to deal simultaneously with system requirements and with distribution and replication issues. The Distributed Embedded Architecture using COTS components (DEAR-COTS) has previously been proposed as an architecture to support real-time and reliable distributed computer-controlled systems (DCCS) using COTS components. Within the DEAR-COTS architecture, the hard real-time subsystem provides a framework for the development of reliable real-time applications, which are the core of DCCS applications. This paper presents the proposed framework and demonstrates how it can be used to support the transparent replication of software components.
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate, based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve reasonably higher system availability than static offline decisions when lower replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinating the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the interactions needed among nodes and converges to a globally acceptable solution faster than a traditional centralised approach.
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate, based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast convergence protocol that reduces the complexity of the interactions needed among nodes until a new collective global service solution is determined.
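To make the flavour of such a placement heuristic concrete, the sketch below (in Python) greedily gives the most significant components a passive replica first, placing each replica on the candidate node with the most spare capacity. It is only an illustrative assumption, simplified to one replica per component; the function and parameter names (place_replicas, budget, etc.) are invented for the example and are not the authors' algorithm.

    # Hypothetical greedy sketch of significance-driven passive replica placement.
    # Component significance, node capacities and the replica budget are assumed inputs;
    # the published heuristics are more elaborate than this illustration.
    def place_replicas(components, nodes, capacity, budget):
        """components: {comp: (significance, primary_node)}
        nodes: list of node ids
        capacity: {node: free replica slots}
        budget: total number of passive replicas allowed (replication ratio)."""
        placement = {comp: [] for comp in components}
        # Most significant components get replicas first.
        ranked = sorted(components, key=lambda c: components[c][0], reverse=True)
        for comp in ranked:
            if budget == 0:
                break
            significance, primary = components[comp]
            # Candidate nodes: not the primary, not already hosting a replica, with free capacity.
            candidates = [n for n in nodes
                          if n != primary and n not in placement[comp] and capacity[n] > 0]
            if not candidates:
                continue
            # Place the replica on the candidate node with the most spare capacity.
            target = max(candidates, key=lambda n: capacity[n])
            placement[comp].append(target)
            capacity[target] -= 1
            budget -= 1
        return placement

    # Example: three components on three nodes, with a budget of two passive replicas.
    comps = {"A": (0.9, "n1"), "B": (0.6, "n2"), "C": (0.2, "n3")}
    print(place_replicas(comps, ["n1", "n2", "n3"], {"n1": 1, "n2": 1, "n3": 2}, budget=2))

With this toy input the least significant component ends up without a replica once the budget is exhausted, which is the basic trade-off the heuristics address under low replication ratios.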
Abstract:
The International Electrotechnical Commission (IEC) 61499 architecture incorporates several function blocks with which distributed control applications may be developed, and defines how these are interpreted and executed. However, due to the distributed nature of the control applications, many other issues need to be taken into account. Most of these arise from the new error model and failure modes of the distributed hardware on which the distributed application is executed, and also from the incomplete definition of the execution models in the standard. The IEC 61499 framework does not clarify how to handle the replication of software and hardware components. In this paper we propose a replication model for IEC 61499 applications and identify which mechanisms and protocols may be used to support it.
Abstract:
In-network storage of data in wireless sensor networks helps to reduce communications inside the network and to favour data aggregation. In this paper, we consider the use of n out of m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n out of m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model aimed at evaluating the probability of correct data encoding and decoding. We exploit this model and simulations to show how, in the case studies, the parameters of the n out of m codes and of the network should be configured in order to achieve correct data encoding and decoding with high probability.
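As a hedged illustration of the kind of probability model the abstract refers to (not necessarily the authors' exact formulation): with an n out of m code the data can be reconstructed from any n of the m dispersed fragments, so if each fragment is assumed to be stored and retrieved correctly with independent probability p, the probability of correct decoding is

    P_{decode} = \sum_{k=n}^{m} \binom{m}{k} \, p^{k} (1-p)^{m-k}

Choosing m, n and the network parameters that determine p so that this sum stays close to one is the kind of configuration trade-off the model and simulations explore.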
Abstract:
In this paper, we present some of the fault tolerance management mechanisms being implemented in the Multi-μ architecture, namely its support for replica non-determinism. In this architecture, fault tolerance is achieved by node active replication, with software-based replica management and transparent fault tolerance algorithms. A software layer implemented between the application and the real-time kernel, the Fault Tolerance Manager (FTManager), is responsible for the transparent incorporation of the fault tolerance mechanisms. The active replication model can be implemented either by imposing replica determinism or by keeping replica consistency at critical points, by means of interactive agreement mechanisms. One of the goals of the Multi-μ architecture is to identify such critical points, relieving the underlying system from performing the interactive agreement at every Ada dispatching point.
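As a toy illustration of keeping replica consistency at critical points (an assumed example, not the Multi-μ protocol itself): rather than enforcing identical scheduling decisions at every dispatching point, each replica proposes its locally computed value at an identified critical point and all replicas adopt a single agreed value, here the majority proposal, falling back to the median when no majority exists.

    from collections import Counter

    def interactive_agreement(proposals):
        """proposals: values independently computed by the replicas at a critical point.
        Returns the single value every replica adopts (majority, else median)."""
        value, count = Counter(proposals).most_common(1)[0]
        if count > len(proposals) // 2:
            return value
        return sorted(proposals)[len(proposals) // 2]

    # Three replicas diverged due to non-determinism; agreement restores consistency.
    print(interactive_agreement([42, 42, 43]))   # -> 42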
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault tolerance mechanisms for real-time systems, using the Ada language (Ada 95) and commercial off-the-shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance by software active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work being done and essential future developments.
Abstract:
Until recently, the great majority of courses in science and technology areas, in which lab work is a fundamental part of the learning process, could not be taught at a distance. This reality is changing with the dissemination of remote laboratories. Supported by resources based on new information and communication technologies, it is now possible to remotely control a wide variety of real laboratories. However, most of them are designed specifically for this purpose, are inflexible, and resemble the real ones only in their functionality. In this paper, an alternative remote lab infrastructure devoted to the study of electronics is presented. Its main characteristics are, from a teacher's perspective, reusability and simplicity of use, and, from a student's point of view, an exact replication of the real lab, enabling students to complement or finish at home the work started in class. The remote laboratory is integrated into the Learning Management System in use at the school and may therefore be combined with other web experiments and e-learning strategies, while safeguarding access security.
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit; otherwise, availability would be compromised. System performance is also influenced by the efficiency of the management strategies, which must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
Abstract:
The field of computer simulation has grown rapidly since its appearance and is currently one of the most widely used management science and operational research techniques. Its principle is based on replicating the operation of processes or systems over periods of time, making it an indispensable methodology for solving a wide range of real-world problems, regardless of their complexity. Among its countless application areas, in the most diverse fields, the most prominent is its use in production systems, where the range of available applications is very broad. It has been used to solve problems in production systems, since it allows companies to adjust and plan their operations and systems quickly, effectively and judiciously, enabling them to adapt rapidly to the constantly changing needs of the global economy. Simulation applications and packages have followed technological trends, and the use of object-oriented technologies in their development is evident. This study was based, in a first phase, on gathering information supporting the concepts of modelling and simulation, as well as their application to real-time production systems. It subsequently focused on the development of a prototype application for simulating manufacturing environments in real time. The tool was developed with possible pedagogical purposes and academic use in mind; it is capable of simulating a model of a production system and also provides animation. Without ruling out the possibility of integrating other modules, or even integration into other platforms, particular care was also taken to ensure that its implementation relied on object-oriented development methodologies.
Abstract:
Dynamically reconfigurable systems have benefited from a new class of FPGAs recently introduced into the market, which allow partial and dynamic reconfiguration at run-time, enabling multiple independent functions from different applications to share the same device, swapping resources as needed. When the sequence of tasks to be performed is not predictable, resource allocation decisions have to be made on-line, fragmenting the FPGA logic space. A rearrangement may be necessary to get enough contiguous space to efficiently implement incoming functions, to avoid spreading their components and, as a result, degrading their performance. This paper presents a novel active replication mechanism for configurable logic blocks (CLBs), able to implement on-line rearrangements, defragmenting the available FPGA resources without disturbing those functions that are currently running.
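As a purely illustrative sketch of the relocation idea (an assumption for illustration, not the mechanism proposed in the paper, which operates on the actual CLB configuration and routing), defragmentation by active replication can be pictured as compacting a one-dimensional array of CLB slots: the configuration of an occupied slot is replicated into a free slot, the replica is brought into step with the original, the interconnections are switched over, and only then is the original slot released, so the hosted function never stops running.

    def defragment(slots, relocate):
        """slots: list of function ids or None (free). relocate(src, dst) is assumed to
        replicate configuration and state into dst, switch routing, then free src."""
        write = 0
        for read, func in enumerate(slots):
            if func is None:
                continue
            if read != write:
                relocate(read, write)          # replicate, sync, switch over, release
                slots[write], slots[read] = func, None
            write += 1
        return slots

    def relocate(src, dst):
        # Placeholder for the replicate-then-switch steps described above.
        print(f"copy CLB {src} -> {dst}, sync state, reroute, release {src}")

    print(defragment(["A", None, "B", None, "C"], relocate))

The compaction leaves all free slots contiguous at the end of the array, which is the effect sought when making room for incoming functions.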
Abstract:
For better assessment and definition of an individual's intervention plan, the availability of valid and reliable assessment instruments for the Portuguese population is increasingly important. Objective: To translate and adapt the Trunk Impairment Scale (TIS) for the Portuguese population, in post-stroke patients, and to evaluate its psychometric properties. Methodology: The TIS was translated into Portuguese and culturally adapted for the Portuguese population. Its psychometric properties, including validity, reliability, inter-observer agreement, internal consistency, sensitivity, specificity and responsiveness, were evaluated in a population diagnosed with stroke and in a control group of healthy participants. Eighty individuals participated in this study, divided into two groups: post-stroke individuals (40) and a group without pathology (40). Participants were assessed with the Berg Balance Scale, the Functional Independence Measure, the Fugl-Meyer Physical Performance Scale and the TIS, in order to evaluate the psychometric properties of the latter. Assessments were performed by two experienced physiotherapists, and the re-test was carried out after 48 hours. Data were recorded and processed with SPSS 21.0. Results: The internal consistency of the TIS was moderate to high (Cronbach's alpha = 0.909). Regarding inter-observer reliability, the items with the lowest values were items 1 and 4 (0.759 and 0.527, respectively) and the items with the highest Kappa values were items 5 and 6 (0.830 and 0.893, respectively). Regarding criterion validity, no correlation was found with the Fugl-Meyer Physical Performance Scale, the Berg Balance Scale or the Functional Independence Measure (r = 0.166, r = 0.017 and r = -0.002, respectively). Regarding construct validity, the median value was higher for items 1 to 5, suggesting that there are differences between the post-stroke group and the healthy group (p < 0.001). For the other two items (6 and 7), no differences were found between the responses of the two groups (p > 0.001). Conclusion: The results obtained in this study suggest that the Portuguese version of the TIS presents good levels of reliability and internal consistency, and also shows good results regarding inter-observer agreement.
Abstract:
The speed at which content spreads on a web platform is highly relevant for services whose information is intended to be up to date and in real time. This Master's project presents an approach to a distributed system for collecting and disseminating results in real time across several platforms, namely mobile systems. In this context, real time is understood as a zero time difference between collection and dissemination, ignoring factors that cannot be controlled by the system, such as communication latency and processing time. The project builds on an existing architecture for processing and publishing sports results, which suffered from problems related to scalability, security, long result-delivery times and the lack of integration with other platforms. Throughout this work, factors constraining the scalability of a web application were investigated, with emphasis on implementing a solution based on replication and horizontal scalability. A solution for interoperability between heterogeneous systems and platforms was also sought, always maintaining high levels of performance and promoting the introduction of mobile platforms into the system. Among the various existing approaches to real-time communication on a web platform, a WebSocket-based implementation was adopted, which eliminates the time wasted between the collection of information and its dissemination. This project describes the implementation of the data collection API (Collector), the library for communicating with the Collector, the web application (Publisher) and its API, the library for communicating with the Publisher and, finally, the multi-platform mobile application. With these components in place, the results obtained with the new architecture were evaluated in order to assess the scalability and performance of the created solution and its adaptation to the existing system.
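As a rough, hypothetical sketch of the push model adopted in the project (the real system uses WebSocket and its own Collector and Publisher APIs, which are not reproduced here; the Python names below are invented for illustration), the snippet shows a publisher that forwards each collected result to every connected subscriber the moment it arrives, so no time is spent between collection and dissemination beyond processing and transport.

    import asyncio

    class Publisher:
        """Minimal fan-out hub: every subscriber gets its own queue and is pushed
        each result as soon as the collector delivers it (no client polling)."""
        def __init__(self):
            self.subscribers = set()

        def subscribe(self):
            queue = asyncio.Queue()
            self.subscribers.add(queue)
            return queue

        async def publish(self, result):
            for queue in self.subscribers:
                await queue.put(result)        # push to all connected platforms

    async def collector(hub):
        # Stand-in for the real data-collection API: emits three sample scores.
        for score in ("1-0", "1-1", "2-1"):
            await hub.publish(score)
            await asyncio.sleep(0.1)

    async def consume(name, queue, expected):
        for _ in range(expected):
            print(name, "received", await queue.get())

    async def main():
        hub = Publisher()
        web, mobile = hub.subscribe(), hub.subscribe()
        await asyncio.gather(consume("web", web, 3),
                             consume("mobile", mobile, 3),
                             collector(hub))

    asyncio.run(main())

In a WebSocket deployment the per-subscriber queue would be replaced by the open connection to each web or mobile client, but the push-on-arrival structure is the same.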
Abstract:
Iron plays a central role in host-parasite interactions, since both host and parasite need iron for survival and growth, but both are sensitive to iron-mediated toxicity. The host's iron overload is often associated with susceptibility to infection. However, it has been previously reported that iron overload prevented the growth of Leishmania major, an agent of cutaneous leishmaniasis, in BALB/c mice. In order to further clarify the impact of iron modulation on the growth of Leishmania in vivo, we studied the effects of iron supplementation or deprivation on the growth of L. infantum, the causative agent of Mediterranean visceral leishmaniasis, in the mouse model. We found that dietary iron deficiency did not affect the protozoan's growth, whereas iron overload decreased its replication in the liver and spleen of a susceptible mouse strain. The fact that the iron-induced inhibitory effect could not be seen in mice deficient in NADPH-dependent oxidase or nitric oxide synthase 2 suggests that iron eliminates L. infantum in vivo through interaction with reactive oxygen and nitrogen species. Iron overload did not significantly alter the mouse adaptive immune response against L. infantum. Furthermore, the inhibitory action of iron towards L. infantum was also observed, in a dose-dependent manner, in axenic cultures of promastigotes and amastigotes. Importantly, high iron concentrations were needed to achieve such effects. In conclusion, externally added iron synergizes with the host's oxidative defense mechanisms in eliminating L. infantum from mouse tissues. Additionally, the direct toxicity of iron against Leishmania suggests a potential use of this metal as a therapeutic tool, or the further exploration of iron's anti-parasitic mechanisms for the design of new drugs.
Abstract:
Despite the abundant literature on knowledge management, few empirical studies have explored knowledge management in connection with international assignees. This phenomenon has special relevance in the Portuguese context, since (a) there are no empirical studies on this issue involving international Portuguese companies; (b) the national business reality is still incipient as far as internationalisation is concerned; and (c) the organisational and national culture presents characteristics that are distinct from the most widely studied contexts (e.g., Asia, USA, Scandinavian countries, Spain, France, The Netherlands, Germany, England and Russia). We examine the role of expatriates in knowledge transfer and sharing within Portuguese companies with operations abroad. We focus specifically on expatriates' role in knowledge sharing in international Portuguese companies, and our findings take into account both organizational representatives' and expatriates' perspectives. Using a comparative case study approach, we examine how three main dimensions influence the role of expatriates in knowledge sharing between headquarters and their subsidiaries (types of international assignment, reasons for using expatriation, and international assignment characteristics). Data were collected using semi-structured interviews with 30 Portuguese repatriates and 14 organizational representatives from seven Portuguese companies. The findings suggest that the reasons leading Portuguese companies to expatriate employees are connected to: (1) business expansion needs; (2) control of international operations; and (3) knowledge transfer and sharing. Our study also shows that Portuguese companies use international assignments in order to respond positively to an increasingly declining domestic market in the economic areas in which they operate. Evidence also reveals that expatriation is seen as a strategy to fulfill main organizational objectives through expatriates (e.g., business internationalization, improvement of the coordination and control of the units/subsidiaries abroad, replication of aspects of the home base, and development and incorporation of new organizational techniques and processes). We also conclude that Portuguese companies have developed an International Human Resources Management strategy based on an ethnocentric approach, typically associated with companies in early stages of internationalization, i.e., authority and decision making are centred in the home base. Expatriates have a central role in transmitting culture and technical knowledge from the company's headquarters to its branches. Based on the findings, the article discusses in detail the main theoretical and managerial implications. Suggestions for further research are also presented.