868 results for pacs: distributed system software
Abstract:
The growth of the world population, particularly in emerging countries such as China and India, has proved to be an additional problem with respect to the difficulties associated with global energy consumption, since this situation unequivocally limits the access of these millions of people to the electricity needed for basic survival. One of the many ways of eliminating this need is being developed through the use of renewable resources as energy sources. Wherever in the world we may be, these energy sources are abundant, inexhaustible, and free. The problem lies in how these renewable resources are managed to meet the load demands of the installations. Hybrid systems can be used to produce energy anywhere in the world. Historically, such systems were deployed at isolated sites, but nowadays they can be connected directly to the grid, allowing the energy produced to be sold. It was in this context that this thesis was developed, with the goal of providing a software tool capable of calculating the profitability of a hybrid system, whether grid-connected or stand-alone. However, the complexity of this problem is very high, since there is a wide range of characteristics and distinct equipment that can be adopted. The application therefore had to be limited and restricted to the available data, so as to remain generic while still being of practical use. The goal of the developed tool is to present immediately the implementation costs that a hybrid system may entail, depending on only three distinct variables. The first variable takes into account the installation site of the system. The second is the type of connection (stand-alone or grid-connected) and, finally, the third is the cost of the equipment (wind, solar, and remaining components) to be introduced. Once these data are entered, the application presents estimated values for Payback and NPV (VAL).
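Payback and NPV (VAL, "Valor Atual Líquido") are standard project-appraisal figures. As a minimal sketch, assuming yearly net cash flows and a constant discount rate (the tool's actual models and inputs are not detailed in the abstract), the two estimates could be computed as follows; the investment and income figures are illustrative assumptions:

    # Minimal sketch of Payback and NPV (VAL); cash flows and discount rate
    # below are illustrative assumptions, not values from the thesis.
    def npv(rate, cash_flows):
        """Net present value: cash_flows[0] is the initial investment
        (negative), cash_flows[t] the net income of year t."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def simple_payback(cash_flows):
        """Year in which the cumulative cash flow first turns non-negative."""
        total = 0.0
        for t, cf in enumerate(cash_flows):
            total += cf
            if total >= 0:
                return t
        return None  # never recovered within the horizon

    # Hypothetical hybrid system: 20 000 EUR investment, 2 500 EUR/year income.
    flows = [-20_000] + [2_500] * 15
    print(simple_payback(flows))       # 8 (years)
    print(round(npv(0.05, flows), 2))  # NPV at a 5 % discount rate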
Abstract:
Purpose - This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose (E) for pelvis examinations using automatic exposure control (AEC) and non-AEC in a Computed Radiography (CR) system. Methods and materials - The effects of using AEC and non-AEC were determined by applying the 10 kVp rule in two experiments using an anthropomorphic pelvis phantom. Images were acquired in 10 kVp increments (60–120 kVp) for both experiments. The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs from each kVp increment was used as a baseline for the second experiment, which produced 35 images. A total of 84 images was produced, and a panel of 5 experienced observers scored the images using the two alternative forced choice (2AFC) visual grading software. PCXMC software was used to estimate E. Results - A decrease in perceptual image quality as the kVp increases was observed in both the non-AEC and AEC experiments; however, no statistically significant differences (p > 0.05) were found. Image quality scores from all observers at 10 kVp increments for all mAs values in non-AEC mode demonstrated better scores up to 90 kVp. E results show a statistically significant decrease (p = 0.000) at the 75th percentile, from 0.37 mSv at 60 kVp to 0.13 mSv at 120 kVp, when applying the 10 kVp rule in non-AEC mode. Conclusion - Using the 10 kVp rule, no significant reduction in perceptual image quality is observed when increasing kVp, whilst a marked and significant reduction in E is observed.
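The 10 kVp rule applied here pairs each 10 kVp increase with a halving of the mAs, keeping detector exposure roughly constant. A minimal sketch of the resulting technique series, with an illustrative starting value of 20 mAs (not a value taken from the study):

    # 10 kVp rule: each +10 kVp step halves the mAs to keep exposure roughly
    # constant. Starting mAs is an illustrative assumption.
    kvps = range(60, 130, 10)   # 60-120 kVp, as in the experiments
    mas0 = 20.0                 # illustrative starting mAs (not from the paper)
    for i, kvp in enumerate(kvps):
        print(f"{kvp} kVp -> {mas0 / 2 ** i:.2f} mAs")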
Abstract:
The case of desktop operating system and office suite choices, considering proprietary and open source software alternatives.
Abstract:
Future distribution systems will have to deal with intensive penetration of distributed energy resources, ensuring reliable and secure operation according to the smart grid paradigm. SCADA (Supervisory Control and Data Acquisition) is an essential infrastructure for this evolution. This paper proposes a new conceptual design of an intelligent SCADA, with a decentralized, flexible approach that is adaptive to context (context awareness). This SCADA model is used to support the energy resource management undertaken by a distribution network operator (DNO). Resource management considers all the involved costs, power flows, and electricity prices, allowing the use of network reconfiguration and load curtailment. Locational Marginal Prices (LMP) are evaluated and used in specific situations to apply Demand Response (DR) programs on a global or a local basis. The paper includes a case study using a 114-bus distribution network and load demand based on real data.
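As an illustration of the kind of decision rule this supports, the sketch below triggers Demand Response globally or locally from bus-level LMPs; the thresholds and prices are assumptions, not values from the 114-bus case study:

    # Illustrative LMP-triggered demand-response rule (values are assumptions,
    # not taken from the paper's 114-bus case study).
    lmps = {"bus_12": 48.0, "bus_37": 95.0, "bus_81": 102.0}  # EUR/MWh per bus
    GLOBAL_DR_THRESHOLD = 90.0   # average LMP above this: act globally
    LOCAL_DR_THRESHOLD = 100.0   # single-bus LMP above this: act locally

    average_lmp = sum(lmps.values()) / len(lmps)
    if average_lmp > GLOBAL_DR_THRESHOLD:
        print("apply global DR programme")
    else:
        for bus, price in lmps.items():
            if price > LOCAL_DR_THRESHOLD:
                print(f"apply local DR at {bus} (LMP {price} EUR/MWh)")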
Abstract:
In this paper, we analyse the ability of the Profibus fieldbus to cope with the real-time requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events must be made available within a bounded maximum time. Our methodology is based on knowledge of the real-time traffic characteristics, setting the network parameters so as to cope with the timing requirements. Since non-real-time traffic characteristics are usually unknown at the design stage, we consider an operational profile in which, by constraining non-real-time traffic at the application level, we ensure that the real-time requirements are met.
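The flavour of this analysis can be sketched with a simplified worst-case bound: a ready message may have to wait one full token rotation among the masters before its own station may transmit. This is an illustrative model, not the paper's exact Profibus analysis:

    # Simplified worst-case bound for a token-passing fieldbus: a ready message
    # may wait one full token rotation (every other master holding the token)
    # before its own station can send it. Illustrative model only.
    def worst_case_response(n_masters, token_hold_time, message_time):
        token_rotation = n_masters * token_hold_time
        return token_rotation + message_time

    print(worst_case_response(n_masters=5, token_hold_time=2.0,  # ms
                              message_time=0.5))                 # -> 10.5 ms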
Abstract:
Building reliable real-time applications on top of commercial off-the-shelf (COTS) components is not a straightforward task. Thus, it is essential to provide a simple and transparent programming model that abstracts programmers from the low-level implementation details of distribution and replication. However, the recent trend of incorporating pre-emptive multitasking applications in reliable real-time systems inherently increases their complexity. It is therefore important to provide a transparent programming model that enables pre-emptive multitasking applications to be implemented without having to deal simultaneously with both system requirements and distribution and replication issues. The Distributed Embedded Architecture using COTS components (DEAR-COTS) has previously been proposed as an architecture to support real-time and reliable Distributed Computer-Controlled Systems (DCCS) using COTS components. Within the DEAR-COTS architecture, the hard real-time subsystem provides a framework for the development of reliable real-time applications, which are the core of DCCS applications. This paper presents the proposed framework and demonstrates how it can be used to support the transparent replication of software components.
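As a minimal sketch of what transparent replication means for the programmer, the proxy below hides the replicas behind a single invocation and applies majority voting to their results; it illustrates the general idea only, not the DEAR-COTS mechanism itself:

    from collections import Counter

    class ReplicatedComponent:
        """Proxy that hides replication: callers invoke it like a single
        component; internally every replica runs and the majority result
        wins. (Illustrative sketch, not the DEAR-COTS implementation.)"""
        def __init__(self, replicas):
            self.replicas = replicas

        def invoke(self, *args):
            results = [replica(*args) for replica in self.replicas]
            value, votes = Counter(results).most_common(1)[0]
            if votes <= len(results) // 2:
                raise RuntimeError("no majority among replicas")
            return value

    # Three replicas of the same control function; one is faulty.
    component = ReplicatedComponent(
        [lambda x: x + 1, lambda x: x + 1, lambda x: x + 2])
    print(component.invoke(41))  # -> 42, the majority result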
Abstract:
In Distributed Computer-Controlled Systems (DCCS), both real-time and reliability requirements are of major concern. Architectures for DCCS must be designed considering the integration of the processing nodes with the underlying communication infrastructure. Such integration must be provided by appropriate software support services. In this paper, an architecture for DCCS is presented, its structure is outlined, and the services provided by the support software are described. These services are intended to guarantee the real-time and reliability requirements imposed by current and future systems.
Abstract:
In this paper, we analyse the ability of the P-NET [1] fieldbus to cope with the timing requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events should be made available within a bounded maximum time. The main objective of this work is to analyse how the network access and queueing delays, imposed by P-NET's virtual token Medium Access Control (MAC) mechanism, affect the real-time behaviour of the supported DCCS.
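A minimal sketch of how the two delay components combine, under a simplified model (not the paper's exact P-NET analysis): the access delay is the wait for the virtual token to return to this station, and the queueing delay repeats that wait for every message queued ahead:

    # Worst-case delay seen by a message at a P-NET master, split into access
    # delay (token at other stations) and queueing delay (messages ahead in
    # the local queue). Simplified illustrative model only.
    def worst_case_delay(n_masters, hold_time, queued_ahead, message_time):
        access = (n_masters - 1) * hold_time               # token visits others
        queueing = queued_ahead * (access + message_time)  # each queued msg repeats the wait
        return access + queueing + message_time

    print(worst_case_delay(n_masters=4, hold_time=1.5,
                           queued_ahead=2, message_time=0.5))  # -> 15.0 ms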
Abstract:
In recent years, Software Architecture has attracted increased attention from academia and industry as the unifying concept for structuring the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach is based on the work of other researchers and follows the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be executed efficiently. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
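As a minimal sketch of the second approach's flavour, the snippet below encodes an architecture as a multiset of terms and a reconfiguration as a rewrite rule that consumes and produces terms, CHAM-style; the encoding is illustrative, not the dissertation's actual formalism:

    from collections import Counter

    # An architecture as a multiset of terms; a reconfiguration as a rewrite
    # rule that consumes some terms and produces others (CHAM-style sketch).
    architecture = Counter({"client": 2, "server_v1": 1,
                            "conn(client,server_v1)": 2})

    def rewrite(state, consumed, produced):
        """Apply the rule only if its whole left-hand side is present."""
        if any(state[t] < n for t, n in consumed.items()):
            return None                      # rule not applicable
        new_state = state - Counter(consumed)
        new_state.update(produced)
        return new_state

    # Rule: swap server_v1 (and its connections) for server_v2 without
    # stopping the two clients, i.e. without halting the running system.
    upgraded = rewrite(architecture,
                       consumed={"server_v1": 1, "conn(client,server_v1)": 2},
                       produced={"server_v2": 1, "conn(client,server_v2)": 2})
    print(upgraded)  # clients kept; server and its connections replaced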
Abstract:
Classical lock-based concurrency control does not scale with current and foreseen multi-core architectures, opening space for alternative concurrency control mechanisms. The concept of transactions executing concurrently in isolation, with an underlying mechanism maintaining a consistent system state, has already been explored in fault-tolerant and distributed systems, and is currently being explored by transactional memory, this time to manage concurrent memory accesses. In this paper we discuss the use of Software Transactional Memory (STM) and how Ada can provide support for it. Furthermore, we draft a general programming interface to transactional memory, supporting future implementations of STM oriented to real-time systems.
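The paper drafts its interface in Ada; purely to illustrate the transactional pattern it targets, here is a toy optimistic STM in Python with version-validated read sets (an assumption-laden sketch, not the proposed interface):

    import threading

    class TVar:
        """Transactional variable: a value plus a version number."""
        def __init__(self, value):
            self.value, self.version = value, 0

    class STM:
        _commit_lock = threading.Lock()  # serialises commits (toy implementation)

        def __init__(self):
            self.reads, self.writes = {}, {}

        def read(self, tvar):
            if tvar in self.writes:          # read-your-own-writes
                return self.writes[tvar]
            self.reads.setdefault(tvar, tvar.version)
            return tvar.value

        def write(self, tvar, value):
            self.writes[tvar] = value

        def commit(self):
            with STM._commit_lock:
                # Validate: every variable read must be unchanged since read.
                if any(tvar.version != v for tvar, v in self.reads.items()):
                    return False             # conflict: caller retries
                for tvar, value in self.writes.items():
                    tvar.value, tvar.version = value, tvar.version + 1
                return True

    balance = TVar(100)
    tx = STM()
    tx.write(balance, tx.read(balance) + 25)
    print(tx.commit(), balance.value)        # True 125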
Abstract:
This work focuses on highly dynamic distributed systems with Quality of Service (QoS) constraints (most importantly, real-time constraints). To that end, real-time applications may benefit from code offloading techniques, so that parts of an application can be offloaded and executed, as services, by neighbouring nodes willing to cooperate in such computations. Applications explicitly state their QoS requirements, which are translated into resource requirements in order to evaluate the feasibility of accepting new applications into the system.
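A minimal sketch of the admission step described above: QoS levels are mapped to resource demands and checked against the node's remaining capacity. The QoS-to-resource table and the numbers are illustrative assumptions, not values from the paper:

    # Illustrative QoS-to-resource mapping and admission test.
    QOS_TO_RESOURCES = {
        "high":   {"cpu": 0.40, "memory_mb": 128, "bandwidth_kbps": 512},
        "medium": {"cpu": 0.20, "memory_mb": 64,  "bandwidth_kbps": 256},
        "low":    {"cpu": 0.10, "memory_mb": 32,  "bandwidth_kbps": 128},
    }

    def admit(qos_level, available):
        """Accept the application only if every required resource fits."""
        required = QOS_TO_RESOURCES[qos_level]
        if all(required[r] <= available[r] for r in required):
            for r in required:
                available[r] -= required[r]  # reserve the resources
            return True
        return False

    node = {"cpu": 0.50, "memory_mb": 150, "bandwidth_kbps": 600}
    print(admit("high", node))    # True  -> resources reserved
    print(admit("medium", node))  # False -> not enough memory left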
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. It provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to maintain as much flexibility as possible while meeting specific application requirements. EMMON has been validated through extensive analytical, simulation, and experimental evaluations, including a 300+ node test-bed, the largest single-site WSN test-bed in Europe.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible, and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific application requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
As the complexity of embedded systems increases, multiple services have to compete for the limited resources of a single device. This situation is particularly critical for the small embedded devices used in consumer electronics, telecommunications, industrial automation, and automotive systems. In fact, in order to satisfy constraints on weight, space, and energy consumption, these systems are typically built using microprocessors with lower processing power and limited resources. The CooperatES framework has recently been proposed to tackle these challenges, allowing resource-constrained devices to execute services collectively with their neighbours in order to fulfil the complex Quality of Service (QoS) constraints imposed by users and applications. To demonstrate the framework's concepts, a prototype is being implemented on the Android platform. This paper discusses key challenges that must be addressed and possible directions for incorporating the desired real-time behaviour into Android.
Abstract:
ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in such a system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work, we develop compositional techniques for the automated scheduling of such partitions and processes. At present, system designers schedule partitions manually, based on interactions they have with the partition vendors. This approach is not only time-consuming but can also result in underutilization of resources. In contrast, the technique proposed in this paper is a principled approach to scheduling ARINC-653 partitions, and should therefore facilitate system integration.
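As a minimal sketch of the two-level structure being scheduled, the snippet below runs partitions in fixed windows of a repeating major frame and orders each partition's processes by fixed priority; it illustrates ARINC-653-style scheduling in general, not the paper's compositional technique, and the frame and process names are assumptions:

    # Two-level ARINC-653-style schedule (illustrative): partitions get fixed
    # windows in a repeating major frame; inside its window, each partition
    # runs its own ready processes by fixed priority.
    MAJOR_FRAME = [("P1", 0, 25), ("P2", 25, 60), ("P1", 60, 75), ("P3", 75, 100)]

    PARTITION_PROCESSES = {  # (process, priority); lower number = higher priority
        "P1": [("nav_update", 1), ("nav_log", 3)],
        "P2": [("flight_ctrl", 1)],
        "P3": [("maintenance", 2)],
    }

    for partition, start, end in MAJOR_FRAME:
        procs = sorted(PARTITION_PROCESSES[partition], key=lambda p: p[1])
        order = " -> ".join(name for name, _ in procs)
        print(f"[{start:3d}-{end:3d} ms] {partition}: {order}")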