966 results for Distributed processing
Abstract:
WorldFIP is standardised as European Norm EN 50170 - General Purpose Field Communication System. Field communication systems (fieldbuses) started to be widely used as the communication support for distributed computer-controlled systems (DCCS), and are being used in all sorts of process control and manufacturing applications within different types of industries. There are several advantages in using fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems. Some of these advantages are economic (cable savings) but, more importantly, fieldbuses allow increased decentralisation and distribution of processing power over the field. Typically, DCCS have real-time requirements that must be fulfilled: process data must be transferred between network computing nodes within a maximum admissible time span. WorldFIP provides interesting mechanisms to schedule data transfers, explicitly distinguishing two types of traffic: periodic and aperiodic. In this paper we describe how WorldFIP handles these two types of traffic and, more importantly, we provide a comprehensive analysis for guaranteeing the real-time requirements of both. A major contribution is made in the analysis of the worst-case response time of aperiodic transfer requests.
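For intuition only (the paper's actual analysis is not reproduced in this abstract): in a WorldFIP-like scheme where each elementary cycle of length $E$ reserves a window of length $W$ for aperiodic exchanges, a crude first-come-first-served bound on the response time $R$ of an aperiodic request needing transmission time $C$, queued behind a backlog of total length $B$, would take the form

$$ R \;\le\; \left\lceil \frac{B + C}{W} \right\rceil \cdot E. $$

All symbols here are illustrative placeholders, not the paper's notation.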
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within process control applications. Therefore, they constitute the foundation upon which real-time distributed computer-controlled systems can be implemented. P-NET is a fieldbus communication standard which uses a virtual token-passing medium-access-control mechanism. In this paper, pre-run-time schedulability conditions for supporting real-time traffic with P-NET networks are established. Essentially, formulae to evaluate the upper bound of the end-to-end communication delay for P-NET messages are provided. Using this upper bound, a feasibility test is then provided to check the timing requirements for accessing remote process variables. This paper also shows how P-NET network segmentation can significantly reduce the end-to-end communication delays for messages with stringent timing requirements.
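As a hedged illustration of the kind of bound such formulae express (not the paper's actual results): in a virtual token-passing network with $n$ masters, where each master may hold the bus for at most $H$ per token visit and token handover costs at most $\tau$, the worst-case medium-access delay $D$ seen by any one master is on the order of

$$ D \;\le\; (n - 1)\,(H + \tau). $$

The symbols $n$, $H$ and $\tau$ are illustrative assumptions, not the paper's notation.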
Abstract:
In the past few years, a significant amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, none of these efforts addresses the problem of timeliness evaluation at a holistic level. This paper describes a research framework embracing this objective. It is advocated that simulation models can be a powerful tool, not only for timeliness evaluation, but also for introducing less pessimistic assumptions into analytical response-time approaches, which are most often afflicted with simplifications that lead to overly pessimistic assumptions and, therefore, misleading results. To this end, we address a few inter-linked research topics with the purpose of setting a framework for developing tools suitable for extracting temporal properties of commercial-off-the-shelf (COTS) factory-floor communication systems.
Abstract:
It is generally challenging to determine the end-to-end delays of applications when maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
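To make the Lagrangian-duality idea concrete, here is a minimal, self-contained Python sketch of dual decomposition on a toy resource-sharing problem; the weights, capacity and log-utility below are hypothetical stand-ins and do not reflect the paper's actual formulation:

```python
# Toy dual-decomposition sketch (illustrative only; not the paper's formulation).
# Each "task" i picks a share x_i of a scarce resource (think: slack that an
# intermediate deadline hands to one pipeline stage), maximizing sum of
# w_i*log(x_i) subject to sum(x_i) <= C. A shared dual price lam is updated
# by subgradient ascent; each task then reacts independently -- the
# "distributed" part.

weights = [1.0, 2.0, 4.0]   # hypothetical per-task QoS weights
C = 10.0                    # hypothetical shared capacity (e.g. total slack)
lam = 1.0                   # dual price on the coupling constraint
alpha = 0.05                # subgradient step size

for _ in range(2000):
    # Each task solves argmax_x w*log(x) - lam*x locally, giving x = w/lam.
    x = [w / lam for w in weights]
    # Price update driven by the coupling constraint sum(x) <= C.
    lam = max(1e-9, lam + alpha * (sum(x) - C))

print([round(v, 3) for v in x])   # shares proportional to weights
print(round(sum(x), 3))           # ~= C at the optimum
```

The property illustrated is that the coupling constraint is priced by a single dual variable, after which each task optimizes independently; this is what enables a distributed solution.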
Abstract:
Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in coronary arteries are one direct risk factor. These can be assessed by the calcium score (CS) application, available via a computed tomography (CT) scan, which gives an accurate indication of the development of the disease. However, the ionising radiation applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose, and to explain the flow of procedures used to quantify CAD. The main differences in the clinical results, when automated or semi-automated post-processing is used, will be shown, and the epidemiology, imaging, risk factors and prognosis of the disease described. The software steps and the values that allow the risk of developing CAD to be predicted will be presented. A 64-row multidetector CT scanner with dual source and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced. Two measurements were obtained in each of the three experimental protocols (64, 128, 256 mAs). Considerable changes appeared between the CS values relating to the protocol variation. The predefined standard protocol provided the lowest dose of radiation (0.43 mGy). This study found that the variation in the radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
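For readers unfamiliar with the Agatston method mentioned above, the following minimal Python sketch shows its standard weighting rule (per-lesion area times a density factor derived from peak attenuation); real CS software additionally handles lesion segmentation, slice spacing and scanner calibration, all omitted here:

```python
# Minimal sketch of the standard Agatston weighting rule (illustrative only).

def density_factor(peak_hu: float) -> int:
    """Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0  # below the 130 HU calcium threshold: not scored

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu), one entry per lesion per slice."""
    return sum(area * density_factor(peak) for area, peak in lesions)

# Hypothetical example: three calcified plaques.
print(agatston_score([(4.0, 150), (2.5, 320), (7.0, 450)]))  # 4*1 + 2.5*3 + 7*4 = 39.5
```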
Abstract:
In this paper, we analyse the ability of the P-NET [1] fieldbus to cope with the timing requirements of a Distributed Computer Control System (DCCS), where messages associated with discrete events should be made available within a maximum bounded time. The main objective of this work is to analyse how the network access and queueing delays, imposed by P-NET's virtual token Medium Access Control (MAC) mechanism, affect the real-time behaviour of the supported DCCS.
Resumo:
Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reaction to events must be done in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a global feasible QoS level while allowing individual nodes to autonomously adapt under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes to act in a greedy fashion as this adversely impacts on the provided service levels at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism where coordination tasks need to be time dependent. As the reported results demonstrate, it requires less messages to be exchanged and it is faster to achieve a globally acceptable near-optimal solution than other available approaches.
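Purely as a toy illustration of feedback-based coordination (the paper's actual protocol is not described in this abstract), the Python sketch below has ring-connected nodes settle on a common service level: each node is damped toward what its own capacity and its neighbours currently support, so greedy proposals die out and the group converges near the highest level every node can sustain. Capacities and the gain are hypothetical:

```python
# Toy feedback coordination on a ring of nodes (illustrative only).
capacities = [0.9, 0.7, 0.8, 0.6, 0.85]   # hypothetical per-node QoS limits
levels = list(capacities)                  # greedy initial proposals

for _ in range(50):
    new = []
    for i, lvl in enumerate(levels):
        left, right = levels[i - 1], levels[(i + 1) % len(levels)]
        target = min(capacities[i], left, right)  # negative feedback: don't outrun neighbours
        new.append(lvl + 0.5 * (target - lvl))    # reinforce the shared sustainable target
    levels = new

print([round(l, 3) for l in levels])  # settles near 0.6, the globally sustainable level
```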
Abstract:
This paper presents a distributed predictive control methodology for indoor thermal comfort that optimizes the consumption of a limited shared energy resource using an integrated demand-side management approach involving a power price auction and an appliance-load allocation scheme. The control objective for each subsystem (house or building) is to minimize the energy cost while maintaining the indoor temperature within comfort limits. In a distributed, coordinated multi-agent ecosystem, each house or building control agent achieves its objectives while sharing the available energy, through the introduction of particular coupling constraints in their underlying optimization problems. Coordination is maintained by a daily green-energy auction, bringing in a demand-side management approach. The implemented distributed MPC algorithm is also described and validated with simulation studies.
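As a minimal sketch of how such a coupling constraint acts (illustrative only; the paper's MPC formulation, auction mechanism and building models are not reproduced here), the Python snippet below computes each house's locally desired heating power from a hypothetical first-order thermal model and then scales all demands so the shared budget is respected:

```python
# Single-step sketch of energy coupling across houses (illustrative only).
a, b = 0.95, 0.3            # hypothetical 1st-order thermal model: T' = a*T + b*u
T = [18.0, 19.5, 17.0]      # current indoor temperatures (deg C)
T_ref = 21.0                # lower comfort bound
U_max = 6.0                 # shared energy budget for this step

# Local (uncoordinated) demand: power that would reach T_ref in one step.
demand = [max(0.0, (T_ref - a * Ti) / b) for Ti in T]

# Coupling constraint: scale demands down if the shared budget is exceeded.
scale = min(1.0, U_max / sum(demand)) if sum(demand) > 0 else 1.0
u = [d * scale for d in demand]

print([round(v, 2) for v in u], "total:", round(sum(u), 2))  # respects U_max
```

Proportional scaling is a crude stand-in for the coordinated optimization; it merely shows the role the coupling constraint plays.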
Abstract:
This work focuses on highly dynamic distributed systems with Quality of Service (QoS) constraints (most importantly, real-time constraints). In such systems, real-time applications may benefit from code-offloading techniques, so that parts of an application can be offloaded and executed, as services, by neighbouring nodes willing to cooperate in such computations. These applications explicitly state their QoS requirements, which are translated into resource requirements in order to evaluate the feasibility of accepting other applications in the system.
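A minimal admission-test sketch of the feasibility idea (the framework's actual QoS-to-resource translation is not given in this abstract; the utilisation figures are hypothetical):

```python
# A node accepts an offloaded service only if total CPU utilisation stays <= 1.
def admit(current_utils, new_util):
    return sum(current_utils) + new_util <= 1.0

node_load = [0.35, 0.25]          # utilisations of services already accepted
print(admit(node_load, 0.3))      # True: 0.9 <= 1.0
print(admit(node_load, 0.5))      # False: would overload the node
```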
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach. Not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve considerably higher system availability than static offline decisions when low replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinating the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the needed interactions among nodes and converges to a globally acceptable solution faster than a traditional centralised approach.
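For flavour only (the paper's heuristics are not reproduced in this abstract), here is a greedy Python sketch of significance-driven replica budgeting; the component names, significance scores and budget are all hypothetical:

```python
# Greedy sketch: more significant components receive a passive replica first,
# until the replication budget imposed by resource/cost limits runs out.
components = {"ctrl": 0.9, "logger": 0.2, "ui": 0.4, "sensor_fusion": 0.7}
budget = 3  # hypothetical total number of passive replicas allowed

replicas = {}
for name in sorted(components, key=components.get, reverse=True):
    if budget == 0:
        break
    replicas[name] = 1   # one passive replica for the most significant components
    budget -= 1

print(replicas)  # {'ctrl': 1, 'sensor_fusion': 1, 'ui': 1}
```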
Abstract:
As the complexity of embedded systems increases, multiple services have to compete for the limited resources of a single device. This situation is particularly critical for small embedded devices used in consumer electronics, telecommunications, industrial automation, or automotive systems. In fact, in order to satisfy constraints related to weight, space, and energy consumption, these systems are typically built using microprocessors with lower processing power and limited resources. The CooperatES framework has recently been proposed to tackle these challenges, allowing resource-constrained devices to collectively execute services with their neighbours in order to fulfil the complex Quality of Service (QoS) constraints imposed by users and applications. In order to demonstrate the framework's concepts, a prototype is being implemented on the Android platform. This paper discusses key challenges that must be addressed and possible directions for incorporating the desired real-time behaviour in Android.
Abstract:
A hexagonal wireless sensor network refers to a network topology in which a subset of nodes have six peer neighbors. These nodes form a backbone for multi-hop communications. In previous work, we proposed the use of the hexagonal topology in wireless sensor networks and discussed its properties in relation to real-time (bounded-latency) multi-hop communications in large-scale deployments. In that work, we did not consider the problem of forming the hexagonal topology in practice, which is the subject of this research. In this paper, we present a decentralized algorithm that forms the hexagonal topology backbone in an arbitrary but sufficiently dense network deployment. We implemented a prototype of our algorithm in NesC for TinyOS-based platforms. We present data from field tests of our implementation, collected using a deployment of fifty wireless sensor nodes.
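Purely to illustrate the geometric idea (this is not the paper's algorithm, which the abstract does not detail), the Python sketch below shows one local decision a node might make: picking, among its neighbors, the best match for each of the six ideal directions spaced 60 degrees apart at a target hop distance d. Positions and the cost function are hypothetical:

```python
import math

def pick_hex_neighbours(neighbours, d=10.0):
    """neighbours: list of (node_id, dx, dy) relative positions."""
    chosen = {}
    for k in range(6):
        ideal = k * math.pi / 3  # 0, 60, 120, ... degrees
        def cost(n):
            _, dx, dy = n
            bearing = math.atan2(dy, dx)
            # angular mismatch, wrapped to [-pi, pi], plus range mismatch
            dang = abs((bearing - ideal + math.pi) % (2 * math.pi) - math.pi)
            return dang + abs(math.hypot(dx, dy) - d) / d
        chosen[k] = min(neighbours, key=cost)[0]
    return chosen

print(pick_hex_neighbours([(1, 9, 1), (2, 5, 8), (3, -4, 9), (4, -10, 0),
                           (5, -5, -9), (6, 4, -9)]))  # one node per sector
```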
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality of service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, and this makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion, we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps to trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee sub-optimal QoS that remains feasible even in the presence of random failures.
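As a purely illustrative reading of the probabilistic flavour involved (the abstract does not state the formal definitions): if an allocation stays schedulable under any pattern of at most $K$ subtask failures, and the $n$ subtasks fail independently with probability $p$, then the guarantee is violated only when more than $K$ failures occur, an event of probability

$$ \Pr[\text{more than } K \text{ failures}] \;=\; \sum_{j=K+1}^{n} \binom{n}{j} p^{j} (1-p)^{\,n-j}, $$

which shrinks as $K$ grows, at the cost of a more conservative (lower-QoS) allocation. The independence assumption and notation here are illustrative, not the paper's.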
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach. Not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast-convergence protocol that reduces the complexity of the needed interactions among nodes until a new collective global service solution is determined.
Abstract:
Cooperating objects (COs) is a recently coined term used to signify the convergence of classical embedded computer systems, wireless sensor networks, and robotics and control. We present the essential elements of a reference architecture for scalable data processing under the CO paradigm.