908 results for Real systems
Abstract:
The growing proliferation of management system standards (MSSs), and their individualized implementation, is a real problem faced by organizations. On the other hand, MSSs are aimed at improving the efficiency and effectiveness of organizational responses in order to satisfy the requirements, needs and expectations of the stakeholders. Each organization has its own identity and this is an issue that cannot be neglected; hence, two possible approaches can be considered: either continue with the implementation of individualized management systems (MSs), or integrate the several MSSs and related MSs into an integrated management system (IMS). In this context, organizations are therefore faced with a dilemma as a result of the increasing proliferation and diversity of MSSs. This paper draws on the knowledge gained through a case study conducted in a Portuguese company and unveils some of the advantages and disadvantages of integration. A methodology is also proposed and presented to support organizations in developing and structuring the integration process of their individualized MSs, and consequently to minimize problems that generate inefficiencies, value destruction and loss of competitiveness. The obtained results provide relevant information that can support the Top Management decision in solving that dilemma and consequently promote a successful integration, including better control of the business risks associated with MSS requirements and enhanced sustainable performance, considering the context in which organizations operate.
Abstract:
The study of electricity market operation has been gaining increasing importance in recent years, as a result of the new challenges that the restructuring of electricity markets has produced. This restructuring increased the competitiveness of the market, but also its complexity. The growing complexity and unpredictability of the market's evolution consequently increase the difficulty of decision making. Therefore, the intervening entities are forced to rethink their behaviour and market strategies. Currently, a large amount of information concerning electricity markets is available. These data, covering numerous aspects of electricity market operation, are accessible free of charge and are essential for understanding and suitably modelling electricity markets. This paper proposes a tool that is able to handle, store and dynamically update such data. The development of the proposed tool is expected to be of great importance in improving the comprehension of electricity markets and the interactions among the involved entities.
Abstract:
This abstract presents an energy management system included in a SCADA system of an intelligent home. The system controls the home energy resources according to the players' definitions (electricity consumption and comfort levels), the variation of electricity prices in real time, and the DR events proposed by the aggregators.
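The abstract does not detail the control logic; as an illustration only, a minimal Python sketch of such a rule-based home energy manager might look as follows. All names, thresholds and the load model are hypothetical and not taken from the described SCADA system.

# Hedged sketch of a home energy manager rule: shed flexible loads when the
# real-time price or a DR event requires it, respecting user comfort settings.
# Names, thresholds and the load model are illustrative assumptions.
def manage_loads(loads, price_eur_kwh, dr_event_active, price_limit=0.25):
    actions = {}
    for load in loads:
        must_reduce = dr_event_active or price_eur_kwh > price_limit
        if must_reduce and load["flexible"]:
            # Never go below the user's comfort minimum.
            actions[load["name"]] = max(load["min_comfort_kw"], load["current_kw"] * 0.5)
        else:
            actions[load["name"]] = load["current_kw"]
    return actions

loads = [{"name": "hvac", "flexible": True, "current_kw": 2.0, "min_comfort_kw": 0.8},
         {"name": "fridge", "flexible": False, "current_kw": 0.2, "min_comfort_kw": 0.2}]
print(manage_loads(loads, price_eur_kwh=0.30, dr_event_active=False))  # hvac halved to 1.0 kW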
Abstract:
The increasing importance given by environmental policies to the dissemination and use of wind power has led to its fast and large-scale integration in power systems. In most cases, this integration has been done in an intensive way, causing several impacts and challenges in the operation and planning of current and future power systems. One of these challenges is dealing with system conditions in which the available wind power is higher than the system demand. This is one of the possible applications of demand response, which is a very promising resource in the context of competitive environments that integrate ever larger amounts of distributed energy resources, as well as new players. The proposed methodology aims at maximizing social welfare in a smart grid operated by a virtual power player that manages the available energy resources. When facing excessive wind power generation availability, real-time pricing is applied in order to induce an increase of consumption so that wind curtailment is minimized. The proposed method is especially useful when the actual and day-ahead wind forecasts differ significantly. It has been computationally implemented in the GAMS optimization tool, and its application is illustrated in this paper using a real 937-bus distribution network with 20310 consumers and 548 distributed generators, some of them with must-take contracts.
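The abstract does not give the optimization model itself; a minimal Python sketch of the underlying idea (maximize social welfare while penalizing wind curtailment, so that flexible consumption is raised when wind is abundant) could look as follows. All coefficients and limits are illustrative assumptions, and the sketch uses a generic LP solver rather than GAMS.

# Hedged sketch: maximize social welfare = consumer benefit - generation cost,
# penalizing wind curtailment. Coefficients and limits are illustrative only.
from scipy.optimize import linprog

benefit = [50.0, 45.0]       # benefit per MWh for two flexible loads (hypothetical)
gen_cost = 10.0              # marginal cost of the dispatchable generator (hypothetical)
curtail_penalty = 100.0      # penalty per MWh of curtailed wind (hypothetical)
wind_available = 80.0        # actual wind generation (MWh)

# Decision variables: x = [load1, load2, dispatchable_gen, wind_curtailed]
c = [-benefit[0], -benefit[1], gen_cost, curtail_penalty]  # linprog minimizes

# Power balance: load1 + load2 = dispatchable_gen + (wind_available - wind_curtailed)
A_eq = [[1.0, 1.0, -1.0, 1.0]]
b_eq = [wind_available]
bounds = [(0, 60), (0, 50), (0, 40), (0, wind_available)]  # illustrative limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # consumption rises to absorb the wind; curtailment only if loads saturate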
Abstract:
Recent changes in power systems, mainly due to the substantial increase of distributed generation and to operation in competitive environments, have created new challenges for operation and planning. In this context, Virtual Power Players (VPP) can aggregate a diversity of players, namely generators and consumers, and a diversity of energy resources, including electricity generation based on several technologies, storage and demand response. Demand response markets have been implemented in recent years, and several implementation models have been considered. An important characteristic of a demand response program is its trigger criterion. A program whose event trigger depends on the Locational Marginal Price (LMP), used by the New England Independent System Operator (ISO-NE), inspired the present paper. This paper proposes a methodology to support the management of VPP demand response programs. The proposed method has been computationally implemented and its application is illustrated using a 32-bus network with intensive use of distributed generation. Results concerning the evaluation of the impact of using demand response events are also presented.
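As an illustration of the trigger mechanism described above, the following hedged Python sketch fires a demand response event once the locational marginal price crosses a threshold. The threshold, consumer records and contracted reductions are hypothetical and are not taken from the paper or from ISO-NE rules.

# Hedged sketch of an LMP-triggered demand response event check.
# Threshold and reduction values are illustrative assumptions.
def trigger_dr_event(lmp_usd_per_mwh, threshold=100.0):
    """Return True when the locational marginal price crosses the trigger level."""
    return lmp_usd_per_mwh >= threshold

def requested_reduction(consumers, lmp):
    """Ask each enrolled consumer for its contracted reduction when the event fires."""
    if not trigger_dr_event(lmp):
        return {}
    return {c["id"]: c["contracted_reduction_kw"] for c in consumers}

consumers = [{"id": "C1", "contracted_reduction_kw": 15.0},
             {"id": "C2", "contracted_reduction_kw": 40.0}]
print(requested_reduction(consumers, lmp=120.0))  # {'C1': 15.0, 'C2': 40.0}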
Abstract:
This paper presents a methodology supported on the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study that includes real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of electrical component failure probabilities in order to re-establish the service. The results reflect an interesting potential of this approach and encourage further research on the topic.
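The paper's fuzzy model is not reproduced in the abstract; the following toy Python sketch only illustrates the general idea of mapping an uncertain climate variable to a failure probability through fuzzy membership functions. Membership breakpoints and rule weights are hypothetical.

# Hedged sketch: a toy fuzzy estimate of equipment failure probability from
# climate severity. Membership functions and rule weights are hypothetical.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def failure_probability(wind_speed_kmh):
    mild = tri(wind_speed_kmh, -1, 0, 50)
    strong = tri(wind_speed_kmh, 30, 70, 110)
    severe = tri(wind_speed_kmh, 90, 130, 200)
    # Weighted-average (Sugeno-style) defuzzification with illustrative weights.
    weights = {"mild": 0.01, "strong": 0.05, "severe": 0.30}
    num = mild * weights["mild"] + strong * weights["strong"] + severe * weights["severe"]
    den = mild + strong + severe
    return num / den if den > 0 else 0.0

print(failure_probability(95.0))  # higher wind speeds push the estimate towards 0.30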
Abstract:
Presently, power system operation produces huge volumes of data that are still treated in a very limited way. Knowledge discovery and machine learning can make use of these data, resulting in relevant knowledge with a very positive impact. In the context of competitive electricity markets these data are of even higher value, making clear the trend towards a more relevant application of data mining techniques in power systems. This paper presents two cases based on real data, showing the importance of the use of data mining for supporting demand response and for supporting player strategic behavior.
Abstract:
A flow injection analysis (FIA) system having a chlormequat selective electrode is proposed. Several electrodes with poly(vinyl chloride) based membranes were constructed for this purpose. Comparative characterization suggested the use of a membrane with chlormequat tetraphenylborate and dibutylphthalate. On a single-line FIA set-up, operating with 1x10-2 mol L-1 ionic strength and pH 6.3, calibration curves presented slopes of 53.6±0.4 mV decade-1 within 5.0x10-6 and 1.0x10-3 mol L-1, and squared correlation coefficients >0.9953. The detection limit was 2.2x10-6 mol L-1 and the repeatability was ±0.68 mV (0.7%). A dual-channel FIA manifold was therefore constructed, enabling automatic attainment of the previous ionic strength and pH conditions and thus eliminating sample preparation steps. Slopes of 45.5±0.2 mV decade-1 along a concentration range of 8.0x10-6 to 1.0x10-3 mol L-1, with a repeatability of ±0.4 mV (0.69%), were obtained. Analyses of real samples were performed, and recovery results ranged from 96.6 to 101.1%.
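As a purely illustrative aside, a log-linear (Nernstian-type) calibration such as the one reported here can be inverted to recover concentration from a measured potential; the short Python sketch below shows the arithmetic. The intercept E0 is hypothetical; only the slope mirrors the value reported in the abstract.

# Hedged sketch: converting a measured electrode potential into a chlormequat
# concentration from a log-linear calibration, E = E0 + S * log10(C).
SLOPE_MV_PER_DECADE = 53.6   # slope reported in the abstract
E0_MV = 310.0                # hypothetical intercept, obtained from calibration

def concentration_mol_per_l(potential_mv):
    # E = E0 + S * log10(C)  =>  C = 10 ** ((E - E0) / S)
    return 10 ** ((potential_mv - E0_MV) / SLOPE_MV_PER_DECADE)

print(concentration_mol_per_l(95.6))  # ~1.0e-4 mol L-1 with these illustrative values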
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
Field communication systems (fieldbuses) are widely used as the communication support for distributed computer-controlled systems (DCCS) within all sorts of process control and manufacturing applications. There are several advantages in the use of fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems, among which the most relevant is the decentralisation and distribution of processing power over the field. A widely used fieldbus is WorldFIP, which is normalised as European standard EN 50170. When using WorldFIP to support DCCS, an important issue is how to guarantee the timing requirements of the real-time traffic. WorldFIP has very interesting mechanisms to schedule data transfers, since it explicitly distinguishes periodic and aperiodic traffic. In this paper, we describe how WorldFIP handles these two types of traffic and, more importantly, we provide a comprehensive analysis on how to guarantee the timing requirements of the real-time traffic.
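The paper's analysis is far more detailed, but a very simple Python sketch can illustrate the starting point: checking how much bus bandwidth the periodic buffer transfers consume and how much is left for aperiodic traffic. Transfer times and periods below are hypothetical.

# Hedged sketch: a simple bandwidth check for WorldFIP-style periodic buffer
# transfers. Values are illustrative; the real analysis also covers elementary
# cycles and the aperiodic windows within them.
periodic_vars = [
    {"id": "sensor_A", "transfer_ms": 0.5, "period_ms": 10.0},
    {"id": "sensor_B", "transfer_ms": 0.5, "period_ms": 20.0},
    {"id": "actuator", "transfer_ms": 0.8, "period_ms": 50.0},
]

periodic_load = sum(v["transfer_ms"] / v["period_ms"] for v in periodic_vars)
aperiodic_share = 1.0 - periodic_load  # bandwidth left for aperiodic traffic

print(f"periodic load = {periodic_load:.2%}, left for aperiodic = {aperiodic_share:.2%}")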
Abstract:
The recent trend of chip architectures towards higher numbers of heterogeneous cores and non-uniform memory/non-coherent caches brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
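The abstract does not state the formula for the required number of versions; the following Python sketch shows one plausible bound (one snapshot per update that can commit during the longest read-only transaction, plus the version pinned at its start). It is an assumption for illustration, not the paper's result, and the timing values are hypothetical.

# Hedged sketch: one plausible way to bound how many versions a shared object
# needs so that read-only transactions always find a consistent snapshot.
# This is NOT the paper's exact formula; durations are illustrative.
import math

longest_reader_ms = 12.0          # longest read-only transaction over the object
min_update_interarrival_ms = 5.0  # shortest time between committed updates

# Each update during the reader's window may retire one snapshot, plus the
# version the reader pinned when it started.
versions_needed = math.ceil(longest_reader_ms / min_update_interarrival_ms) + 1
print(versions_needed)  # 4 with these illustrative numbers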
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command-and-control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific application requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Componentised systems, in particular those with fault confinement through address spaces, are currently emerging as a hot topic in embedded systems research. This paper extends the unified rate-based scheduling framework RBED in several dimensions to fit the requirements of such systems: we have removed the requirement that the deadline of a task is equal to its period, and we have introduced inter-process communication to reflect the need of components to communicate. Additionally, we also discuss server tasks, budget replenishment and the low-level details needed to deal with the physical reality of systems. While a number of these issues have been studied in previous work in isolation, we focus on the problems discovered and the lessons learned when integrating the solutions. We report on our experiences implementing the proposed mechanisms in a commercial-grade OKL4 microkernel, as well as an application with soft real-time and best-effort tasks on top of it.
Abstract:
ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in such a system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work we develop compositional techniques for the automated scheduling of such partitions and processes. At present, system designers manually schedule partitions based on interactions they have with the partition vendors. This approach is not only time consuming, but can also result in under-utilization of resources. In contrast, the technique proposed in this paper is a principled approach for scheduling ARINC-653 partitions and therefore should facilitate system integration.
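As a simple illustration of partition-level scheduling parameters, the hedged Python sketch below checks the utilization of a set of (period, budget) partition reservations and derives the cyclic major-frame length. The values are hypothetical, and the paper's compositional analysis goes well beyond this check.

# Hedged sketch: assembling a cyclic major-frame length for ARINC-653-style
# partitions from (period, budget) pairs. Values are illustrative; the paper's
# compositional technique derives such budgets automatically.
from math import gcd
from functools import reduce

partitions = [("P1", 25, 5), ("P2", 50, 10), ("P3", 100, 20)]  # (name, period_ms, budget_ms)

utilization = sum(b / p for _, p, b in partitions)
assert utilization <= 1.0, "partitions over-subscribe the processor"

def lcm(a, b):
    return a * b // gcd(a, b)

major_frame = reduce(lcm, (p for _, p, _ in partitions))
print(f"utilization = {utilization:.2f}, major frame = {major_frame} ms")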
Abstract:
A new algorithm is proposed for scheduling preemptible arbitrary-deadline sporadic task systems upon multiprocessor platforms, with interprocessor migration permitted. This algorithm is based on a task-splitting approach: while most tasks are entirely assigned to specific processors, a few tasks (fewer than the number of processors) may be split across two processors. The algorithm can be used for two distinct purposes: for actually scheduling specific sporadic task systems, and for feasibility analysis. Simulation-based evaluation indicates that this algorithm offers a significant improvement in the ability to schedule arbitrary-deadline sporadic task systems as compared to the contemporary state of the art. With regard to feasibility analysis, the new algorithm is proved to offer superior performance guarantees in comparison to prior feasibility tests.
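To illustrate the task-splitting idea in isolation, the following hedged Python sketch fills processors with whole tasks and splits a task's utilization across two processors only when it does not fit. Utilizations are hypothetical, and the actual algorithm and its schedulability analysis in the paper are considerably more involved.

# Hedged sketch of the task-splitting idea: assign whole tasks to processors and,
# when a task does not fit, split its utilization between the current processor
# and the next one. Utilizations are illustrative assumptions.
def assign_with_splitting(task_utils, num_procs, capacity=1.0):
    assignment = {p: [] for p in range(num_procs)}
    p, free = 0, capacity
    for i, u in enumerate(task_utils):
        if u <= free:
            assignment[p].append((i, u))
            free -= u
        else:
            # Split: part of the task stays on processor p, the rest moves on.
            assignment[p].append((i, free))
            remainder = u - free
            p += 1
            if p >= num_procs:
                raise ValueError("task set does not fit")
            assignment[p].append((i, remainder))
            free = capacity - remainder
    return assignment

print(assign_with_splitting([0.6, 0.5, 0.7, 0.2], num_procs=2))  # task 1 is split across both processors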