870 results for Time Based Management (TBM)
Abstract:
In this paper, we present some of the fault tolerance management mechanisms being implemented in the Multi-μ architecture, namely its support for replica non-determinism. In this architecture, fault tolerance is achieved by node active replication, with software-based replica management and transparent fault tolerance algorithms. A software layer implemented between the application and the real-time kernel, the Fault Tolerance Manager (FTManager), is responsible for the transparent incorporation of the fault tolerance mechanisms. The active replication model can be implemented either by imposing replica determinism or by keeping replica consistency at critical points, by means of interactive agreement mechanisms. One of the goals of the Multi-μ architecture is to identify such critical points, relieving the underlying system from performing interactive agreement at every Ada dispatching point.
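To illustrate the idea of restricting interactive agreement to critical points, the following minimal Python sketch (hypothetical names, not the FTManager implementation) shows replicas that compute independently and only run a majority-vote agreement step when a critical point is reached:

```python
# Minimal sketch, not the Multi-μ/FTManager implementation: replicas compute
# possibly non-deterministic results and only pay the cost of an interactive
# agreement step (here, a simple majority vote) at designated critical points.
import random
from collections import Counter


def interactive_agreement(results):
    """Agree on a single value among replica results via majority vote."""
    value, _ = Counter(results).most_common(1)[0]
    return value


def replicated_step(replicas, critical_point):
    results = [replica() for replica in replicas]   # independent execution
    if critical_point:
        agreed = interactive_agreement(results)     # restore consistency
        return [agreed] * len(results)
    return results                                  # determinism not enforced


if __name__ == "__main__":
    replicas = [lambda: random.choice(["a", "a", "b"]) for _ in range(3)]
    print(replicated_step(replicas, critical_point=True))
```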
Abstract:
Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reaction to events must be done in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a globally feasible QoS level while allowing individual nodes to autonomously adapt under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, as this adversely impacts the service levels provided at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism where coordination tasks need to be time-dependent. As the reported results demonstrate, it requires fewer messages to be exchanged and reaches a globally acceptable near-optimal solution faster than other available approaches.
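A minimal sketch of how such feedback-driven adaptation might look is given below (the update rule and parameters are assumptions for illustration, not the paper's protocol): each node moves towards its desired service level, while an overload signal derived from the group pushes greedy nodes back, so the nodes settle on a feasible global level.

```python
# Minimal sketch of feedback-based decentralised QoS adaptation (illustrative
# assumptions only): each node raises its service level towards its local
# target (positive feedback) but is pushed back when the group reports
# overload (negative feedback), converging to a feasible global level.

def adapt(levels, targets, capacity, gain=0.3, penalty=0.5, rounds=50):
    levels = list(levels)
    for _ in range(rounds):
        overload = max(0.0, sum(levels) - capacity)    # neighbours' complaint
        for i, target in enumerate(targets):
            push_up = gain * (target - levels[i])          # positive feedback
            push_down = penalty * overload / len(levels)   # negative feedback
            levels[i] = max(0.0, levels[i] + push_up - push_down)
    return levels


if __name__ == "__main__":
    print(adapt(levels=[0.2, 0.2, 0.2], targets=[1.0, 0.8, 0.6], capacity=1.5))
```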
Abstract:
The new generations of SRAM-based FPGA (field programmable gate array) devices are the preferred choice for the implementation of reconfigurable computing platforms intended to accelerate processing in real-time systems. However, the vulnerability of FPGAs to hard and soft errors is a major weakness for robust configurable system design. In this paper, a novel built-in self-healing (BISH) methodology, based on run-time self-reconfiguration, is proposed. A soft microprocessor core implemented in the FPGA is responsible for the management and execution of all the BISH procedures. Fault detection and diagnosis are followed by repair actions, taking advantage of the dynamic reconfiguration features offered by new FPGA families. Meanwhile, modular redundancy ensures that the system continues to work correctly.
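The BISH flow can be pictured with the following illustrative control loop (the helpers run_test and reconfigure_region are hypothetical placeholders, not a vendor API), in which a faulty region is repaired through run-time self-reconfiguration while redundancy keeps the system operating:

```python
# Illustrative control loop for a built-in self-healing (BISH) flow as it
# might run on the embedded soft-core processor; the hardware interactions
# are stubbed out as plain Python callables for the sake of the sketch.

def bish_cycle(regions, run_test, reconfigure_region):
    for region in regions:
        if run_test(region):
            continue                       # region healthy, nothing to do
        reconfigure_region(region)         # repair by self-reconfiguration
        status = "repaired" if run_test(region) else "hard fault, remap needed"
        print(region, status)


if __name__ == "__main__":
    faulty = {"R2"}                        # simulated soft error in region R2
    bish_cycle(
        regions=["R1", "R2", "R3"],
        run_test=lambda r: r not in faulty,
        reconfigure_region=lambda r: faulty.discard(r),  # soft error cleared
    )
```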
Abstract:
The intensive use of distributed generation based on renewable resources increases the complexity of power systems management, particularly of short-term scheduling. Demand response, storage units, and electric and plug-in hybrid vehicles also pose new challenges to short-term scheduling. However, these distributed energy resources can contribute significantly to making short-term scheduling more efficient and effective, improving power system reliability. This paper proposes a short-term scheduling methodology based on two distinct time horizons: hour-ahead scheduling and real-time scheduling, from the point of view of an aggregator agent. In each scheduling process, it is necessary to update the operating state of generation and consumption, as well as the status of storage units and electric vehicles. Besides the new operating conditions, more accurate forecast values of wind generation and consumption become available, resulting from short-term and very short-term forecasting methods. In this paper, the aggregator has the main goal of maximizing its profits while fulfilling the established contracts with the aggregated and external players.
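As a rough illustration of the two-horizon idea (the dispatch rule, prices, and forecast figures below are invented, not the paper's model), an hour-ahead plan can be recomputed in real time with updated wind and consumption forecasts, charging storage when wind exceeds demand and discharging otherwise:

```python
# Toy two-horizon dispatch sketch: the same greedy rule is applied to the
# hour-ahead forecasts and to the updated real-time forecasts, showing how
# the aggregator's cash flow changes when forecasts are revised.

def dispatch(wind, demand, soc, capacity, price_buy, price_sell):
    """Charge storage with surplus wind, discharge on deficit, trade the rest."""
    surplus = wind - demand
    if surplus >= 0:
        charge = min(surplus, capacity - soc)        # store what fits
        return soc + charge, (surplus - charge) * price_sell
    discharge = min(-surplus, soc)                   # cover the deficit
    return soc - discharge, -(-surplus - discharge) * price_buy


if __name__ == "__main__":
    plans = {"hour-ahead": [(6.0, 3.0), (1.0, 5.0)],   # (wind, demand) forecasts
             "real-time": [(5.0, 3.5), (1.5, 6.0)]}    # updated forecasts
    for label, series in plans.items():
        soc, profit = 2.0, 0.0
        for wind, demand in series:
            soc, cash = dispatch(wind, demand, soc, capacity=4.0,
                                 price_buy=60.0, price_sell=40.0)
            profit += cash
        print(label, round(profit, 1))
```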
Abstract:
Recent and future changes in power systems, mainly in the context of smart grid operation, are associated with a high complexity of power network operation. This leads to more complex communications and to higher levels of monitoring and control of network elements, from both the network's and the consumers' standpoints. The present work focuses on a real scenario of the LASIE laboratory, located at the Polytechnic of Porto. Laboratory systems are managed by the SCADA House Intelligent Management (SHIM) system, already developed by the authors on top of a SCADA system. SHIM's capabilities have recently been improved by including real-time simulation from Opal RT, which makes possible the integration of Matlab®/Simulink® real-time simulation models. The main goal of the present paper is to compare the advantages of the resulting improved system while managing the energy consumption of a domestic consumer.
Abstract:
The use of Electric Vehicles (EVs) will significantly change the planning and management of power systems in the near future. This paper proposes a real-time tariff strategy for the EV charging process. The main objective is to evaluate the influence of real-time tariffs on EV owners' behaviour and also the impact on the load diagram. The paper proposes that the energy price vary according to the relation between wind generation and power consumption. The proposed strategy was tested on two different days in the Danish power system. January 31st and August 13th, 2013 were selected because of their high levels of wind generation. The main goal is to evaluate how the EV charging diagram changes with the energy price, preventing wind curtailment.
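A minimal sketch of such a tariff (the scaling constants are assumptions for illustration, not the proposed pricing rule) makes the charging price fall as the wind-to-consumption ratio rises, encouraging EV owners to charge when wind generation is abundant:

```python
# Toy real-time tariff driven by the wind-to-consumption ratio: the higher
# the share of wind in current consumption, the cheaper the charging price.

def charging_price(wind_mw, consumption_mw, base_price=0.20, floor=0.05):
    ratio = min(wind_mw / consumption_mw, 1.0) if consumption_mw > 0 else 1.0
    return max(floor, base_price * (1.0 - ratio))      # price in EUR/kWh


if __name__ == "__main__":
    for wind, load in [(500, 4000), (2500, 4000), (3900, 4000)]:
        print(wind, load, round(charging_price(wind, load), 3))
```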
Abstract:
It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failures, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with a significantly lower overhead compared to a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
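The passive replication idea can be sketched as follows (class and function names are hypothetical, not the proposed framework): the primary pushes cheap state updates to a dormant replica, which is only activated, with the last checkpointed state, when a failure is detected.

```python
# Minimal primary-backup sketch: applying state updates is cheap, and the
# backup only starts executing after a failure of the primary is detected.

class PassiveReplica:
    def __init__(self):
        self.state = None
        self.active = False

    def apply_update(self, state):       # cheap: just store the checkpoint
        self.state = state

    def activate(self):                  # promoted only after a failure
        self.active = True
        return self.state


def run(primary_steps, backup, fail_at):
    state = 0
    for step in range(primary_steps):
        if step == fail_at:              # failure detected on the primary
            state = backup.activate()
            break
        state += 1                       # primary does the real work
        backup.apply_update(state)       # periodic state update
    return state, backup.active


if __name__ == "__main__":
    print(run(primary_steps=10, backup=PassiveReplica(), fail_at=4))
```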
Abstract:
Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the possibility of abort conflicting transactions to maintain data consistency, which in turns affects the execution time of tasks carrying transactions. Because of this fact the timing behaviour of the task set may not be predictable, thus it is crucial to limit the execution time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. Then, we propose and evaluate two non-preemptive and one SRP-based fully-preemptive scheduling strategies, in order to avoid transaction starvation.
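The FIFO commit ordering can be illustrated with the sketch below (an illustration of the idea, not the formalised algorithm or its scheduling analysis): transactions take a ticket when they start and may only commit when their ticket reaches the head of the queue, so earlier transactions cannot be starved by later ones.

```python
# Toy FIFO commit ordering for software transactions: a transaction commits
# only when it is the oldest live transaction; otherwise it must retry.
import itertools
from collections import deque

_ticket = itertools.count()
_queue = deque()


def begin():
    ticket = next(_ticket)
    _queue.append(ticket)
    return ticket


def try_commit(ticket):
    """Commit succeeds only when this transaction is the oldest live one."""
    if _queue and _queue[0] == ticket:
        _queue.popleft()
        return True
    return False          # caller must retry (or abort and re-execute)


if __name__ == "__main__":
    t1, t2 = begin(), begin()
    print(try_commit(t2))  # False: t1 started earlier and has not committed
    print(try_commit(t1))  # True
    print(try_commit(t2))  # True: now t2 is the oldest
```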
Abstract:
Report on the scientific sojourn at the University of California at Berkeley, USA, from September 2007 until July 2008. Communities of Learning Practice is an innovative paradigm focused on providing appropriate technological support to both formal and, especially, informal learning groups, which are chiefly formed by non-technical people and lack the necessary resources to acquire such systems. Typically, students who are often separated by geography and/or time need to meet after classes in small study groups to carry out specific learning activities assigned during the formal learning process. However, the lack of suitable and available groupware applications makes it difficult for these groups of learners to collaborate and achieve their specific learning goals. In addition, the lack of democratic decision-making mechanisms is a main handicap to replacing the central authority of knowledge present in formal learning.
Abstract:
When applying a Collaborative Learning Flow Pattern (CLFP) to structure sequences of activities in real contexts, one of the tasks is to organize groups of students according to the constraints imposed by the pattern. Sometimes, unexpected events occurring at runtime force this pre-defined distribution to be changed. In such situations, the group structures need to be adjusted and adapted to the new context. If the collaborative pattern is complex, this group redefinition might be difficult and time-consuming to carry out in real time. In this context, technology can help by notifying the teacher of the incompatibilities between the actual context and the constraints imposed by the pattern. This chapter presents a flexible solution for supporting teachers in group organization that profits from the intrinsic constraints defined by CLFPs codified in IMS Learning Design. A prototype of a web-based tool for the TAPPS and Jigsaw CLFPs and the preliminary results of a controlled user study are also presented as a first step towards flexible technological systems to support grouping tasks in this context.
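As an illustration of the kind of constraint checking involved (the encoding below is an assumption, not the IMS Learning Design representation used by the prototype), a Jigsaw group structure can be validated by requiring each jigsaw group to contain one member from every expert group, flagging groups that violate the pattern after a runtime change:

```python
# Toy constraint check for the Jigsaw CLFP: each jigsaw group must contain
# exactly one member from each expert topic; violating groups are reported
# so the teacher can be notified.

def check_jigsaw(jigsaw_groups, expert_of):
    expert_topics = sorted(set(expert_of.values()))
    problems = []
    for name, members in jigsaw_groups.items():
        topics = sorted(expert_of[m] for m in members)
        if topics != expert_topics:
            problems.append(name)
    return problems


if __name__ == "__main__":
    expert_of = {"ana": "T1", "bea": "T2", "carl": "T3", "dan": "T1", "eva": "T2"}
    groups = {"G1": ["ana", "bea", "carl"], "G2": ["dan", "eva"]}  # G2 lacks T3
    print(check_jigsaw(groups, expert_of))   # ['G2']
```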
Abstract:
OBJECTIVE: The European Panel on the Appropriateness of Crohn's disease Therapy (EPACT) has developed appropriateness criteria. We have applied these criteria retrospectively to the population-based inception cohort of Crohn's disease (CD) patients of the European Collaborative Study Group on Inflammatory Bowel Disease (EC-IBD). MATERIAL AND METHODS: A total of 426 diagnosed CD patients from 13 European centers were enrolled at the time of diagnosis (first flare, naive patients). We used the EPACT definitions to identify 247 patients with active luminal CD. We then assessed the appropriateness of the initial drug prescription according to the EPACT criteria. RESULTS: Among the cohort patients, 163 suffered from mild-to-moderate CD and 84 from severe CD. In the mild-to-moderate disease group, 96 patients (59%) received an appropriate treatment, whereas for 66 patients (40%) the treatment was uncertain and in one case (1%) inappropriate. In the severe disease group, 86% were treated medically and 14% required surgery; 59 patients (70%) were appropriately treated, whereas for one patient (1%) the procedure was considered uncertain and for 24 patients (29%) inappropriate. CONCLUSION: Initial treatment was appropriate in the majority of cases of non-complicated luminal CD. Inappropriate or uncertain treatment was given to a significant minority of patients, with an increased potential risk of adverse events.
Abstract:
Geographic information systems (GIS) and artificial intelligence (AI) techniques were used to develop an intelligent snow removal asset management system (SRAMS). The system has been evaluated through a case study examining snow removal from the roads in Black Hawk County, Iowa, for which the Iowa Department of Transportation (Iowa DOT) is responsible. The SRAMS comprises an expert system that contains the logical rules and expertise of the Iowa DOT's snow removal experts in Black Hawk County, and a geographic information system to access and manage road data. The system is implemented on a mid-range PC by integrating MapObjects 2.1 (a GIS package), Visual Rule Studio 2.2 (an AI shell), and Visual Basic 6.0 (a programming tool). It can be used to efficiently generate prioritized snowplowing routes in visual format, to optimize the allocation of assets for plowing, and to track materials (e.g., salt and sand). A test of the system reveals an improvement in snowplowing time of 1.9 percent for moderate snowfall and 9.7 percent for snowstorm conditions over the current manual system.
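A toy version of the rule-based prioritisation step might look as follows (the rules and weights are illustrative assumptions, not the Iowa DOT expertise encoded in the SRAMS expert system):

```python
# Toy rule-based snowplowing prioritisation: higher traffic volume, steeper
# grades and school-bus routes raise a road segment's priority score.

def priority(segment):
    score = segment["aadt"] / 1000.0            # average annual daily traffic
    if segment["school_bus_route"]:
        score += 5.0
    if segment["grade_percent"] > 4.0:
        score += 3.0
    return score


def plow_order(segments):
    return sorted(segments, key=priority, reverse=True)


if __name__ == "__main__":
    roads = [
        {"name": "US-20", "aadt": 18000, "school_bus_route": False, "grade_percent": 2.0},
        {"name": "C-57", "aadt": 2500, "school_bus_route": True, "grade_percent": 5.0},
    ]
    print([r["name"] for r in plow_order(roads)])
```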
Abstract:
This project develops a smartphone-based prototype system that supplements the 511 system to improve its dynamic traffic routing service to state highway users under non-recurrent congestion. This system will considerably reduce the time needed to provide crucial traffic information and en-route assistance to travelers, helping them avoid being trapped in traffic congestion due to accidents, work zones, hazards, or special events. It also creates a feedback loop between travelers and responsible agencies that enables the state to effectively collect, fuse, and analyze crowd-sourced data for next-gen transportation planning and management. This project can result in substantial economic savings (e.g., less traffic congestion, reduced fuel wastage and emissions) and safety benefits for the freight industry and society, due to better dissemination of real-time traffic information by highway users. Such benefits will increase significantly in the future with the expected increase in freight traffic on the network. The proposed system also has the flexibility to be integrated with various transportation management modules to assist state agencies in improving transportation services and daily operations.
Abstract:
This Master's thesis studies the implementation of real-time activity-based costing in the information system of a Finnish SME that manufactures laser chips. In addition, the effects of activity-based costing on operational activities and on activity-based management are examined. The literature part of the thesis covers, based on literature sources, the theories of activity-based costing, costing methods, and the technologies used in the technical implementation. In the implementation part, a web-based activity-based costing system was designed and implemented to support the case company's cost accounting and financial administration. The tool was integrated into the company's enterprise resource planning and manufacturing execution systems. Compared with traditional data collection systems for activity-based costing models, in the case company the inputs to the activity-based costing system arrive in real time as part of a larger information system integration. The thesis aims to establish a relationship between the requirements of activity-based costing and database systems. The company can use the activity-based costing system, for example, in product pricing and cost accounting by viewing product-related costs from different perspectives. Conclusions can be drawn based on accurate cost information, and the data produced by the system can be used to determine whether developing a particular project, customer relationship, or product is economically viable.
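A minimal sketch of the underlying activity-based costing calculation (the activities, cost drivers, and figures are invented for illustration, not the case company's data) divides each activity's cost pool by its driver volume to obtain a rate and charges products by the driver units they consume, as real-time inputs from the ERP/MES integration would be used:

```python
# Toy activity-based costing: activity rate = cost pool / driver volume,
# product cost = direct cost + sum of consumed driver units * rates.

def activity_rates(cost_pools, driver_volumes):
    return {a: cost_pools[a] / driver_volumes[a] for a in cost_pools}


def product_cost(consumption, rates, direct_cost):
    return direct_cost + sum(units * rates[a] for a, units in consumption.items())


if __name__ == "__main__":
    rates = activity_rates(
        cost_pools={"setup": 12000.0, "testing": 8000.0},      # EUR / period
        driver_volumes={"setup": 300, "testing": 1600},        # driver units
    )
    print(round(product_cost({"setup": 2, "testing": 10}, rates, direct_cost=55.0), 2))
```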
Abstract:
The subject of this research is forecasting the capacity needs of the Fenix information system developed by TietoEnator Oy. The goal of the work is to become familiar with the different subsystems of the Fenix system, to find a way to separate and model the impact of each subsystem on the system load, and to determine, on a preliminary basis, which parameters affect the load generated by those subsystems. Part of this work is to study different alternatives for simulation and to assess their suitability for modelling complex systems. Based on the collected data, a simulation model describing the load on the system's data warehouse is created. By using information obtained from the model and measurements from the production system, the model is refined to correspond ever more closely to the behaviour of the real system. From the model, the simulated system load and the behaviour of queues are examined, for example. From the production system, changes in the behaviour of different load sources are measured, for example in relation to the number of users and the time of day. The results of this work are intended to serve as a basis for later follow-up research, in which the parameterisation of the subsystems is further refined, the model's ability to describe the real system is improved, and the scope of the model is extended.
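A minimal queueing sketch of the kind of load model involved (arrival and service rates are invented, not Fenix measurements) shows how the mean waiting time grows as the request rate from a load source increases, for example with the number of users:

```python
# Toy single-server queue simulation: requests arrive with exponential
# inter-arrival times and exponential service times; the mean waiting time
# is reported for increasing arrival rates.
import random


def simulate(arrival_rate, service_rate, horizon=10000.0, seed=1):
    random.seed(seed)
    next_arrival, busy_until = 0.0, 0.0
    queue_time, served = 0.0, 0
    while next_arrival < horizon:
        t = next_arrival
        start = max(t, busy_until)               # wait if the server is busy
        queue_time += start - t
        busy_until = start + random.expovariate(service_rate)
        served += 1
        next_arrival += random.expovariate(arrival_rate)
    return queue_time / served                   # mean waiting time


if __name__ == "__main__":
    for users in (50, 100, 150):                 # arrival rate grows with users
        print(users, round(simulate(arrival_rate=users / 1000.0, service_rate=0.2), 2))
```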