955 results for event-driven simulation
Abstract:
Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways, which become the endpoints where applications can retrieve and process the data. However, applications also expect an event-driven operational model from a WSN, so that they are notified whenever specific environmental changes occur, instead of continuously analyzing periodically delivered data. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as Internet resources that are accessible by any Internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.
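As a hedged illustration of the virtualization idea described above, the following minimal Python sketch models an event source as a subscribable Internet resource; all names, URIs and payloads are hypothetical and do not reproduce the paper's actual interface.

    # Minimal sketch: a WSN event source virtualized as an Internet resource
    # with a subscription list (hypothetical names and URIs).
    from dataclasses import dataclass, field

    @dataclass
    class EventSource:
        uri: str                  # e.g. "/wsn/neighborhood-1/temperature-rise"
        condition: str            # human-readable trigger description
        subscribers: list = field(default_factory=list)   # callback URLs

        def subscribe(self, callback_url):
            self.subscribers.append(callback_url)

        def fire(self, payload):
            # A real deployment would POST the payload to each subscriber;
            # printing keeps the sketch self-contained and runnable.
            for url in self.subscribers:
                print(f"notify {url}: {payload}")

    source = EventSource("/wsn/neighborhood-1/temperature-rise",
                         "temperature above 30 C for 5 min")
    source.subscribe("http://app.example.org/alerts")
    source.fire({"value": 31.2, "unit": "C"})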
Abstract:
The DNDC (DeNitrification and DeComposition) model was first developed by Li et al. (1992) as a rain-event-driven, process-oriented simulation model for nitrous oxide, carbon dioxide and nitrogen gas emissions from agricultural soils in the U.S. Over the last 20 years, the model has been modified and adapted by various research groups around the world to suit specific purposes and circumstances. The Global Research Alliance Modelling Platform (GRAMP) is a UK-led initiative to establish a purposeful and credible web-based platform initially aimed at users of the DNDC model. With the aim of improving predictions of soil C and N cycling in the context of climate change, the objectives of GRAMP are to: 1) document the existing versions of the DNDC model; 2) create a family tree of the individual DNDC versions; 3) provide information on model use and development; and 4) identify strengths, weaknesses and potential improvements for the model.
Abstract:
SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows partially reconfiguring the FPGA without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at the system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility for maintenance.
The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level against dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at the configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
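The thesis's simulation framework is built on SystemC; purely as a hedged illustration of the event-driven mechanism it relies on, the following self-contained Python sketch models a toy configuration memory, a random upset injector, and a scrubbing Fault Manager as events in a time-ordered queue. All names, sizes and rates are hypothetical.

    import heapq
    import random

    # Toy event-driven C-Layer simulation: frames of configuration memory,
    # random single-event upsets, and a periodic readback-compare-repair scrub.
    class Sim:
        def __init__(self):
            self.now = 0.0
            self.queue = []        # (time, seq, action); seq breaks ties
            self.seq = 0
        def at(self, t, action):
            heapq.heappush(self.queue, (t, self.seq, action))
            self.seq += 1
        def run(self, until):
            while self.queue and self.queue[0][0] <= until:
                self.now, _, action = heapq.heappop(self.queue)
                action()

    FRAMES = 16
    golden = [0] * FRAMES          # golden copy of the configuration
    cmem = list(golden)            # live configuration memory
    sim = Sim()
    rng = random.Random(1)

    def upset():                   # flip one random bit in a random frame
        f = rng.randrange(FRAMES)
        cmem[f] ^= 1 << rng.randrange(32)
        sim.at(sim.now + rng.expovariate(0.01), upset)   # mean 100 time units

    def scrub():                   # fault manager: detect and repair frames
        for f in range(FRAMES):
            if cmem[f] != golden[f]:
                print(f"t={sim.now:8.1f}: repaired frame {f}")
                cmem[f] = golden[f]
        sim.at(sim.now + 50.0, scrub)

    sim.at(rng.expovariate(0.01), upset)
    sim.at(50.0, scrub)
    sim.run(until=500.0)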
Abstract:
Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which nowadays is often based on a Service-Oriented Architecture (SOA). This is particularly applicable to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level, but provide little support for event processing. Our central thesis is that a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
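As a hedged illustration of trading genericity for source-specific semantics, here is a minimal Python sketch of a monitor core that is configured per source with the event types that source can actually signal; all names are hypothetical and do not reproduce the service's real design.

    # Generic monitor core, specialized per source with its event types.
    class EventMonitor:
        def __init__(self, source_name, event_types):
            self.source = source_name
            self.event_types = set(event_types)   # source-specific semantics
            self.handlers = {}

        def on(self, event_type, handler):
            # Refuse subscriptions the source cannot honor, instead of
            # accepting them generically and silently never firing.
            if event_type not in self.event_types:
                raise ValueError(f"{self.source} cannot signal {event_type}")
            self.handlers.setdefault(event_type, []).append(handler)

        def dispatch(self, event_type, data):
            for h in self.handlers.get(event_type, []):
                h(data)

    # A source that only supports 'insert' and 'delete' events:
    db = EventMonitor("orders-db", {"insert", "delete"})
    db.on("insert", lambda d: print("new order:", d))
    db.dispatch("insert", {"id": 42})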
Abstract:
Project for obtaining the Master's degree in Informatics and Computer Engineering (Engenharia Informática e de Computadores)
Abstract:
Using event-driven molecular dynamics simulations, we study a three-dimensional one-component system of spherical particles interacting via a discontinuous potential combining a repulsive square soft core and an attractive square well. In the case of a narrow attractive well, it has been shown that this potential has two metastable gas-liquid critical points. Here we systematically investigate how changes in the parameters of this potential affect the phase diagram of the system. We find a broad range of potential parameters for which the system has both a gas-liquid critical point C1 and a liquid-liquid critical point C2. For the liquid-gas critical point we find that the derivatives of the critical temperature and pressure, with respect to the parameters of the potential, have the same signs: they are positive for increasing width of the attractive well and negative for increasing width and repulsive energy of the soft core. This result resembles the behavior of the liquid-gas critical point for standard liquids. In contrast, for the liquid-liquid critical point the critical pressure decreases as the critical temperature increases. As a consequence, the liquid-liquid critical point exists at positive pressures only in a finite range of parameters. We present a modified van der Waals equation which qualitatively reproduces the behavior of both critical points within some range of parameters, and gives us insight into the mechanisms ruling the dependence of the two critical points on the potential's parameters. The soft-core potential studied here resembles model potentials used for colloids, proteins, and potentials that have been related to liquid metals, raising an interesting possibility that a liquid-liquid phase transition may be present in some systems where it has not yet been observed.
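For readers unfamiliar with event-driven molecular dynamics: between the discontinuities of such a step potential, particles move ballistically, so the time at which a pair reaches a discontinuity radius (hard core, soft-core edge, or well edge) follows from a quadratic equation. Below is a minimal Python sketch of that standard time-of-flight computation, not the authors' code; at the discontinuity itself, velocities would then be updated from energy conservation.

    import math

    def time_to_separation(r, v, d, outward=False):
        """Time until two ballistic particles reach separation d.
        r, v: relative position and velocity (tuples); returns None if never.
        outward=False: approach to a smaller d (core or well entry);
        outward=True : crossing outward through a shell at larger d."""
        a = sum(vi * vi for vi in v)                  # |v|^2
        b = sum(ri * vi for ri, vi in zip(r, v))      # r . v
        c = sum(ri * ri for ri in r) - d * d          # |r|^2 - d^2
        disc = b * b - a * c                          # quadratic discriminant
        if a == 0 or disc < 0:
            return None
        if outward:
            return (-b + math.sqrt(disc)) / a
        t = (-b - math.sqrt(disc)) / a
        return t if t >= 0 and b < 0 else None        # must be approaching

    # Two particles approaching head-on from separation 3, well edge at d=1:
    print(time_to_separation((3.0, 0, 0), (-1.0, 0, 0), 1.0))   # -> 2.0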
Abstract:
BACKGROUND: Ischemic stroke is the leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superiority of the efficacy of terutroban versus aspirin in secondary prevention of cerebrovascular and cardiovascular events. METHODS AND RESULTS: The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged ≥55 years, having suffered an ischemic stroke (≤3 months) or a transient ischemic attack (≤8 days). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last for 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain statistical power of 90%, this requires inclusion of at least 18,000 patients in this event-driven trial. The first patient was randomized in February 2006. CONCLUSIONS: The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event.
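As a hedged aside on where event counts in event-driven trials come from: a common approximation is Schoenfeld's formula. The Python sketch below applies it to the stated assumptions (13% relative risk reduction, 90% power; a two-sided 5% significance level is assumed here, as it is not given in the abstract). It yields an event count in the same range as, though not identical to, the protocol's 2,340, which presumably reflects additional design choices.

    import math
    from statistics import NormalDist

    def schoenfeld_events(hazard_ratio, power=0.90, alpha=0.05):
        """Approximate required number of events for a 1:1 randomized
        event-driven trial (Schoenfeld's formula)."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        return 4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2

    # 13% relative risk reduction -> hazard ratio 0.87:
    print(round(schoenfeld_events(0.87)))
    # -> 2167 events; PERFORM's 2,340 reflects further, unstated assumptions.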
Abstract:
BACKGROUND: Rivaroxaban, an oral factor Xa inhibitor, may provide a simple, fixed-dose regimen for treating acute deep-vein thrombosis (DVT) and for continued treatment, without the need for laboratory monitoring. METHODS: We conducted an open-label, randomized, event-driven, noninferiority study that compared oral rivaroxaban alone (15 mg twice daily for 3 weeks, followed by 20 mg once daily) with subcutaneous enoxaparin followed by a vitamin K antagonist (either warfarin or acenocoumarol) for 3, 6, or 12 months in patients with acute, symptomatic DVT. In parallel, we carried out a double-blind, randomized, event-driven superiority study that compared rivaroxaban alone (20 mg once daily) with placebo for an additional 6 or 12 months in patients who had completed 6 to 12 months of treatment for venous thromboembolism. The primary efficacy outcome for both studies was recurrent venous thromboembolism. The principal safety outcome was major bleeding or clinically relevant nonmajor bleeding in the initial-treatment study and major bleeding in the continued-treatment study. RESULTS: The study of rivaroxaban for acute DVT included 3449 patients: 1731 given rivaroxaban and 1718 given enoxaparin plus a vitamin K antagonist. Rivaroxaban had noninferior efficacy with respect to the primary outcome (36 events [2.1%], vs. 51 events with enoxaparin-vitamin K antagonist [3.0%]; hazard ratio, 0.68; 95% confidence interval [CI], 0.44 to 1.04; P<0.001). The principal safety outcome occurred in 8.1% of the patients in each group. In the continued-treatment study, which included 602 patients in the rivaroxaban group and 594 in the placebo group, rivaroxaban had superior efficacy (8 events [1.3%], vs. 42 with placebo [7.1%]; hazard ratio, 0.18; 95% CI, 0.09 to 0.39; P<0.001). Four patients in the rivaroxaban group had nonfatal major bleeding (0.7%), versus none in the placebo group (P=0.11). CONCLUSIONS: Rivaroxaban offers a simple, single-drug approach to the short-term and continued treatment of venous thrombosis that may improve the benefit-to-risk profile of anticoagulation. (Funded by Bayer Schering Pharma and Ortho-McNeil; ClinicalTrials.gov numbers, NCT00440193 and NCT00439725.).
Abstract:
In this thesis, concurrent communication event handling is implemented using a thread pool approach. Concurrent events are handled with a Reactor design pattern, and multithreading is implemented using a Leader/Followers design pattern. The main focus is to evaluate the behaviour of the implemented model for different numbers of concurrent connections and different numbers of threads. Furthermore, the feasibility of the model in the PeerHood middleware is evaluated. The implemented model is evaluated with a purpose-built test environment which enables concurrent message sending from multiple connections to the system under test. Message round-trip times are measured in the tester application. In the evaluation, processing delay in the system is simulated and the influence of the delay on the average round-trip time is analysed.
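As a hedged illustration of the pattern combination studied here, the following minimal Python sketch shows a reactor loop that demultiplexes readiness events and hands connections to a thread pool for processing. This is the simpler hand-off variant; a strict Leader/Followers design instead rotates which pool thread runs the demultiplexing loop. The port, buffer size, and one-shot echo behaviour are arbitrary choices for the sketch.

    import selectors
    import socket
    from concurrent.futures import ThreadPoolExecutor

    sel = selectors.DefaultSelector()
    pool = ThreadPoolExecutor(max_workers=4)

    def handle(conn):
        data = conn.recv(1024)        # runs on a worker thread
        if data:
            conn.sendall(data)        # echo back (round trip measurable)
        conn.close()                  # one-shot per connection, for brevity

    def accept(srv):
        conn, _ = srv.accept()
        sel.register(conn, selectors.EVENT_READ, handle)

    srv = socket.socket()
    srv.bind(("127.0.0.1", 9009))
    srv.listen()
    sel.register(srv, selectors.EVENT_READ, accept)

    while True:                       # reactor loop: demultiplex, dispatch
        for key, _ in sel.select():
            cb = key.data
            if cb is accept:
                cb(key.fileobj)       # new connection on the server socket
            else:
                sel.unregister(key.fileobj)   # hand readable conn to pool
                pool.submit(cb, key.fileobj)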
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above mentioned formal methods. In this thesis, we study how guarded command based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
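As a hedged illustration of the default execution model described above (not of Event-B or Action Systems tooling), here is a minimal Python sketch: guarded commands over a shared state, with one enabled command chosen nondeterministically per step. A scheduler, in the sense of the thesis, would replace the random choice with an explicit policy.

    import random

    # A guarded command is a (guard, action) pair over a shared state dict.
    def run(state, commands, rng=random.Random(0)):
        while True:
            enabled = [(g, a) for g, a in commands if g(state)]
            if not enabled:
                return state                  # no guard holds: termination
            _, action = rng.choice(enabled)   # nondeterministic choice
            action(state)

    # Toy model: two counters that may each step while below a bound.
    cmds = [
        (lambda s: s["x"] < 3, lambda s: s.__setitem__("x", s["x"] + 1)),
        (lambda s: s["y"] < 2, lambda s: s.__setitem__("y", s["y"] + 1)),
    ]
    print(run({"x": 0, "y": 0}, cmds))        # -> {'x': 3, 'y': 2}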
Abstract:
The theoretical research of the study focused on business process management and business process modeling; the goal was to find a new business process modeling method for an electrical accessories manufacturing enterprise. The focus was to find a few candidate business process modeling methods from which the company could choose the best one for its needs. The study was carried out as qualitative research, with an action study and a case study as the most important ways to collect data. In the empirical part of the study, examples of the company's processes modeled with the new modeling method, as well as the process modeling process itself, are presented. The new way of modeling processes especially improves the visual presentation of the processes and improves the understanding of how employees should work at the organizational interfaces of the process and at the interfaces between different processes. The result of the study is a new unified way to model the company's processes, which makes it easier to understand and create the process models. This improved readability makes it possible to reduce the costs that were created by the unclear old process models.
Abstract:
The purpose of the thesis is to examine the long-term performance persistence and relative performance of hedge funds during bear and bull market periods. The performance metrics applied for fund rankings are raw return, Sharpe ratio, mean-variance ratio and the strategy distinctiveness index, calculated from the original and clustered data, respectively. Four different length combinations of selection and holding periods are employed. Persistence is examined using a decile and quartile portfolio formation approach, with Sharpe ratio and SKASR as performance metrics. Relative performance persistence is examined by comparing hedge portfolio returns during varying stock market conditions. The data are gathered from a private database covering 10,789 hedge funds, and the time horizon is set from January 1990 to December 2012. The results of this thesis suggest that long-term performance persistence of hedge funds exists. The degree of persistence also depends on the performance metrics employed and on the length combination of the selection and holding periods. The best performance persistence results were obtained in the decile portfolio analysis based on Sharpe ratio rankings, for the combination of a 12-month selection period and a holding period of equal length. The results also suggest that the best performance persistence occurs in the Event Driven and Multi strategies. A dummy regression analysis shows that a relationship between hedge fund and stock market returns exists. Based on the results, the Dedicated Short Bias, Global Macro, Managed Futures and Other strategies perform well during bear market periods. The results also indicate that the Market Neutral strategy is not absolutely market neutral, and that the Event Driven strategy has the best performance among all hedge strategies.
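As a hedged illustration of the selection/holding-period methodology (not the thesis's actual code or data), here is a minimal Python sketch that ranks funds by Sharpe ratio over a selection window and forms ranked portfolios; the toy example uses three funds and three groups instead of deciles.

    import statistics

    def sharpe(returns, rf=0.0):
        ex = [r - rf for r in returns]
        return statistics.mean(ex) / statistics.stdev(ex)

    def ranked_portfolios(fund_returns, sel_end, n=10):
        """Rank funds by Sharpe ratio over months [0, sel_end) and split
        into n groups; returns lists of fund names, best group first."""
        ranked = sorted(fund_returns,
                        key=lambda f: sharpe(fund_returns[f][:sel_end]),
                        reverse=True)
        size = max(1, len(ranked) // n)
        return [ranked[i:i + size] for i in range(0, len(ranked), size)]

    # Persistence would then be assessed by tracking each group's returns
    # over the subsequent holding window. (Toy data, 12-month selection.)
    funds = {
        "A": [0.02, 0.03] * 12,
        "B": [0.01, 0.02] * 12,
        "C": [-0.01, 0.02] * 12,
    }
    print(ranked_portfolios(funds, sel_end=12, n=3))  # [['A'], ['B'], ['C']]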
Abstract:
In the theoretical part of the thesis, process redesign, process modeling and the construction of process metrics were studied. The goal of the thesis was to redesign the organization's certification process. To reach this goal, the current and the new process had to be modeled, and a set of metrics had to be built that would give the organization valuable information on how efficiently the new process performs. The work was carried out as participatory action research. The author of the thesis had already worked in the target organization for several years and was therefore able to use this knowledge both in modeling the current process and in designing the new one. The result of the work was a new certification process that is leaner and more efficient than its predecessor. A new metrics system was built with which the organization's management can monitor the efficiency of the process stakeholders and the development of product quality. As a by-product, the organization obtained detailed process descriptions that can be used as training material when recruiting new personnel and as an informative tool when presenting the process to official certification bodies.
Abstract:
The perovskite crystal structure is host to many different materials, from insulating to superconducting, providing a diverse range of intrinsic character and complexity. A better fundamental description of these materials in terms of their electronic, optical and magnetic properties undoubtedly precedes an effective realization of their application potential. SmTiO3, a distorted perovskite, has a strongly localized electronic structure and undergoes an antiferromagnetic transition at 50 K in its nominally stoichiometric form. Sr2RuO4 is a layered perovskite superconductor (Tc ≈ 1 K) bearing the same structure as the high-temperature superconductor La2-xSrxCuO4. Polarized reflectance measurements were carried out on both of these materials, revealing several interesting features in the far-infrared range of the spectrum. In the case of SmTiO3, although insulating, evidence indicates the presence of a finite background optical conductivity. As the temperature is lowered through the ordering temperature, a resonance feature appears to narrow and strengthen near 120 cm^-1. A nearby phonon mode appears to also couple to this magnetic transition, as revealed by a growing asymmetry in the optical conductivity. Experiments on a doped sample with a greater itinerant character and a lower Néel temperature (≈ 40 K) also indicate the presence of this strongly temperature-dependent mode, even at twice the ordering temperature. Although the mode appears to be sensitive to the magnetic transition, it is unclear whether a magnon assignment is appropriate. At the very least, evidence suggests an interesting interaction between magnetic and electronic excitations. Although Sr2RuO4 is highly anisotropic, it is metallic in three dimensions at low temperatures and reveals its coherent transport in an inter-plane Drude-like component up to the highest temperatures measured (i.e. 90 K). An extended Drude analysis is used to probe the frequency-dependent scattering character, revealing a peak in both the mass enhancement and the scattering rate near 80 cm^-1 and 100 cm^-1, respectively. All of these experimental observations appear relatively consistent with a Fermi-liquid picture of charge transport. To supplement the optical measurements, a resistivity station was set up with an event-driven, object-oriented user interface. The program controls a Keithley Current Source, HP Nano-Voltmeter and Switching Unit, as well as a LakeShore Temperature Controller, in order to obtain a plot of the resistivity as a function of temperature. The system allows for resistivity measurements ranging from 4 K to 290 K using an external probe, or between 0.4 K and 295 K using a Helium-3 cryostat. Several materials of known resistivity have confirmed the system to be robust and capable of resolving features of several μΩ·cm in metallic samples.
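As a hedged aside on the extended Drude analysis mentioned above: in the standard Gaussian-units formulation, the frequency-dependent scattering rate and mass enhancement follow from the inverse complex conductivity, given a plasma frequency. The Python sketch below encodes these textbook relations and checks them against a plain Drude form; it is not the author's analysis code, and the numbers are arbitrary.

    import math

    # Extended Drude relations (Gaussian units), from complex sigma(w):
    #   1/tau(w)  = (wp^2 / 4 pi)     * Re[1/sigma(w)]
    #   m*(w)/m_b = (wp^2 / 4 pi w)   * Im[-1/sigma(w)]
    def extended_drude(omega, sigma, omega_p):
        inv_sigma = 1.0 / sigma
        rate = omega_p**2 / (4 * math.pi) * inv_sigma.real
        mass = omega_p**2 / (4 * math.pi * omega) * (-inv_sigma.imag)
        return rate, mass

    # Sanity check with a simple Drude conductivity: should recover the
    # constant input rate and a mass enhancement of 1.
    wp, g, w = 1.0e4, 100.0, 80.0                 # all in cm^-1 units
    sigma = wp**2 / (4 * math.pi) / complex(g, -w)
    print(extended_drude(w, sigma, wp))           # -> approx (100.0, 1.0)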
Abstract:
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, and prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive, statistical measurement model. This approach assumes that the measurements are statistically block-independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
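As a hedged illustration of the matching step (with a hypothetical graph and measurement models, not the paper's), here is a minimal Python sketch of beam Viterbi search over a model graph: each partial path is scored by the log-likelihood its current node's measurement model assigns to the observations, and only the best few partial paths are kept after each measurement.

    import math

    # graph: node -> list of successor nodes; loglik(node, z): log-likelihood
    # of measurement z under that node's statistical measurement model.
    def beam_viterbi(graph, loglik, start, measurements, beam=5):
        paths = {(start,): 0.0}                    # partial path -> log score
        for z in measurements:
            scored = {}
            for path, score in paths.items():
                node = path[-1]
                for nxt in [node] + graph[node]:   # stay in state or move on
                    s = score + loglik(nxt, z)
                    key = path + (nxt,)
                    if s > scored.get(key, -math.inf):
                        scored[key] = s
            # keep only the `beam` best-scoring partial paths
            paths = dict(sorted(scored.items(), key=lambda kv: -kv[1])[:beam])
        return max(paths.items(), key=lambda kv: kv[1])

    # Toy run: two contact states with Gaussian force models at 0.0 and 1.0.
    graph = {"free": ["contact"], "contact": []}
    ll = lambda n, z: -0.5 * (z - (0.0 if n == "free" else 1.0)) ** 2
    print(beam_viterbi(graph, ll, "free", [0.1, 0.2, 0.9, 1.1]))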