980 results for event-driven
Abstract:
BACKGROUND: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (from 42 total), while the worst had eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.
Abstract:
The DNDC (DeNitrification and DeComposition) model was first developed by Li et al. (1992) as a rain-event-driven, process-oriented simulation model of nitrous oxide, carbon dioxide and nitrogen gas emissions from agricultural soils in the U.S. Over the last 20 years, the model has been modified and adapted by various research groups around the world to suit specific purposes and circumstances. The Global Research Alliance Modelling Platform (GRAMP) is a UK-led initiative to establish a purposeful and credible web-based platform, initially aimed at users of the DNDC model. With the aim of improving predictions of soil C and N cycling in the context of climate change, the objectives of GRAMP are: 1) to document the existing versions of the DNDC model; 2) to create a family tree of the individual DNDC versions; 3) to provide information on model use and development; and 4) to identify strengths, weaknesses and potential improvements for the model.
Abstract:
Business information has become a critical asset for companies, and it has even more value when obtained and exploited in real time. This paper analyses how to integrate this information into an existing banking Enterprise Architecture following an event-driven approach, and entails the study of three main issues: the definition of business events; the specification of a reference architecture, which identifies the specific integration points; and the description of a governance approach to manage the new elements. All the proposed solutions have been validated with a proof-of-concept test bed in an open source environment, based on a case study from the banking sector that allows an operational validation to be carried out as well as ensuring compliance with non-functional requirements, with the focus placed on performance.
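The abstract names business-event definition as the first integration issue but gives no schema. Purely as a hedged illustration of what such a definition might look like, here is a minimal Java sketch; every name and field in it is an assumption for illustration, not taken from the paper.

    import java.time.Instant;
    import java.util.Map;

    // Hypothetical sketch of a business event as it might be defined in an
    // event-driven banking architecture. All names and fields are invented.
    public record BusinessEvent(
            String eventType,           // e.g. "payment.executed"
            Instant occurredAt,         // when the business fact happened
            String sourceSystem,        // originating core-banking component
            Map<String, Object> payload) {

        // Factory method stamping the event with the current time.
        public static BusinessEvent of(String type, String source, Map<String, Object> payload) {
            return new BusinessEvent(type, Instant.now(), source, payload);
        }

        public static void main(String[] args) {
            System.out.println(of("payment.executed", "core-banking",
                    Map.of("amount", 120.5, "currency", "EUR")));
        }
    }

An immutable event type of this kind keeps producers and consumers decoupled: only the event contract is shared, never the systems behind it.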
Abstract:
As society advances, the amount of data stored in information systems and processed by software applications and servers grows exponentially. Moreover, new technologies have entrusted their development to the internationally connected network: the Internet. As a consequence, machine-to-machine (M2M) connections over the Internet have been exploited, giving rise to the concept of the "Internet of Things", a network of devices and terminals in which any everyday object can establish connections with other objects or with a smartphone through the services deployed on that network. However, these new data and events must be processed in real time and efficiently, in order to react to any situation. Event-driven architectures address the challenge of understanding real-time message exchange: an EDA (Event-Driven Architecture) makes it possible to implement a software architecture with an exhaustive definition of the messages, notifying the user of the events that have occurred around them and of the actions taken in response. This final-year project (TFG) focuses on the study of event-driven architectures, contrasting them with the other principal architectural patterns. The comparison considers the non-functional requirements of each, such as security against external threats. The main objective is the study of EDA (Event-Driven Architecture) and its relationship to the Internet of Things, which allows any device to access the services deployed on that network via the Internet. The aim of the TFG is to observe and verify the advantages of this architecture, owing to its immediate nature, through the sending and receiving of messages in real time and asynchronously. A state-of-the-art survey of these software architecture patterns, as well as of the IoT (Internet of Things) network and its services, has also been carried out. In addition, a simulation of a complete EDA has been developed alongside this TFG, with all of its elements: producers, consumers and a complex event processor, together with data visualization. To highlight the services provided by the IoT network and their relationship to an EDA, a simulation of a personalized tele-assistance service has been implemented. This proof of concept helped reinforce and deepen the knowledge acquired through the theoretical study of an EDA. It was implemented in the Java programming language using the open-source solutions RabbitMQ and Esper, joined by the AMQP standard to complete the message transfer correctly.
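The project's stack is named in the abstract (Java, RabbitMQ, Esper, AMQP) but no code is shown. As a minimal sketch of the producer/consumer pair at the heart of such an EDA, assuming a local RabbitMQ broker and the RabbitMQ Java client, with a queue name invented for illustration:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import java.nio.charset.StandardCharsets;

    // Minimal producer/consumer pair over RabbitMQ (AMQP), sketching the kind
    // of event flow the project describes. The queue name and broker host are
    // assumptions for illustration, not taken from the project.
    public class EdaSketch {
        private static final String QUEUE = "teleassistance.events"; // hypothetical

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumes a local broker

            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                channel.queueDeclare(QUEUE, false, false, false, null);

                // Consumer: reacts asynchronously to each incoming event.
                DeliverCallback onEvent = (tag, delivery) ->
                    System.out.println("event: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicConsume(QUEUE, true, onEvent, tag -> { });

                // Producer: publishes an event to the queue.
                channel.basicPublish("", QUEUE, null,
                        "patient fall detected".getBytes(StandardCharsets.UTF_8));

                Thread.sleep(500); // give the consumer time to fire before closing
            }
        }
    }

In the project's design, Esper would additionally sit behind the consumers as the complex event processor, correlating simple events into higher-level ones; that stage is omitted here.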
Abstract:
SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can cause a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications. The main differences between them are the radiation level and the possibility for maintenance.
The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages both layers concurrently in a coordinated way, and allows the redundancy level and dependability to be balanced. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the target FPGA development. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault-injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. Similarly to the simulation framework, fault-injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process, from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
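The simulation framework above relies on SystemC's event-driven kernel. As an illustration of the underlying principle only (not of the thesis's SystemC models), a toy discrete-event scheduler with hypothetical fault-injection and scrubbing events might look like this in Java:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Toy discrete-event simulation kernel, illustrating the event-driven
    // principle behind simulators such as SystemC's. This is not the thesis's
    // framework; all names here are invented for illustration.
    public class EventSim {
        interface Event { void fire(EventSim sim); }

        private record Scheduled(double time, Event event) { }

        private final PriorityQueue<Scheduled> queue =
            new PriorityQueue<>(Comparator.comparingDouble(Scheduled::time));
        private double now = 0.0;

        void schedule(double delay, Event e) { queue.add(new Scheduled(now + delay, e)); }

        void run() {
            // Advance simulated time event by event, in timestamp order.
            while (!queue.isEmpty()) {
                Scheduled s = queue.poll();
                now = s.time();
                s.event().fire(this);
            }
        }

        public static void main(String[] args) {
            EventSim sim = new EventSim();
            // Inject a configuration-memory upset at t=5 and a scrubbing pass at t=10.
            sim.schedule(5.0, s -> System.out.println("t=" + s.now + ": bit upset injected"));
            sim.schedule(10.0, s -> System.out.println("t=" + s.now + ": scrubber repairs frame"));
            sim.run();
        }
    }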
Abstract:
We reviewed the use of advanced display technologies for monitoring in anesthesia. Researchers are investigating displays that integrate information and that, in some cases, also deliver the results continuously to the anesthesiologist. Integrated visual displays reveal higher-order properties of patient state and speed up responses to events, but their benefits under an intensely time-shared load are unknown. Head-mounted displays seem to shorten the time to respond to changes, but their impact on peripheral awareness and attention is unknown. Continuous auditory displays extending pulse oximetry seem to shorten response times and improve the ability to time-share other tasks, but their integration into the already noisy operative environment still needs to be tested. We reviewed the advantages and disadvantages of the three approaches, drawing on findings from other fields, such as aviation, to suggest outcomes where there are still no results for the anesthesia context. Proving that advanced patient monitoring displays improve patient outcomes is difficult; a more realistic goal is probably to prove that such displays lead to better situational awareness, earlier responding, and less workload, all of which keep anesthesia practice away from the outer boundaries of safe operation.
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
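The abstract does not reproduce the estimator equations. For orientation only, the classical continuous-time Luenberger observer, the standard construction on which decentralised state estimation is typically built (a textbook result, not the thesis's specific derivation), is

\[ \dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L\bigl(y(t) - C\hat{x}(t)\bigr), \]

so the estimation error $e = x - \hat{x}$ obeys $\dot{e} = (A - LC)e$, and the gain $L$ is chosen to make $A - LC$ stable. In a decentralised scheme, each partition runs such an observer locally and the necessary coupling terms are exchanged over the communications network.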
Abstract:
A major application of computers has been to control physical processes in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process system community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
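For reference, the textbook Petri net firing rule against which SFC firing semantics are compared in such analyses (the standard definition, not the thesis's extended time-related model) is: a transition $t$ is enabled at marking $M$ iff every input place carries enough tokens, and firing $t$ yields the marking $M'$,

\[ \forall p \in {}^{\bullet}t:\; M(p) \ge W(p,t), \qquad M'(p) = M(p) - W(p,t) + W(t,p), \]

where $W$ denotes the arc weights and ${}^{\bullet}t$ the set of input places of $t$.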
Abstract:
∗ Thematic Harmonisation in Electrical and Information EngineeRing in Europe, Project Nr. 10063-CP-1-2000-1-PT-ERASMUS-ETNE.
Abstract:
This paper is devoted to the learning of event programming using Visual C# in specialized Informatics training in high schools. Some basic tools and technologies for implementing graphics and animation in C# are discussed, and two example problems are proposed.
Abstract:
In this work, the complexity profile is applied to business process modelling based on the Event-driven Process Chain (EPC) formalism. Since the complexity profile is a function of the mutual information between the elements of a system at each level of scale, its application to process modelling makes it possible to establish how much information the models capture and how that information is distributed across each possible sequence of their subprocesses. We first explore the qualitative characteristics of the profile according to how the diagram is structured, and the meaning hidden in the most basic structures from which a process can be built. Insofar as process modelling is carried out with the aim of optimizing the operation of a business, applying the complexity profile can also provide a quantitative measure of how much the modification of a subprocess affects the final result.
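For reference, the quantity the complexity profile is built on is the standard mutual information (notation assumed here, not taken from the paper):

\[ I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}. \]

The profile then tracks how much of this shared information is present at each scale of the system, which is what allows alternative sequences of subprocesses to be compared quantitatively.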
Abstract:
This study aims to determine whether announcements of changes of CEO and chairman of the board at companies listed on the Lima Stock Exchange (BVL) affect firm value in the days around the announcement. It was the early departure of Steve Jobs as CEO of Apple Inc., due to a terminal illness, that raised the question of how the shares of companies going through similar events perform. As is well known, the market punished Apple's stock on the day of Steve Jobs's death, with drops of more than 2% on the day of the announcement. Would developed and emerging markets behave the same way? Do CEO-change events trigger the same reactions in emerging countries? To our great surprise, we observed local general-management changes, such as at Backus & Johnston (3/9/2013), with no significant market effect; indeed, the stock did not trade until 19/9/2013. To answer the questions posed above, we applied the event analysis methodology, which has been used to test for abnormal returns around changes of CEO and chairman of the board in developed markets such as the US, the Netherlands, Australia and Spain, as well as in emerging markets such as Colombia, Chile and Mexico. Our study of the Peruvian market used a sample of the fifty companies whose shares trade most frequently on the BVL, considering all CEO and board chairman change events between 1992 and 2014. According to the results of the research, the abnormal returns around changes of CEO and chairman of the board are not statistically significant, so they could not be used by hedge funds to generate event-driven strategies. The predominant reasons are the high volatility of the results and the shallowness of a market with little liquidity, as well as the existence of relevant events that the market does not take into account.
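The event-study machinery referenced here is standard. Under the usual market-model specification (a textbook formulation; the thesis's exact estimation windows are not given in the abstract), the abnormal return of stock $i$ on day $t$ and its cumulative version over an event window $[t_1, t_2]$ are

\[ AR_{i,t} = R_{i,t} - (\hat{\alpha}_i + \hat{\beta}_i R_{m,t}), \qquad CAR_i(t_1,t_2) = \sum_{t=t_1}^{t_2} AR_{i,t}, \]

where $\hat{\alpha}_i$ and $\hat{\beta}_i$ are estimated over a pre-event window and $R_{m,t}$ is the market index return; the statistical significance of the CARs is what the study fails to find.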
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors, along with a review of existing proposals identifying their strengths and weaknesses, is carried out. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although this is the most straightforward approach, loading can result in an effective collapse, producing unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates how the proposed scheme reduces the number of transmissions in the network as data traffic load and network density grow, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, which allows vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis's primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
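The core XOR idea is compact: a node that broadcasts w1 XOR w2 lets any neighbour already holding one of the warnings recover the other, so one transmission serves two. A minimal Java sketch of the principle (fixed-length payloads assumed for simplicity; this is not the thesis's actual NETCODE encoding/decoding algorithms):

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    // Minimal illustration of XOR network coding for warning dissemination:
    // one coded packet serves two neighbours that each already hold one of
    // the two warnings. A sketch of the principle only.
    public class XorCoding {
        // XOR two equal-length payloads byte by byte.
        static byte[] xor(byte[] a, byte[] b) {
            byte[] out = new byte[a.length];
            for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
            return out;
        }

        public static void main(String[] args) {
            // Pad both warnings to a fixed 16-byte payload.
            byte[] w1 = Arrays.copyOf("ICE ON ROAD".getBytes(StandardCharsets.UTF_8), 16);
            byte[] w2 = Arrays.copyOf("CRASH AHEAD".getBytes(StandardCharsets.UTF_8), 16);

            byte[] coded = xor(w1, w2);   // one transmission instead of two

            // A neighbour holding w1 decodes w2 (and vice versa).
            byte[] decoded = xor(coded, w1);
            System.out.println(new String(decoded, StandardCharsets.UTF_8).trim()); // CRASH AHEAD
        }
    }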
Abstract:
Over the past years, the paradigm of component-based software engineering has become established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties of component-based systems, and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.