964 results for Event-Driven Programming
Abstract:
BACKGROUND: Ischemic stroke is a leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superiority of the efficacy of terutroban versus aspirin in the secondary prevention of cerebrovascular and cardiovascular events. METHODS AND RESULTS: The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged ≥55 years who have suffered an ischemic stroke (≤3 months) or a transient ischemic attack (≤8 days). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain a statistical power of 90%, this requires the inclusion of at least 18,000 patients in this event-driven trial. The first patient was randomized in February 2006. CONCLUSIONS: The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event.
Abstract:
The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens and governments for applications such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory, and low-power radios are very sensitive to noise, interference and multipath distortion. This article therefore proposes a routing protocol based on Routing by Energy and Link quality (REL) for IoT applications. To increase reliability and energy efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL proposes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small- and large-scale networks. The results show that REL increases network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate compared with well-known protocols.
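The abstract names three routing metrics — the end-to-end link quality estimate, residual energy, and hop count — without giving the exact combination rule. The sketch below illustrates one way such a weighted route-selection rule could look; the weights, field names, and normalisation are assumptions for illustration, not values taken from REL.

```python
# Illustrative route ranking combining the three metrics named in the abstract:
# end-to-end link quality, residual energy, and hop count. Weights and the
# normalisation below are assumptions for this sketch, not values from REL.
from dataclasses import dataclass

@dataclass
class Route:
    next_hop: str
    link_quality: float     # end-to-end estimate in [0, 1], higher is better
    residual_energy: float  # minimum residual energy along the path, in [0, 1]
    hop_count: int

def route_score(route: Route, w_lq: float = 0.5, w_en: float = 0.3, w_hc: float = 0.2) -> float:
    """Higher score = preferred route; hop count is normalised and inverted."""
    hop_penalty = 1.0 / (1.0 + route.hop_count)
    return w_lq * route.link_quality + w_en * route.residual_energy + w_hc * hop_penalty

def select_route(candidates: list[Route]) -> Route:
    return max(candidates, key=route_score)

if __name__ == "__main__":
    candidates = [
        Route("n3", link_quality=0.9, residual_energy=0.4, hop_count=5),
        Route("n7", link_quality=0.8, residual_energy=0.8, hop_count=6),
    ]
    print("selected next hop:", select_route(candidates).next_hop)
```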
Abstract:
BACKGROUND Idiopathic pulmonary fibrosis (IPF) is characterized by formation and proliferation of fibroblast foci. Endothelin-1 induces lung fibroblast proliferation and contractile activity via the endothelin A (ETA) receptor. OBJECTIVE To determine whether ambrisentan, an ETA receptor-selective antagonist, reduces the rate of IPF progression. DESIGN Randomized, double-blind, placebo-controlled, event-driven trial. (ClinicalTrials.gov: NCT00768300). SETTING Academic and private hospitals. PARTICIPANTS Patients with IPF aged 40 to 80 years with minimal or no honeycombing on high-resolution computed tomography scans. INTERVENTION Ambrisentan, 10 mg/d, or placebo. MEASUREMENTS Time to disease progression, defined as death, respiratory hospitalization, or a categorical decrease in lung function. RESULTS The study was terminated after enrollment of 492 patients (75% of intended enrollment; mean duration of exposure to study medication, 34.7 weeks) because an interim analysis indicated a low likelihood of showing efficacy for the end point by the scheduled end of the study. Ambrisentan-treated patients were more likely to meet the prespecified criteria for disease progression (90 [27.4%] vs. 28 [17.2%] patients; P = 0.010; hazard ratio, 1.74 [95% CI, 1.14 to 2.66]). Lung function decline was seen in 55 (16.7%) ambrisentan-treated patients and 19 (11.7%) placebo-treated patients (P = 0.109). Respiratory hospitalizations were seen in 44 (13.4%) and 9 (5.5%) patients in the ambrisentan and placebo groups, respectively (P = 0.007). Twenty-six (7.9%) patients who received ambrisentan and 6 (3.7%) who received placebo died (P = 0.100). Thirty-two (10%) ambrisentan-treated patients and 16 (10%) placebo-treated patients had pulmonary hypertension at baseline, and analysis stratified by the presence of pulmonary hypertension revealed similar results for the primary end point. LIMITATION The study was terminated early. CONCLUSION Ambrisentan was not effective in treating IPF and may be associated with an increased risk for disease progression and respiratory hospitalizations. PRIMARY FUNDING SOURCE Gilead Sciences.
Abstract:
Calmodulin (CaM) is a ubiquitous Ca(2+) buffer and second messenger that affects cellular functions as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca(2+)-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD requires activation of Ca(2+)-CaM-dependent enzymes: Ca(2+)/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet it remains unclear how Ca(2+) and CaM produce these two opposing effects, LTP and LTD. CaM binds 4 Ca(2+) ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca(2+) and its downstream targets. This may suggest that each lobe of CaM differentially responds to Ca(2+) signal patterns. Here, we use a novel event-driven particle-based Monte Carlo simulation and statistical point pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca(2+)-CaM interaction at the single-molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nanoscale domain of activation that is highly sensitive to the location of Ca(2+) channels and to the microscopic injection rate of Ca(2+) ions. We also demonstrate that Ca(2+) saturation takes place via two different pathways depending on the Ca(2+) injection rate, one dominated by the N-terminal lobe and the other by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca(2+) sensors that can differentially transduce Ca(2+) influx to downstream targets. We discuss a possible role of the N-terminal lobe-specific Ca(2+)-CaM nanodomain in the CaMKII activation required for the induction of synaptic plasticity.
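As a rough illustration of what an event-driven stochastic simulation of lobe-specific Ca(2+) binding can look like, the sketch below runs a Gillespie-style loop with different assumed on/off rates for the N- and C-lobes. It is a well-mixed, non-spatial toy, so it does not reproduce the particle-based, spatially resolved simulation of the paper; all rate constants are assumptions.

```python
# Minimal Gillespie-style, event-driven sketch of lobe-specific Ca2+ binding to
# calmodulin. Rates and the two-sites-per-lobe simplification are illustrative
# assumptions; the paper's simulation is particle-based and spatially resolved.
import math
import random

# (k_on [per ion per s], k_off [per s]) per lobe -- assumed values for illustration.
RATES = {"N": (5.0, 50.0), "C": (2.0, 5.0)}   # N-lobe: fast on/off; C-lobe: slower
SITES_PER_LOBE = 2

def simulate(ca_ions: int = 20, t_end: float = 1.0, seed: int = 1):
    rng = random.Random(seed)
    bound = {"N": 0, "C": 0}
    free_ca = ca_ions
    t = 0.0
    while t < t_end:
        # Propensities for every possible binding/unbinding event.
        events = []
        for lobe, (k_on, k_off) in RATES.items():
            events.append((lobe, +1, k_on * free_ca * (SITES_PER_LOBE - bound[lobe])))
            events.append((lobe, -1, k_off * bound[lobe]))
        total = sum(a for _, _, a in events)
        if total == 0:
            break
        # Draw the time to the next event, then pick which event fires.
        t += -math.log(rng.random()) / total
        r = rng.random() * total
        for lobe, delta, a in events:
            r -= a
            if r <= 0:
                bound[lobe] += delta
                free_ca -= delta
                break
    return bound

print(simulate())
```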
Abstract:
BACKGROUND: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (from 42 total) while the worst had eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.
Abstract:
The DNDC (DeNitrification and DeComposition) model was first developed by Li et al. (1992) as a rain-event-driven, process-oriented simulation model for nitrous oxide, carbon dioxide and nitrogen gas emissions from agricultural soils in the U.S. Over the last 20 years, the model has been modified and adapted by various research groups around the world to suit specific purposes and circumstances. The Global Research Alliance Modelling Platform (GRAMP) is a UK-led initiative to establish a purposeful and credible web-based platform initially aimed at users of the DNDC model. With the aim of improving predictions of soil C and N cycling in the context of climate change, the objectives of GRAMP are to: 1) document the existing versions of the DNDC model; 2) create a family tree of the individual DNDC versions; 3) provide information on model use and development; and 4) identify strengths, weaknesses and potential improvements for the model.
Abstract:
Business information has become a critical asset for companies, and it has even more value when obtained and exploited in real time. This paper analyses how to integrate this information into an existing banking Enterprise Architecture following an event-driven approach, a study that entails three main issues: the definition of business events; the specification of a reference architecture, which identifies the specific integration points; and the description of a governance approach to manage the new elements. All the proposed solutions have been validated with a proof-of-concept test bed in an open source environment, based on a case study from the banking sector that allows an operational validation to be carried out as well as ensuring compliance with non-functional requirements, with a particular focus on performance.
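To make the first two ingredients concrete — a business event definition and an integration point that reacts to it — here is a minimal Python sketch with an in-process dispatcher. The event fields and the toy bus are illustrative assumptions; the paper's proof of concept relied on open source middleware, not on this dispatcher.

```python
# Illustrative business-event definition plus a toy publish/subscribe dispatcher.
# Field names, the event type and the in-process bus are assumptions; they are
# not taken from the reference architecture described in the paper.
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class BusinessEvent:
    event_type: str          # e.g. "payment.rejected" (hypothetical)
    source: str              # producing system inside the architecture
    payload: dict
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[BusinessEvent], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[BusinessEvent], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event: BusinessEvent) -> None:
        for handler in self._subscribers[event.event_type]:
            handler(event)

bus = EventBus()
bus.subscribe("payment.rejected", lambda e: print("alert raised:", e.payload))
bus.publish(BusinessEvent("payment.rejected", "core-banking", {"account": "123", "amount": 250.0}))
```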
Abstract:
As society advances, the amount of data stored in information systems and processed by software applications and servers grows exponentially. Moreover, new technologies have entrusted their development to the internationally connected network: the Internet. As a consequence, machine-to-machine (M2M) connections over the Internet have been exploited and the concept of the "Internet of Things" has emerged: a network of devices and terminals in which any everyday object can establish connections with other objects or with a smartphone through the services deployed on that network. However, these new data and events must be processed in real time and efficiently in order to react to any situation. Event-driven architectures address the problem of understanding real-time message exchange. An EDA (Event-Driven Architecture) thus makes it possible to implement a software architecture with an exhaustive definition of the messages, notifying the user of the events that have occurred around them and of the actions taken in response. This final degree project (TFG) focuses on the study of event-driven architectures, contrasting them with the other main architectural patterns. The comparison is made in terms of the non-functional requirements of each, such as security against external threats. The main objective is the study of EDA (Event-Driven Architecture) and its relationship with the Internet of Things, which allows any device to access the services deployed on that network via the Internet. The aim of the TFG is to observe and verify the advantages of this architecture, owing to its immediate nature, through the asynchronous sending and receiving of messages in real time. A state-of-the-art survey of these software architecture patterns has also been carried out, as well as of the IoT (Internet of Things) network and its services. In addition, alongside this TFG a simulation of a complete EDA has been developed, with all of its elements — producers, consumers and a complex event processor — plus data visualization. To highlight the services provided by the IoT network and their relationship with an EDA, a simulation of a personalized tele-assistance service has been implemented. This proof of concept helped to reinforce the learning and to understand more precisely the knowledge acquired through the theoretical study of an EDA. It was implemented in the Java programming language using the open source solutions RabbitMQ and Esper, tied together by the AMQP standard to complete the message transfer correctly.
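The simulation described above was written in Java with RabbitMQ, Esper and AMQP. As a hedged illustration of the same producer/consumer flow, the sketch below uses the Python pika AMQP client against a local RabbitMQ broker — a deliberate swap of language and client library; the queue name and payload are made up, and a running broker is assumed.

```python
# Illustrative producer/consumer pair for a hypothetical tele-assistance queue,
# using the pika AMQP client (Python) instead of the Java stack used in the TFG.
# Assumes a RabbitMQ broker reachable on localhost.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="teleassistance.events")

# Producer: publish a sensor event as a JSON string.
channel.basic_publish(exchange="",
                      routing_key="teleassistance.events",
                      body='{"sensor": "fall_detector", "value": 1}')

# Consumer: react to each event as it arrives (start_consuming blocks).
def on_event(ch, method, properties, body):
    print("event received:", body.decode())

channel.basic_consume(queue="teleassistance.events",
                      on_message_callback=on_event,
                      auto_ack=True)
channel.start_consuming()
```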
Abstract:
SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connection). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be partially reconfigured without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility of maintenance. 
The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows the redundancy level and dependability to be balanced. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow the Fault Manager design space to be explored, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping of software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are included. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
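The thesis's simulation framework is built on SystemC; purely as an illustration of the event-driven idea behind it — random upsets hitting configuration frames while a periodic scrubber repairs them — here is a language-agnostic sketch using a Python event queue. Frame count, upset rate and scrub period are assumptions, not figures from the thesis, and the sketch models none of the JTAG/SelectMAP detail of the real framework.

```python
# Illustrative heapq-based discrete-event loop: random upsets corrupt
# configuration frames; a periodic scrubber rewrites every corrupted frame it
# finds. All parameters are assumed values, purely for illustration.
import heapq
import random

def simulate(frames: int = 1000, upset_rate: float = 2.0, scrub_period: float = 0.5,
             t_end: float = 10.0, seed: int = 0):
    rng = random.Random(seed)
    corrupted: set[int] = set()
    events: list[tuple[float, str]] = []
    heapq.heappush(events, (rng.expovariate(upset_rate), "upset"))
    heapq.heappush(events, (scrub_period, "scrub"))
    repaired = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > t_end:
            break
        if kind == "upset":
            corrupted.add(rng.randrange(frames))            # a frame gets corrupted
            heapq.heappush(events, (t + rng.expovariate(upset_rate), "upset"))
        else:                                               # scrub cycle
            repaired += len(corrupted)
            corrupted.clear()
            heapq.heappush(events, (t + scrub_period, "scrub"))
    return {"repaired": repaired, "still_corrupted": len(corrupted)}

print(simulate())
```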
Abstract:
We reviewed the use of advanced display technologies for monitoring in anesthesia. Researchers are investigating displays that integrate information and that, in some cases, also deliver the results continuously to the anesthesiologist. Integrated visual displays reveal higher-order properties of patient state and speed responses to events, but their benefits under an intensely time-shared load are unknown. Head-mounted displays seem to shorten the time to respond to changes, but their impact on peripheral awareness and attention is unknown. Continuous auditory displays extending pulse oximetry seem to shorten response times and improve the ability to time-share other tasks, but their integration into the already noisy operative environment still needs to be tested. We reviewed the advantages and disadvantages of the three approaches, drawing on findings from other fields, such as aviation, to suggest outcomes where there are still no results for the anesthesia context. Proving that advanced patient monitoring displays improve patient outcomes is difficult, and a more realistic goal is probably to prove that such displays lead to better situational awareness, earlier responding, and less workload, all of which keep anesthesia practice away from the outer boundaries of safe operation.
Abstract:
In this paper we describe a novel, extensible visualization system currently under development at Aston University. We introduce modern programming methods, such as the use of data-driven programming, design patterns, and the careful definition of interfaces to allow easy extension using plug-ins, to 3D landscape visualization software. We combine this with modern developments in computer graphics, such as vertex and fragment shaders, to create an extremely flexible, extensible, real-time, near-photorealistic visualization system. In this paper we show the design of the system and the main sub-components. We stress the role of modern programming practices and illustrate the benefits these bring to 3D visualization. © 2006 Springer-Verlag Berlin Heidelberg.
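To illustrate the kind of plug-in interface the abstract alludes to, the sketch below defines a minimal layer-plugin contract and registry in Python; the class and method names are assumptions, not the actual API of the Aston system.

```python
# Hypothetical plug-in contract and registry, sketching the "carefully defined
# interfaces for extension via plug-ins" idea; names are illustrative only.
from abc import ABC, abstractmethod

class LayerPlugin(ABC):
    """Contract every landscape-layer plug-in must satisfy."""

    name: str

    @abstractmethod
    def load(self, scene: dict) -> None:
        """Attach this layer's geometry/shader setup to the scene description."""

    @abstractmethod
    def update(self, dt: float) -> None:
        """Advance any per-frame state (animation, level-of-detail selection, ...)."""

PLUGINS: dict[str, type[LayerPlugin]] = {}

def register(cls: type[LayerPlugin]) -> type[LayerPlugin]:
    PLUGINS[cls.name] = cls
    return cls

@register
class TerrainLayer(LayerPlugin):
    name = "terrain"
    def load(self, scene: dict) -> None:
        scene.setdefault("layers", []).append(self.name)
    def update(self, dt: float) -> None:
        pass

scene: dict = {}
for plugin_cls in PLUGINS.values():
    plugin_cls().load(scene)
print(scene)   # {'layers': ['terrain']}
```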
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
Abstract:
A major application of computers has been to control physical processes in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in the SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
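Because the thesis maps SFC steps and transitions onto a Petri net with formally defined firing rules, a minimal place/transition net with the standard enabling and firing rule is sketched below as an illustration; the extended, time-related model proposed in the thesis adds timing behaviour that this sketch omits, and the step/transition names are made up.

```python
# Minimal place/transition net with the standard enabling and firing rule, as a
# rough illustration of the Petri-net view of SFC steps (places) and transitions.
class PetriNet:
    def __init__(self, marking: dict[str, int]):
        self.marking = dict(marking)                 # tokens per place (SFC: active steps)
        self.transitions: dict[str, tuple[list[str], list[str]]] = {}

    def add_transition(self, name: str, inputs: list[str], outputs: list[str]) -> None:
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name: str) -> bool:
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name: str) -> None:
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"step_idle": 1})
net.add_transition("start_cycle", ["step_idle"], ["step_clamp", "step_feed"])  # parallel branch
net.fire("start_cycle")
print(net.marking)   # {'step_idle': 0, 'step_clamp': 1, 'step_feed': 1}
```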
Abstract:
In this work, the complexity profile is applied to business process modelling based on the Event-driven Process Chain (EPC) formalism. Since the complexity profile is a function of the mutual information between the elements of a system at each scale, applying it to process models makes it possible to establish how much information those models capture and how that information is distributed across each possible sequence of their subprocesses. We first explore the qualitative characteristics of the profile according to how the diagram is structured, and the meaning hidden in the most basic structures from which a process can be built. Insofar as process modelling is carried out with the goal of optimizing the operation of a business, applying the complexity profile can also offer a quantitative measure of how much the modification of a subprocess affects the final result.
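The quantitative ingredient here is mutual information, so the sketch below illustrates the calculation for one simple case: the mutual information between consecutive activities observed in execution traces of a process. The traces are invented, and a full complexity profile would repeat such calculations at every scale of the model rather than only between adjacent activities.

```python
# Illustrative mutual information I(X;Y) between consecutive activities in
# observed traces of a process; the traces are made-up example data.
from collections import Counter
from math import log2

def mutual_information(pairs: list[tuple[str, str]]) -> float:
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

traces = [
    ["receive_order", "check_credit", "approve", "ship"],
    ["receive_order", "check_credit", "reject"],
    ["receive_order", "check_credit", "approve", "ship"],
]
consecutive = [(a, b) for trace in traces for a, b in zip(trace, trace[1:])]
print(f"I(current; next) = {mutual_information(consecutive):.3f} bits")
```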