926 results for Control-flow
Abstract:
Business Process Management describes a holistic management approach for the systematic design, modeling, execution, validation, monitoring and improvement of organizational business processes. Traditionally, most attention within this community has been given to control-flow aspects, i.e., the ordering and sequencing of business activities, often in isolation from the context in which these activities occur. In this paper, we propose an approach that allows executable process models to be integrated with Geographic Information Systems. This approach enables process models to explicitly take geospatial and other geographic aspects into account, both during the modeling phase and the execution phase. We contribute a structured modeling methodology, based on the well-known Business Process Model and Notation standard, which is formalized by means of a mapping to executable Colored Petri nets. We illustrate the feasibility of our approach by means of a sustainability-focused case example of a process with important ecological concerns.
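To make the idea of geospatially aware process execution concrete, here is a minimal Python sketch (not the paper's BPMN-to-Colored-Petri-net formalization) of a routing decision whose branch condition is a geospatial predicate; the bounding box, activity names, and case structure are illustrative assumptions.

```python
# A minimal sketch of a geospatially guarded routing decision: an XOR-style
# gateway picks a branch based on where the case's site lies, rather than on
# control-flow data alone. All names and coordinates are illustrative.

PROTECTED_AREA = (4.0, 50.0, 6.0, 52.0)  # toy bounding box: lon/lat min/max

def in_protected_area(lon, lat):
    lon_min, lat_min, lon_max, lat_max = PROTECTED_AREA
    return lon_min <= lon <= lon_max and lat_min <= lat <= lat_max

def route_transport(case):
    """Geospatially aware gateway: choose the next activity from the
    case's planned site location."""
    if in_protected_area(*case["site"]):
        return "request_environmental_permit"
    return "schedule_transport"

print(route_transport({"site": (5.1, 51.2)}))  # request_environmental_permit
print(route_transport({"site": (8.0, 53.0)}))  # schedule_transport
```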
Abstract:
As a result of the more distributed nature of organisations and the inherently increasing complexity of their business processes, a significant effort is required for the specification and verification of those processes. The composition of activities into a business process that accomplishes a specific organisational goal has primarily been a manual task. Automated planning is a branch of artificial intelligence (AI) in which activities are selected and organised by anticipating their expected outcomes with the aim of achieving some goal. As such, automated planning would seem to be a natural fit for the BPM domain to automate the specification of control flow. A number of attempts have been made to apply automated planning to the business process and service composition domain in different stages of the BPM lifecycle. However, a unified adoption of these techniques throughout the BPM lifecycle is missing. We therefore propose a new intention-centric BPM paradigm, which aims to minimise the specification effort by exploiting automated planning techniques to achieve a pre-stated goal. This paper provides a vision of the future possibilities of enhancing BPM with automated planning. A research agenda is presented, which provides an overview of the opportunities and challenges for the exploitation of automated planning in BPM.
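To illustrate the planning idea in miniature, the hedged Python sketch below derives a control flow automatically by forward search over activity pre- and postconditions toward a stated goal; the activities and conditions are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of planning-based control-flow derivation: breadth-first
# forward search selects and orders activities by their expected outcomes
# until the goal holds. Activity names and conditions are illustrative.
from collections import deque

ACTIVITIES = {
    "receive_order": (set(), {"order_received"}),
    "check_credit":  ({"order_received"}, {"credit_ok"}),
    "ship_goods":    ({"order_received", "credit_ok"}, {"shipped"}),
    "send_invoice":  ({"shipped"}, {"invoiced"}),
}

def plan(goal, state=frozenset()):
    """Return the shortest activity sequence that establishes the goal."""
    queue = deque([(state, [])])
    seen = {state}
    while queue:
        state, seq = queue.popleft()
        if goal <= state:
            return seq
        for act, (pre, post) in ACTIVITIES.items():
            if pre <= state:
                nxt = frozenset(state | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, seq + [act]))
    return None

print(plan({"invoiced"}))
# ['receive_order', 'check_credit', 'ship_goods', 'send_invoice']
```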
Abstract:
By definition, regulatory rules (called norms in a legal context) are intended to elicit specific behaviour from business processes, and might be relevant to the whole or part of a business process. They can impose conditions on different aspects of process models, e.g., control-flow, data, and resources. Based on their rule sets, norms can be classified into various classes and sub-classes according to their effects. This paper presents an abstract framework consisting of a list of norms and a generic compliance checking approach built on the idea of (possible) executions of processes. The proposed framework is independent of any existing formalism, and provides a conceptually rich and exhaustive ontology and semantics of the norms needed for business process compliance checking. Possible uses of the proposed framework include comparing different compliance management frameworks (CMFs).
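As a toy illustration of compliance checking over possible executions, the sketch below checks a single achievement obligation against an execution trace; the norm pattern, activity names, and trace are illustrative assumptions, not the paper's ontology or semantics.

```python
# A minimal sketch of trace-based compliance checking: an achievement
# obligation is violated when its trigger occurs but the required activity
# never follows before the trace ends. Norm and trace are illustrative.

def check_obligation(trace, trigger, required):
    """True iff every occurrence of `trigger` is eventually followed
    by `required` in the execution trace."""
    pending = False
    for activity in trace:
        if activity == trigger:
            pending = True
        if activity == required:
            pending = False
    return not pending

trace = ["receive_claim", "assess_claim", "notify_customer"]
print(check_obligation(trace, "assess_claim", "notify_customer"))  # True
print(check_obligation(trace, "assess_claim", "archive_claim"))    # False
```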
Abstract:
In this paper we illustrate a set of features of the Apromore process model repository for analyzing business process variants. Two types of analysis are provided: one is static and based on differences in the process control flow; the other is dynamic and based on differences in process behavior between the variants. These features combine techniques for the management of large process model collections with those for mining process knowledge from process execution logs. The tool demonstration will be useful for researchers and practitioners working on large process model collections and process execution logs, and specifically for those with an interest in understanding, managing and consolidating business process variants both within and across organizational boundaries.
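As a rough illustration of the dynamic, behavior-based comparison, the sketch below contrasts the directly-follows relations mined from the execution logs of two variants; the logs are illustrative assumptions, not Apromore features or data.

```python
# A minimal sketch of a behavior-level variant comparison: mine the
# directly-follows relation from each variant's log and diff the two sets.

def directly_follows(log):
    """Set of (a, b) pairs where b directly follows a in some trace."""
    return {(a, b) for trace in log for a, b in zip(trace, trace[1:])}

variant_a = [["register", "check", "approve"], ["register", "check", "reject"]]
variant_b = [["register", "approve"]]

df_a, df_b = directly_follows(variant_a), directly_follows(variant_b)
print("only in A:", df_a - df_b)  # behavior specific to variant A
print("only in B:", df_b - df_a)  # behavior specific to variant B
```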
Abstract:
Better operational control of water networks can help reduce leakage, maintain pressure, and control flow. Proportional integral derivative (PID) controllers, with proper fine-tuning, can help water utility operators achieve targets faster without creating undue transients. The authors compared three tuning methods, in different test situations, involving flow and level control to different reservoirs. Although target values were reached with all three tuning methods, the methods' performances varied significantly. The lowest performer among the three was the method most widely used in the industry: standard Ziegler-Nichols tuning. Offline tuning by genetic algorithms achieved better results, and a fuzzy logic-based online tuning approach, the FZPID controller, achieved the best control. The FZPID controller had fewer overshoots and took significantly less time to tune the gains for each problem. This new tuning approach for PID controllers can be applied to a variety of problems and can increase the performance of water networks of any size and structure.
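For readers unfamiliar with PID control, the hedged sketch below shows a minimal discrete PID loop; the three tuning methods compared in the article differ only in how the gains Kp, Ki, and Kd are chosen. The gain values and the toy tank model are illustrative assumptions, not taken from the study.

```python
# A minimal discrete PID controller sketch: the output is a weighted sum of
# the error (P), its accumulated history (I), and its rate of change (D).

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}
    def step(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt                  # I term
        derivative = (error - state["prev_error"]) / dt  # D term
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# drive a crude first-order tank-level model toward a 2.0 m setpoint
pid = make_pid(kp=1.2, ki=0.4, kd=0.05, dt=1.0)
level = 0.0
for _ in range(30):
    u = pid(2.0, level)            # controller output, e.g. a valve command
    level += 0.1 * (u - level)     # toy leaky-tank plant response
print(round(level, 2))             # close to the 2.0 m setpoint
```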
Abstract:
Donor-doped n-BaTiO3 polycrystalline ceramics show a strong negative temperature coefficient of resistivity below the orthorhombic-rhombohedral phase transition point, rising from 10^2-10^3 Ω cm at 190 K to 10^10-10^13 Ω cm at roughly 50 K and below, with a thermal coefficient of resistance α = 20-23% K^-1. Stable thermal sensors for low-temperature applications are realized therefrom. The negative temperature coefficient of resistivity region can be modified by substituting isovalent ions in the lattice. Highly nonlinear current-voltage (I-V) curves are observed at low temperatures, with a voltage maximum followed by negative differential resistance. The I-V curves are sensitive to dissipation, so cryogenic sensors can be fabricated for liquid level control, flow rate monitoring, radiation detection, or in-rush voltage limitation.
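For reference, the thermal coefficient of resistance quoted above follows the standard textbook definition, and its magnitude is consistent with the quoted resistivity span (this consistency check is ours, not the paper's):

```latex
% Standard definition of the temperature coefficient of resistance:
\alpha = \frac{1}{\rho}\,\frac{d\rho}{dT}
% Sanity check: a constant |\alpha| \approx 0.20~\mathrm{K^{-1}} integrated
% over the ~140 K interval from 190 K down to ~50 K gives a resistivity
% ratio of e^{0.20 \times 140} = e^{28} \approx 10^{12}, consistent with the
% quoted rise from ~10^{2\text{-}3} to ~10^{10\text{-}13}~\Omega\,\mathrm{cm}.
```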
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
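The mapping heuristic can be pictured with a small greedy sketch: assign each kernel to the device with the lower estimated cost once cross-device transfer penalties for its inputs are added. Kernel names, costs, and the transfer model below are illustrative assumptions, not MEGHA's actual heuristics.

```python
# A minimal sketch of greedy CPU/GPU kernel placement: compare per-device
# execution cost plus the transfer cost incurred when a kernel's inputs
# were produced on the other device. All numbers are illustrative.

kernels = [
    # (name, cpu_cost, gpu_cost, inputs)
    ("scalar_loop", 1.0, 5.0, []),
    ("matmul",      9.0, 1.5, ["scalar_loop"]),
    ("reduce",      2.0, 1.0, ["matmul"]),
]
TRANSFER = 0.8  # modeled cost of moving one dependency across devices

def map_kernels(ks):
    placement = {}
    for name, cpu, gpu, inputs in ks:  # kernels in topological order
        # penalize each input that currently lives on the other device
        cpu_total = cpu + sum(TRANSFER for i in inputs if placement[i] == "gpu")
        gpu_total = gpu + sum(TRANSFER for i in inputs if placement[i] == "cpu")
        placement[name] = "cpu" if cpu_total <= gpu_total else "gpu"
    return placement

print(map_kernels(kernels))
# {'scalar_loop': 'cpu', 'matmul': 'gpu', 'reduce': 'gpu'}
```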
Abstract:
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
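A toy sketch of the underlying intuition: in batch programs, code regions control dependent on a transaction-type condition are natural service candidates. The statement list and guard representation below are illustrative assumptions, not the paper's formal control-flow/data-flow characterization.

```python
# A minimal sketch of conditional-region-based service identification:
# group statements by the transaction-type guard they execute under;
# each group is one candidate "service". The toy IR is illustrative.

# (statement, guard under which it executes; None = unconditional)
statements = [
    ("read_record",      None),
    ("validate_deposit", "type == 'DEPOSIT'"),
    ("post_deposit",     "type == 'DEPOSIT'"),
    ("check_balance",    "type == 'WITHDRAW'"),
    ("post_withdrawal",  "type == 'WITHDRAW'"),
    ("write_ledger",     None),
]

def candidate_services(stmts):
    """One candidate service per distinct transaction-type guard."""
    services = {}
    for stmt, guard in stmts:
        if guard is not None:
            services.setdefault(guard, []).append(stmt)
    return services

for guard, body in candidate_services(statements).items():
    print(f"service under {guard}: {body}")
```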
Abstract:
Control-flow checking is one of the effective means of defending against single-event upsets. The current mainstream approach uses embedded signature techniques, but these introduce too many checking instructions, degrading program efficiency. This paper uses a basic-block coalescing technique that, starting from the original basic blocks, repartitions them under a suitably chosen constraint so as to reduce the number of checking instructions introduced. Performance comparisons on eight common benchmark algorithms show that the method effectively improves target-program efficiency while keeping soft-error detection coverage essentially unchanged.
Abstract:
As microelectronic devices grow more complex, the effect of space radiation on the correctness of computer programs is becoming increasingly evident. In general these effects are not permanent but transient faults. Whether in space-borne information processing systems and embedded real-time control systems, or in computer clusters and high-performance supercomputers, erroneous outputs can lead to catastrophic consequences.

Traditional high-reliability systems meet their dependability requirements with radiation-hardened components and redundant hardware, but such systems are expensive and lag behind today's commercial off-the-shelf (COTS) parts in performance. To compensate for the limited fault tolerance of COTS components, software fault-tolerance techniques can effectively improve the reliability of a computer system without changing its hardware.

At the software level, transient faults manifest mainly as control-flow errors and data-flow errors; this thesis focuses on tolerating control-flow errors. Software-implemented control-flow fault tolerance inserts redundant fault-tolerance logic at compile time so that control-flow errors can be detected and handled while the program executes.

The main problem control-flow fault tolerance must solve is minimizing the system overhead introduced by the redundant logic while preserving fault-tolerance capability. This thesis studies control-flow fault tolerance from several angles: the basic concepts of control-flow errors, the choice of the fault-tolerance unit, the construction of signature information, and the placement of signature points and check points. The main contributions are:

1. Common control-flow fault-tolerance methods are analyzed and compared, and their strengths and weaknesses are explained.

2. Control-flow errors are classified and, on that basis, a control-flow fault-tolerance method based on related predecessor basic blocks (CFCLRB) is proposed.

3. A signature-flow model is proposed, together with a control-flow fault-tolerance method based on it (CFCSF). The method detects inter-basic-block control-flow errors with low time overhead, low space overhead, and high error coverage. It can flexibly insert and remove signature points and check points according to the required fault-tolerance granularity, making it highly extensible, and it can handle control flow that is hard to determine at compile time, such as dynamic function pointers.

4. The above methods are implemented at the assembly-instruction level, and the widely used control-flow fault-tolerance methods Control Flow Checking by Software Signatures (CFCSS) and Control-flow Error Detection through Assertions (CEDA) are implemented for comparison. By inserting redundant instruction logic, fault tolerance is added to the original programs.

5. Control-flow error injection is implemented on top of the PIN tool, and CFCLRB, CFCSF, CFCSS, and CEDA are compared under the same experimental conditions. The experiments show that CFCLRB incurs 26.9% time overhead and 27.6% space overhead, raising error coverage from 66.50% for the unprotected program to 97.32%. CFCSF incurs 14.7% time overhead and 22.1% space overhead, raising error coverage from 66.50% to 96.79%. Compared with CFCSS, CFCSF reduces time overhead from 37.2% to 14.7% and space overhead from 31.2% to 22.1% while improving error coverage from 95.16% to 96.79%. Compared with CEDA, it reduces time overhead from 26.9% to 14.7% and space overhead from 27.1% to 22.1%, with error coverage dropping only slightly, from 97.39% to 96.79%.

Finally, the thesis gives an outlook on future research directions for control-flow fault tolerance.
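For context, here is a simplified Python sketch of the CFCSS baseline mentioned above (signature checking in the style of Oh et al.); it assumes each block has a single designated predecessor, whereas the full method also handles branch fan-in with an extra run-time adjusting signature.

```python
# A simplified CFCSS-style sketch: each basic block gets a compile-time
# signature; a runtime register is XOR-updated on every transfer and
# compared against the expected value. Block names are illustrative.

class ControlFlowError(Exception):
    pass

SIG = {"A": 0b0001, "B": 0b0010, "C": 0b0100}  # compile-time signatures
DIFF = {"B": SIG["A"] ^ SIG["B"],              # legal edge A -> B
        "C": SIG["B"] ^ SIG["C"]}              # legal edge B -> C

G = SIG["A"]  # runtime signature register, set at the entry block A

def enter_block(name):
    """Check inserted at the top of each basic block."""
    global G
    G ^= DIFF[name]          # equals SIG[name] only if control arrived
    if G != SIG[name]:       # from the designated legal predecessor
        raise ControlFlowError(f"illegal control transfer into {name}")

enter_block("B")   # legal A -> B: passes
enter_block("C")   # legal B -> C: passes
# an illegal jump such as A -> C would leave G != SIG["C"] and raise
```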
Abstract:
High system reliability is an important metric in the aerospace field. Because of the special nature of the space environment, radiation and high-energy particles can cause transient errors in computer systems. Such errors, called soft errors, strongly affect aerospace devices and severely degrade system reliability. Detecting and guarding against soft errors is therefore one of the important research directions for aerospace systems; approaches include hardware protection and error detection, hybrid hardware/software error detection, and purely software-based error detection. With the widespread use of commercial off-the-shelf devices, software-based soft-error detection methods suited to them have been studied in depth, and among these, control-flow checking is one of the effective means of defending against single-event upsets. The current mainstream approach uses embedded signature techniques, but these introduce too many checking instructions, degrading program efficiency. Starting from the features common to control-flow checking techniques, this paper analyzes the cause of this inefficiency: the constraints in the definition of a basic block produce too many basic blocks, so the code-injection step inserts too many comparison and jump instructions, which degrades program efficiency. To address this, the paper proposes a basic-block coalescing method based on source-code analysis. By relaxing the constraints in the basic-block definition, each block under the new definition can hold more instructions, reducing the number of injected checking instructions and improving efficiency; moreover, existing control-flow checking methods can be applied directly, without modification, under the new definition. The method repartitions basic blocks under a suitably chosen constraint, reducing the checking instructions introduced, without modifying the benchmark source code or the control-flow checking methods themselves. The method is validated with three control-flow checking methods, ECCA, CFCSS, and RSCFC: using these three methods under different constraint values, soft-error coverage and efficiency are measured on benchmarks of eight common algorithms. Repeated experiments show that the method improves the efficiency of the detection algorithms while keeping soft-error detection coverage essentially unchanged. Alongside the optimization of the control-flow checking algorithms, this work also delivers the supporting tooling, including a control-flow analysis tool, simulator-based fault injection, and a tool for measuring code-fragment execution time, which are used to evaluate and test the optimized algorithms.
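One simplified way to realize the coalescing idea is sketched below: merge a block into its unique successor whenever that successor has no other predecessors and the merged size respects a chosen constraint. The CFG encoding and the constraint value are illustrative assumptions, not the paper's source-level method.

```python
# A minimal basic-block coalescing sketch: fewer blocks means fewer injected
# signature checks for methods like ECCA, CFCSS, or RSCFC. The toy CFG and
# the MAX_INSNS constraint are illustrative assumptions.

MAX_INSNS = 12  # hypothetical constraint on merged block size

def coalesce(blocks, succs):
    """blocks: name -> instruction count; succs: name -> successor list."""
    preds = {b: [] for b in blocks}
    for b, ss in succs.items():
        for s in ss:
            preds[s].append(b)
    changed = True
    while changed:
        changed = False
        for b in list(blocks):
            ss = succs.get(b, [])
            # merge b with its unique successor s when s has only b as
            # predecessor and the merged block respects the constraint
            if len(ss) == 1:
                s = ss[0]
                if s != b and preds[s] == [b] and blocks[b] + blocks[s] <= MAX_INSNS:
                    blocks[b] += blocks.pop(s)
                    succs[b] = succs.pop(s, [])
                    for t in succs[b]:
                        preds[t] = [b if p == s else p for p in preds[t]]
                    del preds[s]
                    changed = True
                    break
    return blocks

print(coalesce({"A": 3, "B": 4, "C": 5}, {"A": ["B"], "B": ["C"]}))
# {'A': 12}: one block, one signature check instead of three
```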
Abstract:
We describe a GB parser implemented along the lines of those written by Fong [4] and Dorr [2]. The phrase structure recovery component is an implementation of Tomita's generalized LR parsing algorithm (described in [10]), with recursive control flow (similar to Fong's implementation). The major principles implemented are government, binding, bounding, trace theory, case theory, θ-theory, and barriers. The particular version of GB theory we use is that described by Haegeman [5]. The parser is minimal in the sense that it implements the major principles needed in a GB parser, and has fairly good coverage of linguistically interesting portions of the English language.
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing, and data entropy analysis.
Abstract:
The inherent difficulty of thread-based shared-memory programming has recently motivated research in high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overheads, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
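The division of labor can be sketched as follows: tasks whose declared memory footprints are statically proven disjoint need no dynamic dependence check, while the rest fall back to the runtime. The footprint encoding and task annotations below are illustrative assumptions, not SCOOP's actual analyses or syntax.

```python
# A minimal sketch of the static side of such a hybrid scheme: elide the
# runtime dependence check when two tasks' footprints provably cannot
# conflict. Footprints are (base, offset, length) tuples; illustrative only.

def regions_disjoint(a, b):
    """Conservative disjointness test: distinct bases may alias (the role
    points-to analysis plays in practice), so only same-base,
    non-overlapping ranges are proven disjoint."""
    (base_a, off_a, len_a), (base_b, off_b, len_b) = a, b
    if base_a != base_b:
        return False  # without points-to facts, assume possible aliasing
    return off_a + len_a <= off_b or off_b + len_b <= off_a

def needs_runtime_check(task_a, task_b):
    """Keep the dynamic check only if some write footprint of one task
    may overlap any footprint of the other."""
    for eff_a, fp_a in task_a:
        for eff_b, fp_b in task_b:
            if (eff_a == "write" or eff_b == "write") and not regions_disjoint(fp_a, fp_b):
                return True
    return False

# two tasks writing disjoint halves of the same array: check removable
t1 = [("write", ("A", 0, 512))]
t2 = [("write", ("A", 512, 512))]
print(needs_runtime_check(t1, t2))  # False -> dependence check elided
```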