918 results for control-flow checking
Abstract:
Norms regulate the behaviour of their subjects and define what is legal and what is illegal. Norms typically describe the conditions under which they are applicable and the normative effects that result from their application. On the other hand, process models specify how a business operation or service is to be carried out to achieve a desired outcome. Norms can have a significant impact on how business operations are conducted, and they can apply to the whole or part of a business process. For example, they may impose conditions on different aspects of a process (e.g., that tasks be performed in a specific sequence (control flow), at a specific time or within a certain time frame (temporal aspect), or by specific people (resources)). We propose a framework that provides the formal semantics of the normative requirements for determining whether a business process complies with a normative document (where a normative document can be understood in a very broad sense, ranging from internal policies to best-practice policies to statutory acts). We also present a classification of normative requirements based on the notion of different types of obligations and the effects of violating these obligations.
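A minimal sketch in Python (types and names are illustrative, not the paper's formalism) of how obligation types and their violation might be checked against a finite trace of process states:

    from dataclasses import dataclass
    from enum import Enum, auto

    class ObligationType(Enum):
        ACHIEVEMENT = auto()   # must hold at least once during the trace
        MAINTENANCE = auto()   # must hold at every state of the trace

    @dataclass
    class Obligation:
        condition: callable    # predicate over a process state
        kind: ObligationType
        compensable: bool      # whether a violation can be compensated

    def violated(ob, trace):
        """Return True if the obligation is violated on the given trace."""
        if ob.kind is ObligationType.ACHIEVEMENT:
            return not any(ob.condition(state) for state in trace)
        return not all(ob.condition(state) for state in trace)

    # Hypothetical example: "an invoice must be sent at some point".
    ob = Obligation(condition=lambda s: s.get("invoice_sent", False),
                    kind=ObligationType.ACHIEVEMENT, compensable=True)
    print(violated(ob, [{"invoice_sent": False}, {"invoice_sent": True}]))  # False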
Abstract:
It is widely acknowledged that effective asset management requires an interdisciplinary approach, in which synergies should exist between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is also an important, yet complex, business practice. Business process modelling is proposed as an approach to manage the complexity of asset management through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management is, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control flow logic. As both events/states and control flow logic are somewhat dependent on the functions themselves, it is a logical step to first identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions so as to identify both commonalities and variation points in asset management processes. The method combines manual text mining with a taxonomy-based approach. An example is presented.
Abstract:
Recently, a new approach for structuring acyclic process models has been introduced. The algorithm is based on a transformation between the Refined Process Structure Tree (RPST) of a control flow graph and the Modular Decomposition Tree (MDT) of ordering relations. In this paper, an extension of the algorithm is presented that allows process models to be partially structured when they cannot be structured completely. We distinguish four different types of unstructuredness of process models and show that only two are possible in practice. For one of these two types of unstructuredness, an algorithm is proposed that returns the maximally structured representation of a process model.
Abstract:
Process models define allowed process execution scenarios. The models are usually depicted as directed graphs, with gateway nodes regulating the control flow routing logic and with edges specifying the execution order constraints between tasks. While arbitrarily structured control flow patterns in process models complicate model analysis, they also permit creativity and full expressiveness when capturing non-trivial process scenarios. This paper gives a classification of arbitrarily structured process models based on the hierarchical process model decomposition technique. We identify a structural class of models consisting of block-structured patterns which, when combined, define complex execution scenarios spanning the individual patterns. We show that complex behavior can be localized by examining structural relations of loops in hidden unstructured regions of control flow. The correctness of the behavior of process models within these regions can be validated in linear time. These observations allow us to suggest techniques for transforming hidden unstructured regions into block-structured ones.
Abstract:
Process models are usually depicted as directed graphs, with nodes representing activities and directed edges representing control flow. While structured processes with pre-defined control flow have been studied in detail, flexible processes including ad-hoc activities need further investigation. This paper presents the flexible process graph, a novel approach to model processes in the context of a dynamic environment and adaptive process participant behavior. The approach allows execution constraints to be defined that are more restrictive than traditional ad-hoc processes and less restrictive than traditional control flow, thereby balancing structured control flow with unstructured ad-hoc activities. The flexible process graph focuses on what can be done to perform a process. Process participants' routing decisions are based on the current process state. As a formal grounding, the approach uses hypergraphs, where each edge can associate any number of nodes. Hypergraphs are used to define the execution semantics of processes formally. We provide a process scenario to motivate and illustrate the approach.
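A minimal Python sketch (names are illustrative, not the paper's notation) of a hypergraph in which each edge can associate any number of activity nodes, the structure used above to ground the flexible process graph:

    class Hypergraph:
        def __init__(self):
            self.nodes = set()
            self.edges = []  # each hyperedge is a frozenset of nodes

        def add_edge(self, *nodes):
            edge = frozenset(nodes)
            self.nodes |= edge
            self.edges.append(edge)

        def incident_edges(self, node):
            """Edges a node participates in; a participant could use these
            to decide which activities are enabled from the current state."""
            return [e for e in self.edges if node in e]

    g = Hypergraph()
    g.add_edge("receive_order", "check_stock", "notify_customer")
    g.add_edge("check_stock", "ship_goods")
    print(g.incident_edges("check_stock"))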
Abstract:
Businesses document their operational processes as process models. The common practice is to represent process models as directed graphs. The nodes of a process graph represent activities, and directed edges constitute activity ordering constraints. The flexible process graph modeling approach proposes to generalize the process graph structure to a hypergraph. The resulting process structure aims at formalizing ad-hoc process control flow. In this paper we discuss aspects relevant to the concurrent, collaborative execution of process activities organized as a flexible process graph. We provide a real-world flexible process scenario to illustrate the approach.
Abstract:
Purpose – Context-awareness has emerged as an important principle in the design of flexible business processes. The goal of the research is to develop an approach to extend context-aware business process modeling toward location-awareness. The purpose of this paper is to identify and conceptualize location-dependencies in process modeling. Design/methodology/approach – This paper uses a pattern-based approach to identify location-dependency in process models. The authors design specifications for these patterns, present illustrative examples, and evaluate the identified patterns through a literature review of published process cases. Findings – This paper introduces location-awareness as a new perspective to extend context-awareness in BPM research, by introducing relevant location concepts such as location-awareness and location-dependencies. The authors identify five basic location-dependent control-flow patterns that can be captured in process models, and they identify location-dependencies in several existing case studies of business processes. Research limitations/implications – The authors focus exclusively on the control-flow perspective of process models. Further work needs to extend the research to address location-dependencies in process data or resources. Further empirical work is needed to explore determinants and consequences of the modeling of location-dependencies. Originality/value – As existing literature mostly focuses on the broad context of business processes, location in process modeling is still treated as a “second-class citizen” in theory and in practice. This paper discusses the vital role of location-dependencies within business processes. The proposed five basic location-dependent control-flow patterns are novel and useful for explaining location-dependency in business process models. They provide a conceptual basis for further exploration of location-awareness in the management of business processes.
Abstract:
Business Process Management describes a holistic management approach for the systematic design, modeling, execution, validation, monitoring and improvement of organizational business processes. Traditionally, most attention within this community has been given to control-flow aspects, i.e., the ordering and sequencing of business activities, often in isolation from the context in which these activities occur. In this paper, we propose an approach that allows executable process models to be integrated with Geographic Information Systems. This approach enables process models to take geospatial and other geographic aspects into account in an explicit manner, both during the modeling phase and the execution phase. We contribute a structured modeling methodology, based on the well-known Business Process Model and Notation standard, which is formalized by means of a mapping to executable Colored Petri nets. We illustrate the feasibility of our approach by means of a sustainability-focused case example of a process with important ecological concerns.
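A minimal Python sketch (illustrative only; not the paper's formal BPMN-to-CPN mapping) of a Petri-net-style transition whose guard evaluates a geospatial attribute carried by the token:

    class Transition:
        def __init__(self, inputs, outputs, guard=lambda token: True):
            self.inputs, self.outputs, self.guard = inputs, outputs, guard

        def enabled(self, marking):
            return all(marking.get(p) for p in self.inputs) and \
                   all(self.guard(t) for p in self.inputs for t in marking[p])

        def fire(self, marking):
            assert self.enabled(marking)
            tokens = [marking[p].pop() for p in self.inputs]
            for p in self.outputs:
                marking.setdefault(p, []).extend(tokens)

    # Hypothetical geospatial guard: the visited site must lie inside a zone.
    inside_zone = lambda tok: tok["lat"] < -27.0
    t = Transition(["site_visited"], ["report_filed"], guard=inside_zone)
    marking = {"site_visited": [{"lat": -27.5, "lon": 153.0}]}
    t.fire(marking)
    print(marking)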
Abstract:
As a result of the more distributed nature of organisations and the inherently increasing complexity of their business processes, a significant effort is required for the specification and verification of those processes. The composition of activities into a business process that accomplishes a specific organisational goal has primarily been a manual task. Automated planning is a branch of artificial intelligence (AI) in which activities are selected and organised by anticipating their expected outcomes with the aim of achieving some goal. As such, automated planning would seem to be a natural fit for the BPM domain to automate the specification of control flow. A number of attempts have been made to apply automated planning to the business process and service composition domain in different stages of the BPM lifecycle. However, a unified adoption of these techniques throughout the BPM lifecycle is missing. We therefore propose a new intention-centric BPM paradigm, which aims at minimising the specification effort by exploiting automated planning techniques to achieve a pre-stated goal. This paper provides a vision of the future possibilities of enhancing BPM using automated planning. A research agenda is presented, which provides an overview of the opportunities and challenges for the exploitation of automated planning in BPM.
Abstract:
In this paper we illustrate a set of features of the Apromore process model repository for analyzing business process variants. Two types of analysis are provided: one is static and based on differences in the process control flow, the other is dynamic and based on differences in process behavior between the variants. These features combine techniques for the management of large process model collections with those for mining process knowledge from process execution logs. The tool demonstration will be useful for researchers and practitioners working on large process model collections and process execution logs, and specifically for those with an interest in understanding, managing and consolidating business process variants both within and across organizational boundaries.
Abstract:
Better operational control of water networks can help reduce leakage, maintain pressure, and control flow. Proportional-integral-derivative (PID) controllers, with proper fine-tuning, can help water utility operators reach targets faster without creating undue transients. The authors compared three tuning methods in different test situations involving flow and level control to different reservoirs. Although target values were reached with all three tuning methods, the methods' performances varied significantly. The lowest performer among the three was the method most widely used in the industry: standard tuning by the Ziegler-Nichols method. Offline tuning by genetic algorithms achieved better results, and a fuzzy logic-based online tuning approach, the FZPID controller, achieved the best control. The FZPID controller had fewer overshoots and took significantly less time to tune the gains for each problem. This new tuning approach for PID controllers can be applied to a variety of problems and can increase the performance of water networks of any size and structure.
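For reference, a minimal Python sketch of the standard discrete PID control law; the gains and the flow example are placeholders, not values from the study:

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            # u = Kp*e + Ki*integral(e dt) + Kd*de/dt
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical gains; Ziegler-Nichols, genetic algorithms or a fuzzy
    # supervisor (as in the FZPID approach) would tune kp, ki, kd instead.
    controller = PID(kp=1.2, ki=0.3, kd=0.05, dt=1.0)
    print(controller.update(setpoint=2.0, measurement=1.6))  # valve correction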
Abstract:
Donor-doped n-BaTiO3 polycrystalline ceramics show a strong negative temperature coefficient of resistivity below the orthorhombic-rhombohedral phase transition point, from 10^2-10^3 Ω·cm at 190 K to 10^10-10^13 Ω·cm at ≲50 K, with a thermal coefficient of resistance α = 20-23% K^-1. Stable thermal sensors for low-temperature applications are realized therefrom. The negative temperature coefficient of resistivity region can be modified by substituting isovalent ions in the lattice. Highly nonlinear current-voltage (I-V) curves are observed at low temperatures, with a voltage maximum followed by negative differential resistance. The I-V curves are sensitive to dissipation, so cryogenic sensors can be fabricated for liquid level control, flow rate monitoring, radiation detection or in-rush voltage limitation.
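For reference, the temperature coefficient of resistance quoted above is conventionally defined as the relative change of resistance per kelvin (a standard definition, not taken from the abstract):

    \alpha = \frac{1}{R}\,\frac{dR}{dT},
    \qquad |\alpha| \approx 0.2\ \mathrm{K}^{-1}\ \text{corresponds to roughly a 20\% change in resistance per kelvin.}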
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, that is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and an experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
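A toy Python heuristic (illustrative only; not MEGHA's constrained graph clustering formulation) of the placement decision such a compiler makes per kernel, trading parallel speedup against the data transfer a GPU mapping would require:

    # Each kernel carries a rough cost model: time on CPU, time on GPU,
    # and the bytes that must move between host and device if it runs on GPU.
    def place_kernels(kernels, bandwidth_bytes_per_s=8e9):
        placement = {}
        for name, (cpu_time, gpu_time, transfer_bytes) in kernels.items():
            transfer_time = transfer_bytes / bandwidth_bytes_per_s
            placement[name] = "GPU" if gpu_time + transfer_time < cpu_time else "CPU"
        return placement

    kernels = {  # hypothetical profile numbers
        "elementwise_update": (0.80, 0.05, 8e6),   # data parallel: worth offloading
        "scalar_control_loop": (0.02, 0.10, 1e3),  # control-flow dominated: keep on CPU
    }
    print(place_kernels(kernels))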
Abstract:
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
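A minimal Python sketch (a simplified stand-in for the paper's slicing-based technique) of the underlying idea: statements that execute under the same controlling condition in a batch program are grouped as candidate service boundaries:

    # Toy "batch program": each statement records the condition (e.g. a
    # transaction-type test) under which it executes, if any.
    program = [
        ("read_record",        None),
        ("update_balance",     "txn_type == 'PAYMENT'"),
        ("apply_interest",     "txn_type == 'PAYMENT'"),
        ("flag_for_review",    "txn_type == 'REVERSAL'"),
        ("restore_balance",    "txn_type == 'REVERSAL'"),
        ("write_audit_record", None),
    ]

    def candidate_services(program):
        """Group statements sharing a controlling condition; each group is a
        candidate service, unconditional statements are common scaffolding."""
        services = {}
        for stmt, cond in program:
            services.setdefault(cond or "common", []).append(stmt)
        return services

    for cond, stmts in candidate_services(program).items():
        print(cond, "->", stmts)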