970 results for Automated process
Abstract:
The development of self-adaptive software (SaS) has specific characteristics compared to traditional software, since it allows changes to be incorporated at runtime. Automated processes have been used as a feasible solution to conduct software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reference model based on reflection for the development of SaS. Thus, the main contribution of this paper is to present a reference model based on reflection for the development of SaS that needs to adapt at runtime. To show the applicability of this model, a case study was conducted, and good prospects for efficiently contributing to the area of SaS were obtained.
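Purely as an illustration of the reflection mechanism the abstract refers to (all names below are hypothetical and not taken from the paper), the following Python sketch shows a meta-level object that inspects a base-level component and swaps its behaviour at runtime, which is the kind of change a reflection-based SaS reference model has to accommodate.

# Minimal sketch of reflection-based runtime adaptation (illustrative only).
class Sorter:
    """Base-level component whose strategy can be replaced while running."""
    def __init__(self):
        self.strategy = sorted              # default behaviour

    def run(self, data):
        return self.strategy(data)

class MetaLevel:
    """Meta level: observes the base level and adapts it via reflection."""
    def __init__(self, component):
        self.component = component

    def adapt(self, condition, new_strategy):
        # Introspect the current strategy and replace it if the rule fires.
        current = getattr(self.component, "strategy")
        if condition(current):
            setattr(self.component, "strategy", new_strategy)

sorter = Sorter()
meta = MetaLevel(sorter)
# Adaptation rule (stand-in for a context-triggered adaptation): if the
# component still uses the default strategy, switch to descending order.
meta.adapt(lambda s: s is sorted, lambda data: sorted(data, reverse=True))
print(sorter.run([3, 1, 2]))                # [3, 2, 1] after adaptation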
Abstract:
Automated process discovery techniques aim at extracting models from information system logs in order to shed light on the business processes supported by these systems. Existing techniques in this space are effective when applied to relatively small or regular logs, but otherwise generate large and spaghetti-like models. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. The result is a collection of process models -- each one representing a variant of the business process -- as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically by means of subprocess extraction. The proposed technique allows users to set a desired bound for the complexity of the produced models. Experiments on real-life logs show that the technique produces collections of models that are up to 64% smaller than those extracted under the same complexity bounds by applying existing trace clustering techniques.
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs with high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models -- each one representing a variant of the business process -- as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
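The two abstracts above describe the same two-way divide-and-conquer idea at a high level. As a rough sketch only (not the authors' algorithm), the Python fragment below keeps splitting a log by trace variants until every discovered model falls below a user-defined complexity bound; the hierarchical subprocess-extraction step and the fitness threshold are omitted, and the stand-ins for "model" (a set of directly-follows edges), "complexity" (edge count) and "clustering" (grouping by first activity) are deliberately simplistic.

# Illustrative divide-and-conquer discovery under a complexity bound.
from collections import defaultdict

def discover(traces):
    """Toy flat discovery: the directly-follows edges observed in the traces."""
    edges = set()
    for trace in traces:
        edges.update(zip(trace, trace[1:]))
    return edges

def complexity(model):
    return len(model)

def cluster_by_variant(traces):
    """Toy trace clustering: group traces that start with the same activity."""
    clusters = defaultdict(list)
    for trace in traces:
        clusters[trace[0]].append(trace)
    return list(clusters.values())

def divide_and_conquer(traces, bound):
    model = discover(traces)
    if complexity(model) <= bound or len(traces) <= 1:
        return [model]
    models = []
    for cluster in cluster_by_variant(traces):
        if len(cluster) == len(traces):     # clustering made no progress
            return [model]
        models.extend(divide_and_conquer(cluster, bound))
    return models

log = [["a", "b", "c"], ["a", "c", "b"], ["d", "b", "c"], ["d", "c"]]
for m in divide_and_conquer(log, bound=3):
    print(sorted(m))                        # several small models instead of one large one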
Abstract:
Existing techniques for automated discovery of process models from event logs generally produce flat process models. Thus, they fail to exploit the notion of subprocess as well as the error handling and repetition constructs provided by contemporary process modeling notations, such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of hierarchical BPMN models containing interrupting and non-interrupting boundary events and activity markers. The technique employs functional and inclusion dependency discovery techniques in order to elicit a process-subprocess hierarchy from the event log. Given this hierarchy and the projected logs associated with each node in the hierarchy, parent process and subprocess models are then discovered using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. By employing approximate dependency discovery techniques, it is possible to filter out noise in the event log arising, for example, from data entry errors or missing events. A validation with one synthetic and two real-life logs shows that process models derived by the proposed technique are more accurate and less complex than those derived with flat process discovery techniques. Meanwhile, a validation on a family of synthetically generated logs shows that the technique is resilient to varying levels of noise.
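As a small, self-contained illustration of the inclusion-dependency idea mentioned above (the attribute values are invented), an approximate inclusion dependency between two event-log columns can suggest that one entity type belongs to another, and hence that its events form a subprocess; a tolerant threshold keeps the check resilient to noise such as data entry errors.

# Illustration: approximate inclusion dependency as a hint for a
# process-subprocess hierarchy (hypothetical attribute values).
def inclusion_strength(child_values, parent_values):
    """Fraction of distinct child values that also appear among the parent
    values; a value close to 1.0 indicates an approximate inclusion
    dependency despite noisy or missing events."""
    child, parent = set(child_values), set(parent_values)
    if not child:
        return 0.0
    return len(child & parent) / len(child)

order_ids   = ["O1", "O2", "O3", "O4"]        # case ids of the parent process
item_orders = ["O1", "O1", "O2", "O3", "OX"]  # "OX" is a data entry error

strength = inclusion_strength(item_orders, order_ids)
print(f"inclusion strength = {strength:.2f}")  # 0.75
if strength >= 0.75:                           # tolerant, noise-aware threshold
    print("line-item events look like a subprocess of order cases")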
Abstract:
This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
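To make the separation of concerns concrete (with toy stand-ins, not the paper's transformation), the sketch below first discovers a directly-follows graph from the traces (accuracy first) and only then tries to re-express that graph as a block-structured process tree, here handled only for the trivial case of a pure sequence.

# Two-phase "discover, then structure" illustration (toy example).
def discover_graph(traces):
    edges = set()
    for trace in traces:
        edges.update(zip(trace, trace[1:]))
    return edges

def structure(edges):
    """Return a SEQ process tree if the graph is a single chain, else None."""
    sources = {a for a, _ in edges} - {b for _, b in edges}
    if len(sources) != 1:
        return None                      # not a single entry point
    succ = dict(edges)
    if len(succ) != len(edges):
        return None                      # some activity has two successors
    node, chain = next(iter(sources)), []
    while node is not None and node not in chain:   # guard against cycles
        chain.append(node)
        node = succ.get(node)
    if len(chain) != len(edges) + 1:
        return None                      # chain does not cover the whole graph
    return ("SEQ", *chain)

log = [["a", "b", "c"], ["a", "b", "c"]]
print(structure(discover_graph(log)))    # ('SEQ', 'a', 'b', 'c')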
Abstract:
Process-aware information systems (PAISs) can be configured using a reference process model, which is typically obtained via expert interviews. Over time, however, contextual factors and system requirements may cause the operational process to start deviating from this reference model. While a reference model should ideally be updated to remain aligned with such changes, this is a costly and often neglected activity. We present a new process mining technique that automatically improves the reference model on the basis of the observed behavior as recorded in the event logs of a PAIS. We discuss how to balance the four basic quality dimensions for process mining (fitness, precision, simplicity and generalization) and a new dimension, namely the structural similarity between the reference model and the discovered model. We demonstrate the applicability of this technique using a real-life scenario from a Dutch municipality.
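Illustrating how such dimensions might be traded off (the weights and scores below are invented, not the paper's method), each candidate repaired model can be scored on the four classic quality dimensions plus structural similarity to the original reference model, and the best-scoring candidate kept.

# Hypothetical weighted balance of quality dimensions for model repair.
def score(candidate, weights):
    return sum(weights[dim] * candidate[dim] for dim in weights)

weights = {"fitness": 0.4, "precision": 0.2, "simplicity": 0.1,
           "generalization": 0.1, "similarity": 0.2}

candidates = [
    {"name": "keep reference model", "fitness": 0.70, "precision": 0.90,
     "simplicity": 0.95, "generalization": 0.80, "similarity": 1.00},
    {"name": "rediscover from scratch", "fitness": 0.98, "precision": 0.85,
     "simplicity": 0.60, "generalization": 0.75, "similarity": 0.40},
    {"name": "local repair", "fitness": 0.93, "precision": 0.88,
     "simplicity": 0.85, "generalization": 0.78, "similarity": 0.85},
]

best = max(candidates, key=lambda c: score(c, weights))
print(best["name"])   # "local repair": accurate yet still similar to the reference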
Abstract:
This thesis presents novel techniques for addressing the problems of continuous change and inconsistencies in large process model collections. The developed techniques treat process models as a collection of fragments and facilitate version control, standardization and automated process model discovery using fragment-based concepts. Experimental results show that the presented techniques are beneficial in consolidating large process model collections, specifically when there is a high degree of redundancy.
Abstract:
Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
Abstract:
This research investigates the prevalence of sports-related terms among the Web sites of the world’s leading companies, the Fortune Global 500. An automated process copied about four gigabytes of textual data, around 70 million words, from their sites. The subsequent analysis revealed regional and industry differences in the distribution of sports-related terms, the popularity of tennis stars, and few references to sports stars, especially in Asia.
Abstract:
This research investigates how the world’s leading companies, the Fortune Global 500, use sports-related terms and phrases on their Web sites. An automated process mirrored leading transnational corporations’ Web presence and then searched their sites. Analysis of about four gigabytes of Web-based text revealed regional and industry differences in how the world’s largest corporations use sports terms on their sites.
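Both studies above rely on the same kind of automated pipeline: mirror the textual content of each company's site, then count occurrences of a term list and aggregate the counts. The sketch below illustrates that pipeline with invented terms and text (the actual term list and corpus are not reproduced here).

# Illustrative term counting over mirrored Web-site text (invented data).
import re
from collections import Counter

SPORT_TERMS = ["football", "tennis", "golf", "sponsorship"]

def count_terms(text):
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {term: words[term] for term in SPORT_TERMS}

sites = {
    ("ACME Corp", "Europe"): "Our football sponsorship and tennis tennis events",
    ("Foo Inc", "Asia"): "Quarterly results and a golf outing",
}

by_region = Counter()
for (company, region), text in sites.items():
    by_region[region] += sum(count_terms(text).values())
print(by_region)     # Counter({'Europe': 4, 'Asia': 1})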
Abstract:
Public key cryptography, and with it the ability to compute digital signatures, has made it possible for electronic commerce to flourish. It is thus unsurprising that the proposed Australian NECS will also utilise digital signatures in its system so as to provide a fully automated process from the creation of electronic land title instruments to the digital signing and electronic lodgment of these instruments. This necessitates an analysis of the fraud risks raised by the usage of digital signatures, because a compromise of the integrity of digital signatures will lead to a compromise of the Torrens system itself. This article will show that digital signatures may in fact offer greater security against fraud than handwritten signatures; but to achieve this, digital signatures require an infrastructure whereby each component is properly implemented and managed.
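As a minimal, generic illustration of the signing and verification step described above (using the third-party Python "cryptography" package, which is assumed to be available; the NECS design itself is not reproduced), a digitally signed instrument is rejected as soon as either the signature or the instrument has been altered.

# Sign and verify an electronic instrument (illustrative stand-in data).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

instrument = b"Transfer of Lot 12 from A to B"       # stand-in for a land title instrument

signing_key = ed25519.Ed25519PrivateKey.generate()   # held by the signer
signature = signing_key.sign(instrument)

verifying_key = signing_key.public_key()             # distributed via the PKI
try:
    verifying_key.verify(signature, instrument)
    print("signature valid: instrument accepted for lodgment")
except InvalidSignature:
    print("signature invalid: instrument rejected")

try:                                                  # any tampering is detected
    verifying_key.verify(signature, instrument + b" (altered)")
except InvalidSignature:
    print("tampered instrument detected")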
Abstract:
Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes for a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of various data formats stored on modern computer systems compound the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model, a computer system that is the subject of investigation can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems. The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
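Purely to illustrate the inconsistency-detection idea at the heart of the profiling approach (the data model below is invented, not the thesis's), two independent sources describing the same activity can be cross-checked, and disagreements flagged as possible attempts to disguise what actually happened.

# Toy cross-check between filesystem metadata and an application log.
from datetime import datetime

fs_mtimes = {                                   # source 1: last-modified times
    "report.doc": datetime(2024, 3, 1, 10, 0),
    "notes.txt": datetime(2024, 2, 1, 9, 0),
}
app_log = {                                     # source 2: last save recorded by the application
    "report.doc": datetime(2024, 3, 1, 10, 0),
    "notes.txt": datetime(2024, 3, 5, 14, 0),   # saved *after* its recorded mtime
}

def inconsistencies(fs, log):
    """Files whose application-recorded save time postdates their filesystem
    modification time, suggesting the timestamp was tampered with."""
    return [name for name, saved in log.items()
            if name in fs and saved > fs[name]]

print(inconsistencies(fs_mtimes, app_log))      # ['notes.txt']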
Abstract:
A substantial body of literature exists identifying factors contributing to under-performing Enterprise Resource Planning systems (ERPs), including poor communication, lack of executive support and user dissatisfaction (Calisir et al., 2009). Of particular interest is Momoh et al.’s (2010) recent review identifying poor data quality (DQ) as one of nine critical factors associated with ERP failure. DQ is central to ERP operating processes, ERP-facilitated decision-making and inter-organizational cooperation (Batini et al., 2009). Crucial in ERP contexts is that the integrated, automated, process-driven nature of ERP data flows can amplify DQ issues, compounding minor errors as they flow through the system (Haug et al., 2009; Xu et al., 2002). However, despite the growing appreciation of the importance of DQ in determining ERP success, there is little research addressing the relationship between stakeholders’ requirements and perceptions of ERP DQ, perceived data utility and the impact of users’ treatment of data on ERP outcomes.