898 results for Process Mining, Distributed Computing, Grid Computing, Process Discovery, Conformance Checking, Business Process Management
Abstract:
This thesis explores organizations' attitudes toward the business processes that sustain them: from the near-absence of structure, through the functional organization, to the advent of Business Process Reengineering and of Business Process Management, the latter born to overcome the limits and problems of the preceding model. Within the BPM life cycle sits the methodology of process mining, which enables a level of process analysis based on event data logs, i.e. the records of events generated by all the activities supported by an enterprise information system. Process mining can be seen as a natural bridge between process-based management disciplines (which are not data-driven) and the new developments in business intelligence, which can manage and manipulate the enormous volume of data available to companies (but are not process-driven). The thesis describes the requirements and technologies that enable the discipline, as well as the three techniques it supports: process discovery, conformance checking, and process enhancement. Process mining was the main tool in a consulting project carried out by HSPI S.p.A. on behalf of a major Italian client, a provider of IT platforms and solutions. The project I took part in, described in this thesis, aims to support the organization in its plan for improving internal performance, and it made it possible to verify the applicability and the limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases, and direct channels. For its validity and completeness, this document was published on the website of the IEEE Task Force on Process Mining.
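To make the event-log-based techniques concrete, here is a minimal, self-contained sketch (not the thesis's tooling) of how an event log, grouped into traces, can drive a directly-follows discovery step and a naive conformance check. The log contents and the reference model are invented for illustration.

```python
from collections import defaultdict

# A toy event log: each event is (case_id, activity), as would be
# extracted from an information system's event data log.
events = [
    ("c1", "register"), ("c1", "check"), ("c1", "approve"),
    ("c2", "register"), ("c2", "check"), ("c2", "reject"),
    ("c3", "register"), ("c3", "approve"),  # skips "check"
]

# Group events into traces (one activity sequence per case).
traces = defaultdict(list)
for case_id, activity in events:
    traces[case_id].append(activity)

# Process discovery (simplified): mine the directly-follows relation,
# the building block of discovery algorithms such as the Alpha miner.
dfg = defaultdict(int)
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

# Conformance checking (simplified): replay each trace against a
# reference model given here as the set of allowed direct successions.
model = {("register", "check"), ("check", "approve"), ("check", "reject")}
for case_id, trace in traces.items():
    deviations = [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in model]
    print(case_id, "conforms" if not deviations else f"deviates: {deviations}")
```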
Abstract:
Discovering proper search intents is a vital process for returning desired results, and it has been a hot research topic in information retrieval in recent years. Existing methods mainly rely on context-based mining, query expansion, and user profiling techniques, which still suffer from ambiguity in search queries. In this paper, we introduce a novel ontology-based approach that uses a world knowledge base to construct personalized ontologies for identifying adequate concept levels that match user search intents. An iterative mining algorithm is designed to evaluate potential intents level by level until the best result is reached. The proposed approach is evaluated on the large-volume RCV1 data set, and experimental results indicate a distinct improvement in top precision compared with baseline models.
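A minimal sketch of the level-by-level idea, under strong assumptions: the toy ontology, the scoring rule, and the stopping criterion below are invented stand-ins for the paper's knowledge base and algorithm.

```python
# Descend a toy ontology one level at a time, keeping the best-scoring
# concept, and stop when deeper levels no longer improve the score.
ontology = {
    "science": ["computing", "physics"],
    "computing": ["information retrieval", "databases"],
    "information retrieval": ["query expansion", "user profiling"],
}

def score(concept, query_terms):
    # Toy relevance: fraction of query terms appearing in the concept label.
    return sum(t in concept for t in query_terms) / len(query_terms)

def mine_intent(root, query_terms):
    best, best_score = root, score(root, query_terms)
    frontier = [root]
    while frontier:  # one ontology level per iteration
        children = [c for p in frontier for c in ontology.get(p, [])]
        if not children:
            break
        top = max(children, key=lambda c: score(c, query_terms))
        if score(top, query_terms) < best_score:
            break  # deeper concepts no longer improve: stop
        best, best_score = top, score(top, query_terms)
        frontier = [top]
    return best

print(mine_intent("science", ["information", "retrieval"]))
# -> "information retrieval"
```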
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general-purpose distributed computing systems. ELISA uses estimated state information, based on a periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting load transfer and status exchange to the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on random sharing and the Ni-Hwang algorithm. A sensitivity analysis studying the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison with other algorithms. This makes ELISA a viable and implementable load balancing algorithm for general-purpose distributed computing systems.
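A minimal simulation sketch of the estimated-state idea: exact loads are exchanged only periodically, and between exchanges each node extrapolates its buddies' loads. The ring topology, estimation rule, and thresholds below are simplified assumptions, not the paper's exact design.

```python
import random

N = 8                    # nodes on a ring; buddies are the two neighbours
EXCHANGE_PERIOD = 5      # exact state is exchanged every 5 ticks
loads = [random.randint(0, 20) for _ in range(N)]
arrival_rates = [random.uniform(0.5, 2.0) for _ in range(N)]
snapshots = list(loads)  # last exactly-known load of every node

for tick in range(1, 51):
    for i in range(N):   # tasks arrive; one unit of work is served per tick
        loads[i] = max(0, loads[i] + (random.random() < arrival_rates[i] / 2) - 1)
    if tick % EXCHANGE_PERIOD == 0:
        snapshots = list(loads)        # periodic exact status exchange
    for i in range(N):
        buddies = [(i - 1) % N, (i + 1) % N]
        elapsed = tick % EXCHANGE_PERIOD
        # Estimated state: last snapshot plus expected net drift
        # (arrivals minus service) since the last exchange.
        est = {b: snapshots[b] + (arrival_rates[b] / 2 - 1) * elapsed
               for b in buddies}
        target = min(est, key=est.get)
        if loads[i] > est[target] + 2:  # transfer only within the buddy set
            loads[i] -= 1
            loads[target] += 1

print("final loads:", loads)
```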
Abstract:
An onboard spacecraft computing system is a case of a functionally distributed system that requires continuous interaction among the nodes to control the operations at different nodes. A simple and reliable protocol is desired for such an application. This paper discusses a formal approach to specifying the computing system with respect to some important issues encountered in the design and development of a protocol for the onboard distributed system. The issues considered in this paper are the concurrency, exclusiveness, and sequencing relationships among the various processes at different nodes. A 6-tuple model is developed for the precise specification of the system. The model also enables us to check the consistency of the specification and detect deadlocks caused by improper specification. An example is given to illustrate the use of the proposed methodology for a typical spacecraft configuration. Although the theory is motivated by a specific application, it may be applied to other distributed computing systems such as those encountered in process control industries, power plant control, and similar environments.
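A sketch of what checking such a relational specification could look like. The paper's actual 6-tuple is not reproduced here; the components below (processes, nodes, placement, and the three relations named in the abstract) are an illustrative guess, and the two checks mirror the abstract's consistency and deadlock concerns.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    processes: set
    nodes: set
    placement: dict    # process -> node
    concurrent: set    # unordered pairs that may run in parallel
    exclusive: set     # unordered pairs that must not overlap
    precedes: set      # ordered pairs (a, b): a must finish before b

def check(spec: Spec) -> list:
    issues = []
    # Consistency: a pair cannot be both concurrent and mutually exclusive.
    for pair in spec.concurrent & spec.exclusive:
        issues.append(f"inconsistent: {set(pair)} both concurrent and exclusive")
    # Deadlock: a cycle in the sequencing relation means no valid order exists.
    graph = {p: {b for a, b in spec.precedes if a == p} for p in spec.processes}
    visited, on_stack = set(), set()
    def dfs(p):
        visited.add(p); on_stack.add(p)
        for q in graph[p]:
            if q in on_stack or (q not in visited and dfs(q)):
                return True
        on_stack.discard(p)
        return False
    if any(dfs(p) for p in spec.processes if p not in visited):
        issues.append("deadlock: sequencing relation contains a cycle")
    return issues

spec = Spec({"A", "B", "C"}, {"n1", "n2"}, {"A": "n1", "B": "n2", "C": "n1"},
            {frozenset({"A", "B"})}, {frozenset({"A", "B"})},
            {("A", "C"), ("C", "A")})
print(check(spec))  # reports both an inconsistency and a deadlock
```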
Abstract:
Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high, so parallel algorithms on high-performance machines become one way out. In this study we partition the matching search, which occupies the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
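A sketch of how the matching search partitions into independent tasks. Real fractal coders compare range blocks against transformed domain blocks; here both are simplified to integer vectors, and a local multiprocessing pool stands in for the DCOM/.NET Remoting workers of the study.

```python
from multiprocessing import Pool

# Invented toy data: range blocks to encode, domain blocks to search.
RANGE_BLOCKS = [[3, 5, 2], [8, 1, 4], [6, 6, 6], [0, 2, 9]]
DOMAIN_BLOCKS = [[3, 4, 2], [7, 2, 4], [1, 1, 1], [0, 3, 8]]

def best_match(range_block):
    # Each task: exhaustive search for the closest domain block.
    def dist(d):
        return sum((a - b) ** 2 for a, b in zip(range_block, d))
    return min(range(len(DOMAIN_BLOCKS)), key=lambda i: dist(DOMAIN_BLOCKS[i]))

if __name__ == "__main__":
    # The matching search dominates compression time, so distributing
    # one task per range block is what yields the speedup.
    with Pool(processes=4) as pool:
        matches = pool.map(best_match, RANGE_BLOCKS)
    print(list(enumerate(matches)))  # (range block index, best domain block)
```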
Abstract:
Field communication systems (fieldbuses) are widely used as the communication support for distributed computer-controlled systems (DCCS) in all sorts of process control and manufacturing applications. There are several advantages in using fieldbuses as a replacement for the traditional point-to-point links between sensors/actuators and computer-based control systems, the most relevant being the decentralisation and distribution of processing power over the field. A widely used fieldbus is WorldFIP, which is normalised as European standard EN 50170. When using WorldFIP to support DCCS, an important issue is how to guarantee the timing requirements of the real-time traffic. WorldFIP has very interesting mechanisms to schedule data transfers, since it explicitly distinguishes periodic and aperiodic traffic. In this paper, we describe how WorldFIP handles these two types of traffic and, more importantly, provide a comprehensive analysis of how to guarantee the timing requirements of the real-time traffic.
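An illustrative sketch of the periodic side of such scheduling: a table of elementary cycles is built from the variables' scan periods, and the slack left in each cycle is what remains for aperiodic traffic. The variable names, periods, and transfer times are invented, and the paper's schedulability analysis is far more detailed.

```python
from math import gcd
from functools import reduce

# (name, scan period in ms, bus transfer time in ms) -- invented values.
variables = [("temp", 10, 1.0), ("press", 20, 1.5), ("valve", 40, 2.0)]

microcycle = reduce(gcd, [p for _, p, _ in variables])         # 10 ms
macrocycle = reduce(lambda a, b: a * b // gcd(a, b),
                    [p for _, p, _ in variables])              # 40 ms

for start in range(0, macrocycle, microcycle):
    slot = [name for name, period, _ in variables if start % period == 0]
    busy = sum(t for name, _, t in variables if name in slot)
    # If the periodic load overflows the cycle, the set is unschedulable.
    assert busy <= microcycle, "periodic load exceeds the elementary cycle"
    print(f"t={start:2d}ms scan {slot} "
          f"(slack for aperiodic traffic: {microcycle - busy}ms)")
```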
Abstract:
Distributed computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data analytic workloads have become a common case among distributed computing applications, making Data Science an entire field of Computer Science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and some constraints related to their work (performance, QoS, budget, etc.). However, without domain expertise it is actually extremely difficult to perform data science. One needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments, ruled by schedulers that expect them to specify their resource needs precisely. Owing to the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that keep users from making the right configuration choices and determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.
In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and turn user requests into a tailored execution context. In this work, we study different techniques that have been used to make steps toward such system awareness, and we explore a new way to do so by applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
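A minimal sketch of configuration recommendation from monitoring data: past runs are described by workload features and the executor settings that worked well for them, and a new workload inherits the configuration of its nearest neighbour. The features, values, and distance rule are invented; the thesis's actual learning technique may differ. Only the two Spark settings printed at the end are real configuration keys.

```python
import math

past_runs = [
    # (input GB, shuffle GB, stages) -> (executor cores, executor memory GB)
    ((10, 1, 5),    (2, 4)),
    ((200, 80, 40), (5, 16)),
    ((50, 10, 12),  (4, 8)),
]

def recommend(workload):
    # Nearest neighbour in Euclidean distance over the workload features.
    def dist(features):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, workload)))
    _, config = min(past_runs, key=lambda run: dist(run[0]))
    return config

cores, memory_gb = recommend((60, 12, 15))
print(f"--conf spark.executor.cores={cores} "
      f"--conf spark.executor.memory={memory_gb}g")
```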
Abstract:
This research studied the distributed computing of all-to-all comparison problems with big data sets. The thesis formalised the problem and developed a high-performance, scalable computing framework with a programming model, data distribution strategies, and task scheduling policies to solve it. The study considered storage usage, data locality, and load balancing for performance improvement. The research outcomes can be applied in bioinformatics, biometrics, data mining, and other domains in which all-to-all comparisons are a typical computing pattern.
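A sketch of the all-to-all comparison pattern and a locality-aware assignment rule: every unordered pair of items must be compared once, and each task goes to the worker that already holds the most operands, breaking ties toward the less loaded worker. The items and placement are invented; the thesis's scheduling policies are more elaborate.

```python
from itertools import combinations

items = ["s1", "s2", "s3", "s4", "s5"]
workers = {"w1": {"s1", "s2", "s3"}, "w2": {"s3", "s4", "s5"}}

assignment = {w: [] for w in workers}
for pair in combinations(items, 2):        # n*(n-1)/2 comparison tasks
    # Prefer data locality, then balance load across workers.
    w = max(workers, key=lambda name: (len(workers[name] & set(pair)),
                                       -len(assignment[name])))
    assignment[w].append(pair)

for w, tasks in assignment.items():
    local = sum(len(workers[w] & set(t)) == 2 for t in tasks)
    print(f"{w}: {len(tasks)} tasks, {local} fully local")
```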
Abstract:
With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring is provided through a sensor-based architecture, where sensors are defined at design time and used at run time for monitoring purposes. Risk prevention is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
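A sketch of the three risk functions on a toy process instance. The YAWL-based implementation described in the paper is far richer; all the thresholds, conditions, and actions below are invented for illustration.

```python
def overdue_sensor(instance):
    # Monitoring: a condition defined at design time, evaluated at run time.
    return instance["elapsed_h"] > instance["deadline_h"]

def prevention_advice(instance):
    # Prevention: a decision-tree-style rule suggesting what to execute
    # and by whom.
    if instance["amount"] > 10_000:
        return "route approval to senior clerk"
    return "route approval to any clerk"

# Mitigation: remedial actions applied in order to restore the process.
MITIGATION_PLAN = ["reallocate work item", "skip optional check",
                   "roll back work item"]

instance = {"elapsed_h": 30, "deadline_h": 24, "amount": 15_000}
if overdue_sensor(instance):
    print("risk detected;", prevention_advice(instance))
    for action in MITIGATION_PLAN:
        print("mitigation step:", action)
```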
Abstract:
Business Process Management (BPM) is rapidly evolving as an established discipline. There are a number of efforts underway to formalize the various aspects of BPM practice; creating a formal Body of Knowledge (BoK) is one such effort. Bodies of knowledge are artifacts with a proven track record of accelerating the professionalization of various disciplines. For this to succeed in BPM, it is vital to involve the broader business process community and derive a BoK whose essential characteristics address the discipline's needs. We argue for the necessity of a comprehensive BoK for the BPM domain and present a core list of essential features to consider when developing one, based on preliminary empirical evidence. The paper identifies and critiques existing bodies of knowledge related to BPM and firmly calls for an effort to develop a more accurate and sustainable BoK for BPM. An approach for this effort is presented with preliminary outcomes.
Abstract:
Study Approach: The results presented in this report are part of a larger global study on the major issues in BPM. Only one part of the larger study is reported here, viz. interviews with BPM experts. Interviews with BPM tool vendors, together with focus group studies involving user organizations, were conducted in parallel and set the groundwork for identifying BPM issues on a global scale. Through this multi-method approach, we identify four distinct sets of outcomes. First, as is the focus of this report, we identify the BPM issues as perceived by BPM experts. Second, the research design allows us to gain insight into the opinions of organizations deploying BPM solutions. Third, an understanding of organizations' misconceptions of BPM technologies, as confronted by BPM tool vendors, is obtained. Last, we seek to understand BPM issues on a global scale, together with knowledge of matters of concern. This final outcome is aimed at producing an industry-driven research agenda that will inform practitioners and, in particular, the research community worldwide about issues and challenges that are prevalent or emerging in BPM and related areas...