132 results for Malpighi, Marcello, 1628-1694


Relevance:

10.00%

Publisher:

Abstract:

The concept of entrepreneurship has developed over the past decades and has a long history in the business sector. Miller et al. (2009) note that entrepreneurship is an important part of the economic scenery, providing opportunities and jobs for substantial numbers of people. Audretsch et al. (2002) clarify how the positive and statistically robust link between entrepreneurship and economic growth has been indisputably verified across a wide spectrum of units of observation, spanning the establishment, the enterprise, the industry, the region and the country. In the literature there has been an evolution and intense debate about the role of entrepreneurship as a field of research and about the creation of a conceptual framework for the entrepreneurship field as a whole. Shane and Venkataraman (2000) define the field of entrepreneurship as the scholarly examination of how, by whom, and with what effects opportunities to create future goods and services are discovered, evaluated, and exploited. For this reason, the field involves the study of sources of opportunities; the processes of discovery, evaluation, and exploitation of opportunities; and the set of individuals who discover, evaluate, and exploit them.


Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs with high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
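The first stage of such a divide-and-conquer approach – splitting the log into trace clusters, each to be mined into its own variant model – can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: it greedily groups traces by Jaccard distance over their activity sets, whereas the paper applies a controlled splitting strategy driven by complexity and fitness thresholds.

```python
def jaccard(a, b):
    """Jaccard distance between the activity sets of two traces."""
    sa, sb = set(a), set(b)
    return 1 - len(sa & sb) / len(sa | sb)

def cluster_traces(log, threshold=0.5):
    """Greedy trace clustering: assign each trace to the first cluster whose
    representative trace is within `threshold` Jaccard distance; otherwise
    open a new cluster. One model would then be discovered per cluster."""
    clusters = []  # list of (representative_trace, member_traces)
    for trace in log:
        for rep, members in clusters:
            if jaccard(trace, rep) <= threshold:
                members.append(trace)
                break
        else:
            clusters.append((trace, [trace]))
    return [members for _, members in clusters]

# toy log: two approval variants plus an unrelated archiving variant
log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["receive", "scan", "archive"],
]
clusters = cluster_traces(log, 0.6)
# the first two traces share most activities and end up in one cluster
```

A real implementation would replace the greedy pass with a proper clustering algorithm and iterate until each per-cluster model meets the user-defined complexity or fitness threshold.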


Existing techniques for automated discovery of process models from event logs largely focus on extracting flat process models. In other words, they fail to exploit the notion of subprocess, as well as structured error handling and repetition constructs provided by contemporary process modeling notations, such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of BPMN models containing subprocesses, interrupting and non-interrupting boundary events, and loop and multi-instance markers. The technique analyzes dependencies between data attributes associated with events, in order to identify subprocesses and to extract their associated logs. Parent process and subprocess models are then discovered separately using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. A validation with one synthetic and two real-life logs shows that process models derived using the proposed technique are more accurate and less complex than those derived with flat process model discovery techniques.


An approach is proposed and applied to five industries to prove how phenomenology can be valuable in rethinking consumer markets (Popp & Holt, 2013). The purpose of this essay is to highlight the potential implications that 'phenomenological thinking' brings for competitiveness and innovation (Sanders, 1982), hence helping managers be more innovative in their strategic marketing decisions (i.e. market creation, positioning, branding). Phenomenology is in fact a way of thinking − besides and before being a qualitative research procedure − and a very practical exercise that strategic managers can master and apply in the same successful way as other scientists have already done in their fields of study (e.g. sociology, psychology, psychiatry, and anthropology). Two fundamental considerations justify this research: a lack of distinctiveness among firms due to high levels of competition, and consumers no longer knowing what they want (i.e. no more needs). The authors will show how the classical mental framework generally used by practitioners to study markets appears on the one hand to be established and systematic in the life of a company, while on the other hand it is no longer adequate to meet the needs of innovation required to survive. To the classic principles of objectivity, generality, and psycho-sociology the authors counterpose the imaginary, eidetic-phenomenological reduction, and an existential perspective. From a theoretical point of view, this paper introduces a set of functioning rules applicable to achieve innovation in any market and useful to identify cultural practices inherent in the act of consumption.


This paper evaluates the suitability of sequence classification techniques for analyzing deviant business process executions based on event logs. Deviant process executions are those that deviate in a negative or positive way with respect to normative or desirable outcomes, such as non-compliant executions or executions that undershoot or exceed performance targets. We evaluate a range of feature types and classification methods in terms of their ability to accurately discriminate between normal and deviant executions both when deviances are infrequent (unbalanced) and when deviances are as frequent as normal executions (balanced). We also analyze the ability of the discovered rules to explain potential causes and contributing factors of observed deviances. The evaluation results show that feature types extracted using pattern mining techniques only slightly outperform those based on individual activity frequency. The results also suggest that more complex feature types ought to be explored to achieve higher levels of accuracy.
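The baseline feature type the paper evaluates – individual activity frequency – can be sketched together with a one-level decision stump standing in for the classifiers compared in the evaluation. This is an illustrative toy, with a made-up log; the paper benchmarks full sequence classification methods and pattern-mining-based features.

```python
from collections import Counter

def frequency_features(log, alphabet):
    """Encode each trace as a vector of individual activity frequencies —
    the baseline feature type evaluated in the paper."""
    return [[Counter(trace)[a] for a in alphabet] for trace in log]

def best_stump(features, labels):
    """Find the single (feature, threshold) test that best separates deviant
    from normal traces: a one-level decision stump as a minimal stand-in for
    the classification methods compared in the paper."""
    best = (0, 0, 0.0)  # (feature index, threshold, accuracy)
    for i in range(len(features[0])):
        for t in sorted({f[i] for f in features}):
            acc = sum((f[i] > t) == y for f, y in zip(features, labels)) / len(labels)
            if acc > best[2]:
                best = (i, t, acc)
    return best

# toy log: deviant executions contain repeated rework of activity "b"
log = [["a", "b", "c"], ["a", "c"], ["a", "b", "b", "c"], ["b", "b", "b"]]
labels = [False, False, True, True]   # True = deviant
feats = frequency_features(log, ["a", "b", "c"])
```

Here the learned rule ("count of b exceeds 1") also serves as an explanation of the deviance, echoing the paper's interest in the explanatory power of the discovered rules.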


The interest in poverty and the moral sense of 'helping the poor' are a constant topic in Western culture (Mayo 2009). In recent years, multinational corporations (MNCs) have evolved in their understanding of how social issues, such as poverty alleviation, relate to their fundamental purposes. From a business strategy point of view, 'socially responsible' initiatives are generally born with the dual purpose of attaining social visibility (i.e. marketing) and increasing economic returns. Besides addressing social challenges as part of their corporate social responsibility strategies, MNCs have also begun 'selling to the poor' in emerging markets (Prahalad 2004). A few forward-looking companies consider this base of the pyramid (BOP) market also as a source of innovation and have started to co-create with consumers (Simanis and Hart 2008).


Collisions between different types of road users at intersections form a substantial component of the road toll. This paper presents an analysis of driver, cyclist, motorcyclist and pedestrian behaviour at intersections that involved the application of an integrated suite of ergonomics methods, the Event Analysis of Systemic Teamwork (EAST) framework, to on-road study data. EAST was used to analyse behaviour at three intersections using data derived from an on-road study of driver, cyclist, motorcyclist and pedestrian behaviour. The analysis shows the differences in behaviour and cognition across the different road user groups and pinpoints instances where this may be creating conflicts between different road users. The role of intersection design in creating these differences in behaviour and resulting conflicts is discussed. It is concluded that currently intersections are not designed in a way that supports behaviour across the four forms of road user studied. Interventions designed to improve intersection safety are discussed.


Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises for example when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
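The DBSCAN side of the approach can be sketched with a minimal implementation over a precomputed distance matrix. The distance used below – Jaccard distance over task-label sets – is a deliberately crude, hypothetical stand-in for the fragment similarity used in the article; the fragments and labels are made up.

```python
def dbscan(dist, eps, min_pts):
    """Minimal DBSCAN over a precomputed distance matrix.
    Returns one cluster id per point; -1 marks noise."""
    n = len(dist)
    labels = [None] * n
    cluster = -1
    for p in range(n):
        if labels[p] is not None:
            continue
        neigh = [q for q in range(n) if dist[p][q] <= eps]
        if len(neigh) < min_pts:
            labels[p] = -1          # noise (may later become a border point)
            continue
        cluster += 1
        labels[p] = cluster
        seeds = [q for q in neigh if q != p]
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:     # noise reachable from a core point
                labels[q] = cluster
                continue
            if labels[q] is not None:
                continue
            labels[q] = cluster
            q_neigh = [r for r in range(n) if dist[q][r] <= eps]
            if len(q_neigh) >= min_pts:
                seeds.extend(r for r in q_neigh if labels[r] is None)
    return labels

def jaccard(a, b):
    return 1 - len(a & b) / len(a | b)

# process fragments as sets of task labels: three near-clones and one outlier
frags = [{"a", "b", "c", "d"}, {"a", "b", "c", "e"},
         {"a", "b", "c", "d", "e"}, {"x", "y", "z"}]
dist = [[jaccard(f, g) for g in frags] for f in frags]
```

Each resulting cluster would then be scored with the standardizability measure to decide whether its members are worth replacing with one shared subprocess.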


This paper proposes a recommendation system that supports process participants in taking risk-informed decisions, with the goal of reducing risks that may arise during process execution. Risk reduction involves decreasing both the likelihood of a process fault occurring and its severity. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process and, whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we suggest to the participant the action that minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations and other information elements such as task frequencies. When applied in the context of multiple process instances running concurrently, a second technique is employed that uses integer linear programming to compute the optimal assignment of resources to the tasks to be performed, in order to deal with the interplay between risks related to different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system and its effectiveness has been evaluated using a real-life scenario, in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that the process instances executed concurrently complete with significantly fewer faults and with lower fault severities when the recommendations provided by our recommendation system are taken into account.
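The single-instance side of the idea – traverse a decision tree learned from past executions, then recommend the participant input that minimizes predicted risk – can be sketched as follows. The tree, attribute names and risk values are entirely hypothetical, and the integer-linear-programming step for concurrent instances is omitted; this is not the YAWL components' actual code.

```python
def predict_risk(tree, instance):
    """Traverse a (test, true_branch, false_branch) decision tree built from
    past executions; leaves carry the estimated fault likelihood."""
    while isinstance(tree, tuple):
        test, yes, no = tree
        tree = yes if test(instance) else no
    return tree

def recommend(tree, instance, field, options):
    """Suggest the participant input value that minimizes predicted risk."""
    return min(options, key=lambda v: predict_risk(tree, {**instance, field: v}))

# hypothetical tree for an insurance claim process: high-value claims handled
# by junior staff have historically faulted most often
tree = (lambda i: i["amount"] > 5000,
        (lambda i: i["assignee"] == "junior", 0.40, 0.15),
        0.05)
```

For an 8000-unit claim, assigning a senior worker yields the lower predicted risk, so that assignment would be recommended to the participant.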


This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the direct-follows relations between event labels. This automaton is then pruned of arcs with low relative frequency and used to remove from the log those events not fitting the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms using both artificial logs with different levels of noise, as well as a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model in terms of fitness, appropriateness and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
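The core pipeline – count direct-follows arcs, prune those with low relative frequency, then drop events that would traverse a pruned arc – can be sketched as below. The replay step is a deliberate simplification of the paper's automaton-based filtering, and the threshold and log are made up.

```python
from collections import Counter

def direct_follows(log):
    """Count every 'a directly followed by b' pair across the log."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

def filter_noise(log, threshold=0.1):
    """Prune direct-follows arcs whose relative frequency falls below
    `threshold`, then drop events that would traverse a pruned arc — a
    simplified stand-in for the paper's replay against the pruned automaton."""
    df = direct_follows(log)
    total = sum(df.values())
    kept = {arc for arc, n in df.items() if n / total >= threshold}
    cleaned = []
    for trace in log:
        out = [trace[0]]
        for event in trace[1:]:
            if (out[-1], event) in kept:
                out.append(event)      # event fits the pruned automaton
            # otherwise the event is treated as an outlier and skipped
        cleaned.append(out)
    return cleaned

# nine clean traces plus one containing a spurious logging event "x"
log = [["a", "b", "c"]] * 9 + [["a", "x", "b", "c"]]
```

Running the filter removes the infrequent "x" event while leaving the dominant behavior untouched, which is what lets downstream discovery algorithms produce simpler, better-fitting models.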


Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust, for example, to changes in workload, season, guidelines or regulations. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method strikes a trade-off between classification accuracy and drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales up to the extent that it is applicable for online drift detection.
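The core idea – a statistical test comparing the distributions observed in two consecutive windows – can be sketched with a chi-square statistic over trace-variant counts. This is a simplified stand-in: the paper tests over distributions of runs and sizes the windows adaptively, neither of which is shown here, and the example windows are invented.

```python
from collections import Counter

def chi_square(window1, window2):
    """Chi-square statistic over the trace-variant distributions of two
    adjacent windows. A large value (relative to the chi-square critical
    value for the chosen significance level) signals a candidate drift."""
    c1 = Counter(tuple(t) for t in window1)
    c2 = Counter(tuple(t) for t in window2)
    n1, n2 = len(window1), len(window2)
    stat = 0.0
    for v in set(c1) | set(c2):
        pooled = (c1[v] + c2[v]) / (n1 + n2)   # expected rate under "no drift"
        e1, e2 = pooled * n1, pooled * n2
        stat += (c1[v] - e1) ** 2 / e1 + (c2[v] - e2) ** 2 / e2
    return stat

# before the drift, b precedes c; afterwards the order is swapped
before = [["a", "b", "c"]] * 10
after = [["a", "c", "b"]] * 10
```

Identical windows yield a statistic of zero; a sudden reordering of activities pushes it well above any reasonable critical value, so the change point would be flagged.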


This paper addresses the problem of identifying and explaining behavioral differences between two business process event logs. The paper presents a method that, given two event logs, returns a set of statements in natural language capturing behavior that is present or frequent in one log, while absent or infrequent in the other. This log delta analysis method allows users to diagnose differences between normal and deviant executions of a process or between two versions or variants of a process. The method relies on a novel approach to losslessly encode an event log as an event structure, combined with a frequency-enhanced technique for differencing pairs of event structures. A validation of the proposed method shows that it accurately diagnoses typical change patterns and can explain differences between normal and deviant cases in a real-life log, more compactly and precisely than previously proposed methods.
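The flavor of the output – natural-language statements about behavior frequent in one log but not the other – can be sketched by differencing direct-follows support between the two logs. This is a coarse, hypothetical stand-in: the paper's method losslessly encodes each log as an event structure and differences those, which captures far more behavior than direct-follows arcs.

```python
from collections import Counter

def arc_support(log):
    """Fraction of traces in which each direct-follows arc occurs."""
    support = Counter()
    for trace in log:
        for arc in {(a, b) for a, b in zip(trace, trace[1:])}:
            support[arc] += 1
    return {arc: count / len(log) for arc, count in support.items()}

def delta_statements(log1, log2, gap=0.5):
    """Emit a statement for every arc whose support differs by at least
    `gap` between the two logs."""
    s1, s2 = arc_support(log1), arc_support(log2)
    statements = []
    for (a, b) in set(s1) | set(s2):
        diff = s1.get((a, b), 0) - s2.get((a, b), 0)
        if abs(diff) >= gap:
            where = "the first log" if diff > 0 else "the second log"
            statements.append(f"'{b}' directly follows '{a}' mainly in {where}")
    return sorted(statements)

# normal vs deviant executions of a made-up approval process
normal = [["register", "check", "approve"]] * 8
deviant = [["register", "check", "escalate", "approve"]] * 8
```

On this toy pair, the diagnosis correctly isolates the inserted "escalate" step as the behavioral difference between normal and deviant cases.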


Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.


Business Process Management (BPM) (Dumas et al. 2013) investigates how organizations function and how they can be improved on the basis of their business processes. The starting point for BPM is that organizational performance is a function of process performance. Thus, BPM proposes a set of methods, techniques and tools to discover, analyze, implement, monitor and control business processes, with the ultimate goal of improving these processes. Most importantly, BPM is not just an organizational management discipline. BPM also studies how technology, and particularly information technology, can effectively support the process improvement effort. In the past two decades the field of BPM has been the focus of extensive research, which spans an ever-growing scope and advances technology in various directions. The main international forum for state-of-the-art research in this field is the International Conference on Business Process Management, or “BPM” for short—an annual meeting of the aca ...


Existing techniques for automated discovery of process models from event logs generally produce flat process models. Thus, they fail to exploit the notion of subprocess as well as the error handling and repetition constructs provided by contemporary process modeling notations, such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of hierarchical BPMN models containing interrupting and non-interrupting boundary events and activity markers. The technique employs functional and inclusion dependency discovery techniques in order to elicit a process-subprocess hierarchy from the event log. Given this hierarchy and the projected logs associated with each node in the hierarchy, parent process and subprocess models are then discovered using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. By employing approximate dependency discovery techniques, it is possible to filter out noise in the event log arising, for example, from data entry errors or missing events. A validation with one synthetic and two real-life logs shows that process models derived by the proposed technique are more accurate and less complex than those derived with flat process discovery techniques. Meanwhile, a validation on a family of synthetically generated logs shows that the technique is resilient to varying levels of noise.
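The inclusion-dependency step – finding an attribute whose values are all contained in another attribute's values, so that subprocess events can be linked to their parent cases – can be sketched as follows. The attribute names and columns are hypothetical, and the exact (non-approximate) containment check shown here is the simplest form of what the paper's dependency discovery performs.

```python
def inclusion_dependencies(columns):
    """Find (child, parent) column pairs where every child value also occurs
    in the parent column. Such inclusion dependencies suggest that the child
    attribute references parent-process cases, inducing a hierarchy."""
    deps = []
    for child, cv in columns.items():
        for parent, pv in columns.items():
            if child != parent and set(cv) <= set(pv):
                deps.append((child, parent))
    return deps

# event attribute columns extracted from a log: order case ids, and the
# foreign-key-like attribute carried by line-item events
columns = {
    "order.case_id": [1, 2, 3],
    "item.order_ref": [1, 1, 2],
}
```

Here every `item.order_ref` value appears among the order case ids, so line-item events would be projected into a subprocess log keyed by that attribute, and the parent and subprocess models would then be discovered separately.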