974 results for Markov Renewal Process
Abstract:
Purpose – Context-awareness has emerged as an important principle in the design of flexible business processes. The goal of the research is to develop an approach that extends context-aware business process modeling toward location-awareness. The purpose of this paper is to identify and conceptualize location-dependencies in process modeling.
Design/methodology/approach – This paper uses a pattern-based approach to identify location-dependencies in process models. The authors design specifications for these patterns, present illustrative examples and evaluate the identified patterns through a literature review of published process cases.
Findings – This paper introduces location-awareness as a new perspective to extend context-awareness in BPM research, by introducing relevant location concepts such as location-awareness and location-dependencies. The authors identify five basic location-dependent control-flow patterns that can be captured in process models, and they identify location-dependencies in several existing case studies of business processes.
Research limitations/implications – The authors focus exclusively on the control-flow perspective of process models. Further work needs to extend the research to address location-dependencies in process data or resources, and further empirical work is needed to explore determinants and consequences of the modeling of location-dependencies.
Originality/value – As the existing literature mostly focuses on the broad context of business processes, location in process modeling is still treated as a “second-class citizen” in theory and in practice. This paper discusses the vital role of location-dependencies within business processes. The proposed five basic location-dependent control-flow patterns are novel and useful for explaining location-dependency in business process models. They provide a conceptual basis for further exploration of location-awareness in the management of business processes.
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises, for example, when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, that is, the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
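By way of illustration only (this is not the article's implementation), the following Python sketch clusters process-model fragments into approximate-clone groups from a hypothetical precomputed distance matrix, using the two clustering algorithms named in the abstract; the distance values, threshold and fragment count are invented.

```python
# Illustrative only: cluster process-model fragments into approximate-clone
# groups from a precomputed pairwise distance matrix (e.g. a normalised
# graph-edit distance in [0, 1]); the distances below are invented.
import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering

D = np.array([  # 6 hypothetical fragments; 0.0 = identical
    [0.00, 0.10, 0.15, 0.90, 0.85, 0.95],
    [0.10, 0.00, 0.12, 0.88, 0.90, 0.93],
    [0.15, 0.12, 0.00, 0.92, 0.87, 0.90],
    [0.90, 0.88, 0.92, 0.00, 0.20, 0.25],
    [0.85, 0.90, 0.87, 0.20, 0.00, 0.18],
    [0.95, 0.93, 0.90, 0.25, 0.18, 0.00],
])

# DBSCAN: fragments with at least min_samples neighbours within eps form a cluster.
dbscan_labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(D)

# Hierarchical agglomerative clustering, cut at the same distance threshold
# (the keyword is `metric` in recent scikit-learn, `affinity` in older releases).
hac_labels = AgglomerativeClustering(
    n_clusters=None, metric="precomputed", linkage="average",
    distance_threshold=0.3,
).fit_predict(D)

print(dbscan_labels)  # e.g. [0 0 0 1 1 1]
print(hac_labels)
```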
Abstract:
This paper develops maximum likelihood (ML) estimation schemes for finite-state semi-Markov chains in white Gaussian noise. We assume that the semi-Markov chain is characterised by transition probabilities of known parametric form with unknown parameters. We reformulate this hidden semi-Markov model (HSM) problem in the scalar case as a two-vector homogeneous hidden Markov model (HMM) problem in which the state consists of the signal augmented by the time since the last transition. With this reformulation we apply the Expectation-Maximisation (EM) algorithm to obtain ML estimates of the transition probability parameters, Markov state levels and noise variance. To demonstrate our proposed schemes, motivated by neuro-biological applications, we use a damped sinusoidal parameterised function for the transition probabilities.
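As a rough illustration of the reformulation described in the abstract (not the authors' exact scheme), the sketch below lifts a two-level semi-Markov chain with a duration-dependent, damped-sinusoidal jump probability to a homogeneous HMM whose state is the pair (level, time since last transition); the parameter names and values and the duration cap are hypothetical.

```python
# Illustrative reformulation only: lift a 2-level semi-Markov chain with a
# damped-sinusoidal, duration-dependent jump probability to a homogeneous HMM
# whose state is (level, time since last transition). Parameters are made up.
import numpy as np

N_LEVELS, D_MAX = 2, 50            # signal levels and duration truncation
a, b, c = 0.8, 0.05, 0.3           # hypothetical damped-sinusoid parameters

def jump_prob(d):
    """Probability of leaving the current level after d steps in it."""
    return float(np.clip(a * np.exp(-b * d) * np.abs(np.sin(c * d)), 0.0, 1.0))

# Augmented-state index: (level i, duration d) -> i * D_MAX + d
A = np.zeros((N_LEVELS * D_MAX, N_LEVELS * D_MAX))
for i in range(N_LEVELS):
    for d in range(D_MAX):
        p = jump_prob(d + 1)
        stay = i * D_MAX + min(d + 1, D_MAX - 1)    # remain, duration grows
        jump = (1 - i) * D_MAX + 0                  # switch level, reset duration
        A[i * D_MAX + d, stay] += 1.0 - p
        A[i * D_MAX + d, jump] += p

assert np.allclose(A.sum(axis=1), 1.0)  # a valid, time-homogeneous HMM
# EM on this augmented HMM (with Gaussian observation noise on the level
# values) would then estimate (a, b, c), the levels and the noise variance.
```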
Abstract:
In this paper we propose and study low-complexity algorithms for on-line estimation of hidden Markov model (HMM) parameters. The estimates approach the true model parameters as the measurement noise approaches zero, but otherwise give improved estimates, albeit with bias. On a finite data set in the high noise case, the bias may not be significantly more severe than for a higher complexity asymptotically optimal scheme. Our algorithms require O(N^3) calculations per time instant, where N is the number of states. Previous algorithms based on earlier hidden Markov model signal processing methods, including the expectation-maximisation (EM) algorithm, require O(N^4) calculations per time instant.
Abstract:
In this chapter we describe a critical fairytales unit taught to 4.5 to 5.5 year olds in a context of intensifying pressure to raise literacy achievement. The unit was infused with lessons on reinterpreted fairytales followed by process drama activities built around a sophisticated picture book, Beware of the Bears (MacDonald, 2004). The latter entailed a text analytic approach to critical literacy derived from systemic functional linguistics (Halliday, 1978; Halliday & Matthiessen, 2004). This approach provides a way of analysing how words and discourse are used to represent the world in a particular way and shape reader relations with the author in a particular field (Janks, 2010).
Abstract:
Process models describe someone’s understanding of processes. Processes can be described using unstructured, semi-formal or diagrammatic representation forms. These representations are used in a variety of task settings, ranging from understanding processes to executing or improving processes, with the implicit assumption that the chosen representation form will be appropriate for all task settings. We explore the validity of this assumption by empirically examining the preference for different process representation forms depending on the task setting and cognitive style of the user. Based on data collected from 120 business school students, we show that preferences for process representation formats vary depending on application purpose and the cognitive styles of the participants. However, users consistently prefer diagrams over other representation formats. Our research informs a broader research agenda on task-specific applications of process modeling. We offer several recommendations for further research in this area.
Abstract:
To effectively manage the challenges faced by construction organisations in a fast-changing business environment, many organisations are attempting to integrate knowledge management (KM) into their business operations. KM activities interact with each other and form a process which receives input from its internal business environment and produces outputs that should be justified by its business performance. This paper aims to provide further understanding of the dynamic nature of the KM process. Through a combination of path analysis and system dynamics simulation, this study found that: 1) improved business performance enables active KM activities and provides feedback and guidance for formulating learning-based policies; and 2) effective human resource recruitment policies can enlarge the pool of individual knowledge, which leads to a more conducive internal business environment as well as a higher level of KM activity. Consequently, the desired business performance level can be reached within a shorter time frame.
Abstract:
Today’s information systems log vast amounts of data. These collections of data (implicitly) describe events (e.g. placing an order or taking a blood test) and, hence, provide information on the actual execution of business processes. The analysis of such data provides an excellent starting point for business process improvement. This is the realm of process mining, an area which has provided a repertoire of many analysis techniques. Despite the impressive capabilities of existing process mining algorithms, dealing with the abundance of data recorded by contemporary systems and devices remains a challenge. Of particular importance is the capability to guide the meaningful interpretation of “oceans of data” by process analysts. To this end, insights from the field of visual analytics can be leveraged. This article proposes an approach where process states are reconstructed from event logs and visualised in succession, leading to an animated history of a process. This approach is customisable in how a process state, partially defined through a collection of activity instances, is visualised: one can select a map and specify a projection of events on this map based on the properties of the events. This paper describes a comprehensive implementation of the proposal. It was realised using the open-source process mining framework ProM. Moreover, this paper also reports on an evaluation of the approach conducted with Suncorp, one of Australia’s largest insurance companies.
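To make the idea of reconstructing process states from an event log concrete (this sketch is independent of the ProM-based implementation reported in the article), the following Python fragment computes the state at a point in time as the set of activity instances that have started but not yet completed; the log fields and example events are invented.

```python
# Illustrative only: the state of a process at time t, reconstructed from an
# event log, is the set of activity instances that have started but not yet
# completed. Successive snapshots give the frames of an animated history.
from datetime import datetime

log = [  # (case id, activity, lifecycle transition, timestamp)
    ("c1", "Lodge claim",  "start",    datetime(2024, 1, 1, 9, 0)),
    ("c1", "Lodge claim",  "complete", datetime(2024, 1, 1, 9, 30)),
    ("c2", "Lodge claim",  "start",    datetime(2024, 1, 1, 9, 45)),
    ("c1", "Assess claim", "start",    datetime(2024, 1, 1, 10, 0)),
]

def state_at(log, t):
    """Activity instances active at time t: started, not yet completed."""
    active = set()
    for case, activity, transition, ts in sorted(log, key=lambda e: e[3]):
        if ts > t:
            break
        if transition == "start":
            active.add((case, activity))
        elif transition == "complete":
            active.discard((case, activity))
    return active

print(state_at(log, datetime(2024, 1, 1, 9, 50)))   # {('c2', 'Lodge claim')}
```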
Abstract:
In 2007 some of us were fortunate enough to be in Dundee for the Royal College of Nursing’s Annual International Nursing Research Conference. A highlight of that conference was an enactment of the process and context debate. The chair asked for volunteers and various members of the audience came forward, giving the impression that they were nurses and that it was a chance selection. The audience accepted these individuals as their representatives and, once they had gathered on stage, we all expected the debate to begin. But the large number of researchers in the audience gave little thought to the selection and recruitment process they had just witnessed. Then the selected representatives stood up and sang a cappella. Suddenly the context was different and we questioned the process. The point was made: process or context, or both?
Abstract:
In-memory databases have become a mainstay of enterprise computing, offering significant performance and scalability boosts for online analytical and (to a lesser extent) transactional processing, as well as improved prospects for integration across different applications through an efficient shared database layer. Significant research and development has been undertaken over several years concerning the data management considerations of in-memory databases. However, limited insights are available on the impacts on applications and their supportive middleware platforms, and how they need to evolve to fully function through, and leverage, in-memory database capabilities. This paper provides a first, comprehensive exposition of how in-memory databases impact Business Process Management, as a mission-critical and exemplary model-driven integration and orchestration middleware. Through it, we argue that in-memory databases will render some prevalent uses of legacy BPM middleware obsolete, but also open up exciting possibilities for tighter application integration, better process automation performance and some entirely new BPM capabilities such as process-based application customization. To validate the feasibility of in-memory BPM, we develop a surprisingly simple BPM runtime embedded in SAP HANA that provides BPMN-based process automation capabilities.
Abstract:
Accurate process model elicitation continues to be a time consuming task, requiring skill on the part of the interviewer to extract explicit and tacit process information from the interviewee. Many errors occur in this elicitation stage that would be avoided by better activity recall, more consistent specification methods and greater engagement in the elicitation process by interviewees. Metasonic GmbH has developed a process elicitation tool for their process suite. As part of a research engagement with Metasonic, staff from QUT, Australia have developed a 3D virtual world approach to the same problem, viz. eliciting process models from stakeholders in an intuitive manner. This book chapter tells the story of how QUT staff developed a 3D Virtual World tool for process elicitation, took the outcomes of their research project to Metasonic for evaluation, and finally, Metasonic’s response to the initial proof of concept.
Abstract:
This chapter sets out to identify patterns at play in boardroom discussions around the design and adoption of an accountability system in a nonprofit organisation. To this end, it contributes to the scarce literature showing the backstage of management accounting systems (Berry, 2005), investment policy determination (Kreander, Beattie & McPhail, 2009; Kreander, McPhail & Molyneaux, 2004), financial planning strategizing (Parker, 2004) and budgeting (Irvine, 2005). The paucity of publications is due to confidentiality issues preventing attendance at those meetings (Irvine, 2003; Irvine & Gaffikin, 2006). Moreover, the implementation of a new control technology often occurs over a long period of time that might exceed the duration of a research project (Quattrone & Hopper, 2001, 2005). Recent trends of having research funded by grants from private institutions or charities have tended to reduce the length of such undertakings to a few months, or rarely more than a couple of years (Parker, 2013).
Abstract:
Fashion Thinking: Creative Approaches to the Design Process, F. Dieffenbacher (2013) London: AVA, 224 pp., ISBN: 9782940411719, p/bk, $79.99
Abstract:
Identifying appropriate decision criteria and making optimal decisions in a structured way is a complex process. This paper presents an approach for doing this in the form of a hybrid Quality Function Deployment (QFD) and Cybernetic Analytic Network Process (CANP) model for project manager selection. This involves the use of QFD to translate the owner's project management expectations into selection criteria and the CANP to weight the expectations and selection criteria. The supermatrix approach then prioritises the candidates with respect to the overall decision-making goal. A case study is used to demonstrate the use of the model in selecting a renovation project manager. This involves the development of 18 selection criteria in response to the owner's three main expectations of time, cost and quality.
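As a purely illustrative sketch of the supermatrix prioritisation step mentioned in the abstract (the numbers and cluster structure are invented and unrelated to the case study's 18 criteria), the following Python fragment raises a column-stochastic weighted supermatrix to high powers and averages consecutive powers, a simple stand-in for the limit supermatrix used in ANP, to read off candidate priorities.

```python
# Illustrative only (invented numbers, not from the case study): derive
# candidate priorities from the limit of a column-stochastic weighted
# supermatrix over {goal, 2 criteria, 2 candidates}.
import numpy as np

W = np.array([
    # goal  crit1 crit2 cand1 cand2   (each column sums to 1)
    [0.0,   0.0,  0.0,  0.0,  0.0],  # goal
    [0.6,   0.0,  0.0,  0.5,  0.5],  # criterion 1 (e.g. time)
    [0.4,   0.0,  0.0,  0.5,  0.5],  # criterion 2 (e.g. cost)
    [0.0,   0.8,  0.3,  0.0,  0.0],  # candidate 1
    [0.0,   0.2,  0.7,  0.0,  0.0],  # candidate 2
])

# Raw powers of a cyclic supermatrix oscillate, so average two consecutive
# high powers as a simple stand-in for the Cesàro limit used in ANP.
P = np.linalg.matrix_power(W, 200)
L = (P + P @ W) / 2.0

priorities = L[3:, 0] / L[3:, 0].sum()   # candidate rows, goal column
print(priorities)                         # ~[0.55, 0.45]: candidate 1 preferred
```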