897 results for Analytical hierarchical process
Abstract:
Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently. Also, many healthcare organisations still have stand-alone systems that are not integrated for information management and decision-making. This shows there is a need for an effective system to capture, collate and distribute this health data. Therefore, implementing the data warehouse concept in healthcare is potentially one of the solutions for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors such as the engineering, defence and retail sectors. The research problem addressed is: "How can data warehousing assist the decision-making process in healthcare?" To address this problem, the investigation was narrowed to focus on a cardiac surgery unit. This research used the cardiac surgery unit at the Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time, the interaction between the cardiac surgery unit information system and other units is minimal. There is only limited and basic two-way interaction with other clinical and administrative databases at TPCH that support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals with the current information systems and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, the researcher proposed and developed a suitable data warehouse prototype based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)]. The goal is to improve the current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, providing potentially better information for decision-making. Based on responses to the questionnaire and the consulted literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or a federated data warehouse model, as discussed in the many consulted publications. The data warehouse prototype was developed using SAS enterprise data integration studio 4.2 and the data was analysed using SAS enterprise edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users. This was achieved by using output created from the data warehouse prototype as examples of the data desired and possible in a data warehouse environment. According to the feedback collected from the end users, implementation of a data warehouse was seen to be a useful tool to inform management options, provide a more complete representation of factors related to a decision scenario and potentially reduce information product development time. However, many constraints existed in this research.
Examples include technical issues such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers, as well as Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the process of warehouse model development, necessitating an incremental approach. This highlights the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, the selected centralised model can be upgraded to an enterprise-wide architecture by integrating additional hospital units' databases.
Abstract:
In my last Column this year, I want to draw your attention to some current efforts in the space of BPM research and education that try to move BPM thinking forward into new areas of application. I am subsuming these efforts under the notion of x-aware BPM.
Abstract:
Business process modeling as a practice and research field has received great attention in recent years. However, while related artifacts such as models, tools or grammars have substantially matured, comparatively little is known about the activities that are conducted as part of the actual act of process modeling. In particular, the key role of the modeling facilitator has not been researched to date. In this paper, we propose a new theory-grounded, conceptual framework describing four facets (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that can be used by a facilitator. These facets and their associated behavioral styles have been empirically explored via in-depth interviews and additional questionnaires with experienced process analysts. We develop a proposal for an emerging theory for describing, investigating, and explaining different behaviors associated with Business Process Modeling Facilitation. This theory is an important sensitizing vehicle for examining processes and outcomes of process modeling endeavors.
Abstract:
As business process management technology matures, organisations acquire more and more business process models. The resulting collections can consist of hundreds, even thousands of models, and their management poses real challenges. One of these challenges concerns model retrieval, where support should be provided for the formulation and efficient execution of business process model queries. As queries based only on structural information cannot deal with all querying requirements in practice, there should also be support for queries that require knowledge of process model semantics. In this paper, we formally define a process model query language that is based on semantic relationships between tasks. This query language is independent of the particular process modelling notation used, but we demonstrate how it can be used in the context of Petri nets by showing how the semantic relationships can be determined for these nets in such a way that state space explosion is avoided as much as possible. An experiment with three large process model repositories shows that queries expressed in our language can be evaluated efficiently.
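As a rough illustration of the idea of querying a repository by semantic relationships between tasks rather than by structure alone, the Python sketch below assumes each model has been reduced to a precomputed "eventually follows" relation (here derived naively from example traces, not from a Petri net as in the paper) and evaluates a simple query over a toy repository. The relation name, helper functions and model names are hypothetical, not the paper's query language.

```python
# Minimal sketch: querying a process model repository by an "eventually
# follows" relation between tasks. Here the relation is derived naively from
# example traces; the paper determines such relations from the nets themselves
# while avoiding state space explosion.

def eventually_follows(traces):
    """Derive the 'eventually follows' relation from a set of example traces."""
    relation = set()
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                relation.add((a, b))
    return relation

# A toy "repository": model name -> example traces (a stand-in for real models).
repository = {
    "claim_handling_v1": [["register", "check", "decide", "pay"],
                          ["register", "check", "decide", "reject"]],
    "claim_handling_v2": [["register", "decide", "pay"]],
}

def query(repo, a, b):
    """Return the models in which task a can eventually be followed by task b."""
    return [name for name, traces in repo.items()
            if (a, b) in eventually_follows(traces)]

print(query(repository, "check", "pay"))     # -> ['claim_handling_v1']
print(query(repository, "register", "pay"))  # -> both models
```

A real implementation would compute such relations directly from the model rather than from enumerated traces, which is exactly the state space problem the paper's approach is designed to sidestep.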
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet, design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
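As a hedged sketch of the generic mechanics behind such threshold derivation (not the authors' specific adaptation of the ROC curves method), the snippet below scans candidate cut-offs for a single structural measure against binary error labels and selects the threshold that maximises Youden's J statistic (TPR minus FPR); the data are invented.

```python
# Sketch: deriving a threshold for one structural measure (e.g., number of
# nodes) from models labelled as erroneous (1) or error-free (0).
# This follows the generic ROC idea (maximise TPR - FPR); the paper's
# adaptation of the method may differ.

def roc_threshold(values, labels):
    positives = sum(labels)
    negatives = len(labels) - positives
    best_threshold, best_j = None, float("-inf")
    for cut in sorted(set(values)):
        # Predict "error-prone" when the measure is at or above the cut-off.
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        tpr = tp / positives if positives else 0.0
        fpr = fp / negatives if negatives else 0.0
        j = tpr - fpr                      # Youden's J statistic
        if j > best_j:
            best_threshold, best_j = cut, j
    return best_threshold, best_j

# Invented example data: measure values per model and whether the model had errors.
sizes  = [12, 18, 25, 31, 40, 44, 52, 60, 75, 90]
errors = [ 0,  0,  0,  1,  0,  1,  1,  1,  1,  1]

print(roc_threshold(sizes, errors))  # -> (44, 0.833...) on this toy data
```

On the toy data the best cut-off is 44; in the paper, thresholds derived from one model collection are then validated on an independent collection.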
Abstract:
Technologies and languages for integrated processes are a relatively recent innovation, and over that period many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to any middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally forced modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to an inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality. This has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and this can lead to customer dissatisfaction and to integration projects failing to reach their potential. Standards for process integration share fundamental flaws similar to those of the languages and technologies, and standards are also in direct competition with one another, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology-agnostic manner. This way, process modelling can be conceptually complete without becoming stuck in a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware. They provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling, a formal modelling technique, was used to define the syntax of a proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware. This framework provides a conceptual foundation upon which a process integration language can be built. The thesis defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. The thesis provides a comprehensive set of process integration patterns, which constitute a benchmark for the kinds of problems a process integration language must support. The thesis proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital, based on the ideas in this thesis, is briefly described at the end of the thesis.
Abstract:
Crisis holds the potential for profound change in organizations and industries. The past 50 years of crisis management highlight key shifts in crisis practice, creating opportunities for multiple theories and research tracks. Defining crises such as the Tylenol tampering case, the Exxon Valdez spill, and the September 11 terrorist attacks have influenced or challenged the principles of best practice in crisis communication in public relations. This study traces the development of crisis process and practice by identifying shifts in crisis research and models and mapping these against key management theories and practices. The findings define three crisis domains: crisis planning, building and testing predictive models, and mapping and measuring external environmental influences. These crisis domains mirror but lag the evolution of management theory, suggesting challenges for researchers to reshape the research agenda to close the gap and lead the next stage of development in the field of crisis communication for effective organizational outcomes.
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape functions at the field points, radial basis functions (RBFs) in the local support domain are used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
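The following material-point sketch only illustrates the secant (effective) modulus idea under strain control, assuming a Ramberg-Osgood uniaxial curve and the common deformation-theory relations E_s = sigma/epsilon and nu_s = 1/2 - (1/2 - nu)E_s/E; the RBF point collocation discretisation and the field-level projection described in the abstract are omitted, and the material constants are invented.

```python
# Material-point sketch: effective (secant) moduli for a prescribed strain,
# assuming a Ramberg-Osgood uniaxial law eps = sig/E + alpha*(sig0/E)*(sig/sig0)**n.
# The method in the paper works on field variables with RBF collocation;
# this only illustrates the secant-modulus bookkeeping under strain control.

def secant_moduli(eps, E=200e3, nu=0.3, sig0=250.0, alpha=0.002, n=5.0):
    # Solve the uniaxial curve for stress at the prescribed strain by bisection.
    def total_strain(sig):
        return sig / E + alpha * (sig0 / E) * (sig / sig0) ** n

    lo, hi = 0.0, E * eps            # stress cannot exceed the elastic estimate
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if total_strain(mid) < eps:
            lo = mid
        else:
            hi = mid
    sig = 0.5 * (lo + hi)

    E_s = sig / eps                              # secant Young's modulus
    nu_s = 0.5 - (0.5 - nu) * E_s / E            # secant Poisson's ratio
    G_s = E_s / (2.0 * (1.0 + nu_s))             # effective shear modulus
    return E_s, nu_s, G_s

print(secant_moduli(0.005))  # strain well past yield -> reduced E_s, nu_s near 0.5
```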
Abstract:
Providing effective IT support for business processes has become crucial for enterprises to stay competitive. In response to this need, numerous process support paradigms (e.g., workflow management, service flow management, case handling), process specification standards (e.g., WS-BPEL, BPML, BPMN), process tools (e.g., ARIS Toolset, Tibco Staffware, FLOWer), and supporting methods have emerged in recent years. Summarized under the term “Business Process Management” (BPM), these paradigms, standards, tools, and methods have become a success-critical instrument for improving process performance.
Abstract:
This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple, yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as “a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. The model comprises four dimensions: ‘Individual Impact’, ‘Organizational Impact’, ‘Information Quality’ and ‘System Quality’. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the goal of ITPS, “to develop the most widely employed model”, this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing its known limitations. One of these limitations relates to the extent of the model's external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, with packaged software chosen as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia. The overarching research question (managerial question) of this research is: “How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?” With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that the selection of measures needs to be made purposely for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is: “Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?” [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs, each represented by formative measures or indicators. Formative measures are known as composite variables because these measures make up, or form, the construct, or, in this case, the dimension in the IS-Impact model. These formative measures define different aspects of the dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure.
In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (i.e., Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006; Petter, Straub and Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables, including identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each with a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also shows that a significant new measure emerged from the context; this new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government involved in the first survey). A total of 254 valid responses were used in the data analysis. Data were analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This study is the first research that extends the complete IS-Impact model in a new context that differs in terms of nationality, language and the type of information system (IS). The main contribution of this research is a comprehensive, up-to-date IS-Impact model, which has been validated in the new context. The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing IS evaluation research by extending it to the Malaysian context. A further contribution is a validated Malaysian-language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible.
The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in ‘identification through measurement relations’; (2) a stronger, multi-item ‘Satisfaction’ construct, employed in ‘identification through structural relations’; (3) an alternative version of the main survey instrument in which items are randomized (rather than blocked) for comparison with the main survey data, in order to address possible common method variance (no significant differences between these two survey instruments were observed); (4) a demonstration of a validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involved unidimensional constructs); (5) testing for the presence of suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution of each dimension in a multidimensional model.
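The abstract refers to guidelines for formative construct validation; one routine step in such guidelines (e.g., Petter, Straub and Rai, 2007) is checking collinearity among formative indicators. The numpy sketch below computes variance inflation factors from the indicator correlation matrix on simulated data; it is not the study's analysis, and the variable names and cut-offs are illustrative.

```python
# Sketch: one standard step in validating formative indicators -- checking
# collinearity via variance inflation factors (VIF). High VIFs (e.g., above
# 3.3 or 5, depending on the guideline followed) suggest redundant formative
# measures. The data below are simulated; this is not the study's analysis.

import numpy as np

rng = np.random.default_rng(0)
n = 254                                   # e.g., number of survey responses
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)   # deliberately correlated with x1
x3 = rng.normal(size=n)
indicators = np.column_stack([x1, x2, x3])

corr = np.corrcoef(indicators, rowvar=False)
vif = np.diag(np.linalg.inv(corr))        # VIF_j = j-th diagonal of R^-1

for name, v in zip(["x1", "x2", "x3"], vif):
    print(f"{name}: VIF = {v:.2f}")
```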
Abstract:
Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remains a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove the associativity of a composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to deduce tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
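As a loose illustration of the role an associative composition operator plays in crisp hierarchical task composition, the sketch below composes workflow tasks sequentially and keeps composites in flattened form, so that grouping cannot affect the result; it is not the paper's formalism, and all task names are invented.

```python
# Illustrative only: a sequential composition of workflow tasks that is
# associative by construction, because composites are kept flattened.
# This mirrors the kind of algebraic property a hierarchical task framework
# can rely on; it is not the paper's deduction system.

from typing import Tuple

Workflow = Tuple[str, ...]   # a workflow is a flat sequence of atomic task names

def task(name: str) -> Workflow:
    """An atomic task is a one-step workflow."""
    return (name,)

def compose(a: Workflow, b: Workflow) -> Workflow:
    """Sequential composition; tuple concatenation, so grouping cannot matter."""
    return a + b

load, drill, inspect = task("load"), task("drill"), task("inspect")

left  = compose(compose(load, drill), inspect)   # (load ; drill) ; inspect
right = compose(load, compose(drill, inspect))   # load ; (drill ; inspect)
assert left == right == ("load", "drill", "inspect")

# A named sub-workflow ("hierarchy") expands to the same flat plan, so results
# about the flat plan carry over to the hierarchical view.
machining = compose(load, drill)
assert compose(machining, inspect) == left
print(left)
```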
Abstract:
Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate needs to be properly managed. Therefore, a variety of techniques for managing large collections of business process models is being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose.