860 results for Analytic Hierarchy Process (AHP)
Abstract:
The absence of qualitative analysis in mainstream research on eating disorders is discussed in the following article as being a weakness in developing theory and clinical practice. This article includes an analysis of interviews with British healthcare workers who manage anorexic patients. This analysis presents an example of qualitative methodology in the form of discourse analysis, which is argued to provide a systematic, yet flexible approach to research on eating disorders. The overwhelming prevalence of anorexia nervosa in women is specifically examined within the context of the identification of the "discourse of femininity." The research findings are discussed in relation to the use of discursive practices that contribute to the maintenance and reproduction of clinical processes and their relative efficacy.
Abstract:
Video presented as part of APCCM 2010 conference in Brisbane Australia. Video illustrating the main components of an Open Simulator BPMN Editor we have developed.
Abstract:
Video presented as part of ACIS 2009 conference in Melbourne Australia. Movie showing the execution of a small prototype hyperbolic projection of a process model. Useful for the traversal of large process models, as the entire hierarchy can be visualised as a whole, maintaining a sense of context while moving through such complex topologies. Related ACIS Conference paper is at: http://eprints.qut.edu.au/29296/
Abstract:
Video presented as part of Smart Services CRC Participants meeting. A short demonstration video of our ideas for using Business Process Software in Virtual Worlds for Process Education.
Abstract:
Situated on Youtube, and shown in various locations. A video showing members of the QUT BPM research group using a Mimio pen-based tabletop system for collaborative process modelling.
Abstract:
Predicate encryption (PE) is a new primitive which supports flexible control over access to encrypted data. In PE schemes, users' decryption keys are associated with predicates f and ciphertexts encode attributes a that are specified during the encryption procedure. A user can successfully decrypt if and only if f(a) = 1. In this thesis, we will investigate several properties that are crucial to PE. We focus on expressiveness of PE, Revocable PE and Hierarchical PE (HPE) with forward security. For all proposed systems, we provide a security model and analysis using the widely accepted computational complexity approach. Our first contribution is to explore the expressiveness of PE. Existing PE supports a wide class of predicates such as conjunctions of equality, comparison and subset queries, disjunctions of equality queries, and more generally, arbitrary combinations of conjunctive and disjunctive equality queries. We advance PE to evaluate more expressive predicates, e.g., disjunctive comparison or disjunctive subset queries. Such expressiveness is achieved at the cost of computational and space overhead. To improve the performance, we appropriately revise the PE scheme to reduce the computational and space cost. Furthermore, we propose a heuristic method to reduce disjunctions in the predicates. Our schemes are proved in the standard model. We then introduce the concept of Revocable Predicate Encryption (RPE), which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We propose two RPE schemes. Our first scheme, termed Attribute-Hiding RPE (AH-RPE), offers attribute-hiding, which is the standard PE property.
Our second scheme, termed Full-Hiding RPE (FH-RPE), offers even stronger privacy guarantees, i.e., apart from possessing the Attribute-Hiding property, the scheme also ensures that no information about revoked users is leaked from a given ciphertext. The proposed schemes are also proved to be secure under well established assumptions in the standard model. Secrecy of decryption keys is an important pre-requisite for security of (H)PE and compromised private keys must be immediately replaced. The notion of Forward Security (FS) reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. We present the first Forward-Secure Hierarchical Predicate Encryption (FS-HPE) that is proved secure in the standard model. Our FS-HPE scheme offers some desirable properties: time-independent delegation of predicates (to support dynamic behavior for delegation of decrypting rights to new users), local update for users' private keys (i.e., no master authority needs to be contacted), forward security, and the scheme's encryption process does not require knowledge of predicates at any level including when those predicates join the hierarchy.
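The decryption condition f(a) = 1 described in the abstract above can be illustrated with a minimal sketch. This models only the access-control semantics of predicate encryption with a conjunctive equality predicate; a real PE scheme hides the attributes and payload cryptographically (e.g. via bilinear pairings), which is not shown here. All names (`make_predicate`, `decrypt`, the attribute values) are illustrative, not from the thesis.

```python
# Conceptual sketch of the PE decryption rule: decryption succeeds iff the
# key's predicate f accepts the ciphertext's attributes a, i.e. f(a) = 1.

def make_predicate(required):
    """Conjunctive equality predicate: f(a) = 1 iff every required
    attribute/value pair appears among the ciphertext attributes a."""
    def f(attributes):
        return 1 if all(attributes.get(k) == v for k, v in required.items()) else 0
    return f

def decrypt(ciphertext, key_predicate):
    """Return the payload iff f(a) = 1; otherwise decryption fails."""
    if key_predicate(ciphertext["attributes"]) == 1:
        return ciphertext["payload"]
    return None  # predicate not satisfied: no decryption

ct = {"attributes": {"dept": "finance", "level": "staff"}, "payload": "report"}
decrypt(ct, make_predicate({"dept": "finance"}))  # succeeds: f(a) = 1
decrypt(ct, make_predicate({"dept": "hr"}))       # fails: f(a) = 0
```

A key for the predicate {"dept": "finance"} opens any ciphertext whose attributes include that pair; the disjunctive and comparison predicates the thesis targets would replace the `all(...)` test with a richer evaluation.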
Abstract:
Existing compliance management frameworks (CMFs) offer a multitude of compliance management capabilities, which makes it difficult for enterprises to decide on the suitability of a framework. Making such a decision requires a deep understanding of the functionalities of a framework. Gaining this understanding is itself a difficult task which, in turn, requires specialised tools and methodologies for evaluation. Current compliance research lacks such tools and methodologies for evaluating CMFs. This paper reports a methodological evaluation of existing CMFs based on pre-defined evaluation criteria. Our evaluation highlights what existing CMFs offer and what they cannot. It also raises various open questions and discusses the challenges in this direction.
Abstract:
This article describes the architecture of a monitoring component for the YAWL system. The proposed architecture is based on sensors and is realised as a YAWL service, so that it integrates seamlessly with the YAWL system. The architecture is generic and applicable in different contexts of business process monitoring. Finally, it was tested and evaluated in the context of risk monitoring for business processes.
Abstract:
This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both the four-roller and six-roller milling units, it is primarily developed for the six-roller mills which are widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did have some incorrect assumptions. The study aimed to eliminate all the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components in the milling process which have not been considered in the previous models. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data and the variation in the mill performance parameters was observed and studied. Some case studies were undertaken to study the effect of fibre in juice streams, juice in cush return and imbibition% fibre on extraction performance of the milling process. It was concluded from the study that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model. New empirical relations have to be developed before the model can be applied with confidence.
Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides some additional insight regarding the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model to study the overall performance of the milling train. Thirdly, the developed models were incorporated in a proprietary software package "SysCAD" for advanced operational efficiency and for availability in the "whole of factory" model. The MILEX model was developed in SysCAD software to represent a single milling unit. Eventually the entire milling train and the juice screen were developed in SysCAD using a series of different controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file and reports can be generated in Excel sheets. The flexibility of the software, ease of use and other advantages are described broadly in the relevant chapter. The MILEX model is developed in static mode and dynamic mode. The application of the dynamic mode of the model is still in progress.
Abstract:
Conceptual modelling supports developers and users of information systems in areas of documentation, analysis or system redesign. The ongoing interest in the modelling of business processes has led to a variety of different grammars, raising the question of the quality of these grammars for modelling. An established way of evaluating the quality of a modelling grammar is by means of an ontological analysis, which can determine the extent to which grammars contain construct deficit, overload, excess or redundancy. While several studies have shown the relevance of most of these criteria, predictions about construct redundancy have yielded inconsistent results in the past, with some studies suggesting that redundancy may even be beneficial for modelling in practice. In this paper we seek to contribute to clarifying the concept of construct redundancy by introducing a revision to the ontological analysis method. Based on the concept of inheritance we propose an approach that distinguishes between specialized and distinct construct redundancy. We demonstrate the potential explanatory power of the revised method by reviewing and clarifying previous results found in the literature.
Abstract:
Risk identification is one of the most challenging stages in the risk management process. Conventional risk management approaches provide little guidance and companies often rely on the knowledge of experts for risk identification. In this paper we demonstrate how risk indicators can be used to predict process delays via a method for configuring so-called Process Risk Indicators (PRIs). The method learns suitable configurations from past process behaviour recorded in event logs. To validate the approach we have implemented it as a plug-in of the ProM process mining framework and have conducted experiments using various data sets from a major insurance company.
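The idea of configuring a risk indicator from past process behaviour can be sketched as follows. This is a deliberately simplified illustration, not the paper's PRI configuration method: the indicator here is plain case duration, the threshold rule (mean plus one standard deviation) is an assumption, and the event-log shape and all names are hypothetical.

```python
# Toy sketch: learn a delay threshold for one indicator (case duration)
# from past cases in an event log, then flag running cases that exceed it.
from statistics import mean, stdev

def case_durations(log):
    """log: {case_id: [(activity, timestamp_hours), ...]} in event order;
    a case's duration is its last event time minus its first."""
    return {cid: events[-1][1] - events[0][1] for cid, events in log.items()}

def learn_threshold(durations, k=1.0):
    """Configure the indicator from historical behaviour: a case is flagged
    as at risk of delay once it runs longer than mean + k * stddev."""
    values = list(durations.values())
    return mean(values) + k * stdev(values)

# Illustrative historical log (timestamps in hours from case start).
past = {"c1": [("lodge", 0), ("settle", 10)],
        "c2": [("lodge", 0), ("settle", 12)],
        "c3": [("lodge", 0), ("settle", 50)]}   # an unusually slow case
threshold = learn_threshold(case_durations(past))
```

A running case whose elapsed time exceeds `threshold` would raise the indicator; the paper's method additionally searches for suitable indicator configurations rather than fixing one statistic in advance.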
Abstract:
The ability to steer business operations in alignment with the true origins of costs, and to be informed about this on a real-time basis, allows businesses to increase profitability. In most organisations however, high-level cost-based managerial decisions are still being made separately from process-related operational decisions. In this paper, we describe how process-related decisions at the operational level can be guided by cost considerations and how these cost-informed decision rules can be supported by a workflow management system. The paper presents the conceptual framework together with data requirements and technical challenges that need to be addressed to realise cost-informed workflow execution. The feasibility of our approach is demonstrated using a prototype implementation in the YAWL workflow environment.
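A cost-informed decision rule of the kind described above can be sketched in a few lines. This is a hypothetical illustration of routing a work item at an exclusive choice based on expected cost; the resource names, rates, and rule shape are invented for the example, and the YAWL prototype's actual mechanism is not shown.

```python
# Sketch of a cost-informed routing rule at an exclusive (XOR) split:
# route the work item to the branch with the lowest expected cost.

def cost_informed_choice(branches):
    """branches: {branch_name: {"rate": cost per hour of the resource,
                                "hours": expected processing time}}."""
    expected = {name: b["rate"] * b["hours"] for name, b in branches.items()}
    return min(expected, key=expected.get)

# Illustrative figures: the senior clerk is faster but costs more overall.
choice = cost_informed_choice({
    "senior_clerk": {"rate": 60.0, "hours": 1.0},   # expected cost 60.0
    "junior_clerk": {"rate": 25.0, "hours": 2.0},   # expected cost 50.0
})
```

In a workflow engine this rule would be evaluated at runtime against live cost data, which is exactly the data-requirements challenge the paper discusses.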
Abstract:
Process mining encompasses the research area which is concerned with knowledge discovery from event logs. One common process mining task focuses on conformance checking, comparing discovered or designed process models with actual real-life behavior as captured in event logs in order to assess the “goodness” of the process model. This paper introduces a novel conformance checking method to measure how well a process model performs in terms of precision and generalization with respect to the actual executions of a process as recorded in an event log. Our approach differs from related work in the sense that we apply the concept of so-called weighted artificial negative events towards conformance checking, leading to more robust results, especially when dealing with less complete event logs that only contain a subset of all possible process execution behavior. In addition, our technique offers a novel way to estimate a process model’s ability to generalize. Existing literature has focused mainly on the fitness (recall) and precision (appropriateness) of process models, whereas generalization has been much more difficult to estimate. The described algorithms are implemented in a number of ProM plugins, and a Petri net conformance checking tool was developed to inspect process model conformance in a visual manner.
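The intuition behind log-based precision can be sketched with a toy example in the spirit of artificial negative events: activities that never follow a given prefix anywhere in the log are treated as negative, and a model that nevertheless allows them loses precision. This is a strong simplification of the paper's weighted technique; the functions and the two-trace log are illustrative only.

```python
# Toy log-based precision: compare what the model allows after each observed
# prefix against what the log actually shows following that prefix.
from collections import defaultdict

def continuations(log):
    """Map each observed trace prefix to the set of activities following it."""
    follows = defaultdict(set)
    for trace in log:
        for i, act in enumerate(trace):
            follows[tuple(trace[:i])].add(act)
    return follows

def precision(log, model_allows):
    """model_allows(prefix) -> set of activities the model permits next.
    Average, over observed prefixes, the fraction of model-allowed
    continuations that were actually seen in the log."""
    follows = continuations(log)
    scores = []
    for prefix, observed in follows.items():
        allowed = model_allows(prefix)
        if allowed:
            scores.append(len(observed & allowed) / len(allowed))
    return sum(scores) / len(scores)

log = [["a", "b", "c"], ["a", "c", "b"]]
flower = lambda prefix: {"a", "b", "c"}   # overly general "flower" model
score = precision(log, flower)            # low: the model allows far more
                                          # behaviour than the log exhibits
```

The flower model scores poorly because it permits every activity after every prefix; the weighting and generalization estimation in the paper refine this basic count to stay robust on incomplete logs.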
Abstract:
Business process management systems (BPMS) belong to a class of enterprise information systems that are characterized by the dependence on explicitly modeled process logic. Through the process logic, it is relatively easy to manage explicitly the routing and allocation of work items along a business process through the system. Inspired by the DeLone and McLean framework, we theorize that these process-aware system features are important attributes of system quality, which in turn will elevate key user evaluations such as perceived usefulness and usage satisfaction. We examine this theoretical model using data collected from four different, mostly mature BPM system projects. Our findings validate the importance of input quality as well as allocation and routing attributes as antecedents of system quality, which, in turn, determines both usefulness and satisfaction with the system. We further demonstrate how service quality and workflow dependency are significant precursors to perceived usefulness. Our results suggest the appropriateness of a multi-dimensional conception of system quality for future research, and provide important design-oriented advice for the design and configuration of BPMSs.
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, as for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different experience.