721 results for doping process
Abstract:
The safety of passengers is a major concern for airports. In the event of a crisis, having an effective and efficient evacuation process in place can significantly enhance passenger safety. Hence, it is necessary for airport operators to have an in-depth understanding of the evacuation process of their airport terminal. Although evacuation models have been used to study pedestrian behaviour for decades, little research has considered evacuees’ group dynamics and the complexity of the environment. In this paper, an agent-based model is presented to simulate the passenger evacuation process. Different exits were allocated to passengers based on their location and security level. The simulation results show that evacuation time can be influenced by passenger group dynamics. The model also provides a convenient way to design an airport evacuation strategy and examine its efficiency. The model was created using AnyLogic software and its parameters were initialised using recent research data published in the literature.
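The exit-allocation rule described above (nearest exit permitted by the passenger's security level) can be sketched in a few lines. This is a minimal illustration, not the paper's AnyLogic model; the coordinates, exit names, and walking speed are hypothetical.

```python
import math

# Hypothetical terminal layout: each exit has a position and the minimum
# security level required to use it (0 = landside, 1 = airside).
EXITS = {"main": ((0.0, 0.0), 0), "gate_a": ((50.0, 10.0), 1)}

WALK_SPEED = 1.4  # m/s, a commonly assumed average pedestrian speed


def allocate_exit(position, security_level):
    """Assign the nearest exit whose required level the passenger meets."""
    permitted = {name: pos for name, (pos, lvl) in EXITS.items()
                 if lvl <= security_level}
    return min(permitted, key=lambda n: math.dist(position, permitted[n]))


def evacuation_time(position, security_level):
    """Crude free-walking estimate: straight-line distance over speed."""
    pos, _ = EXITS[allocate_exit(position, security_level)]
    return math.dist(position, pos) / WALK_SPEED


print(allocate_exit((40.0, 8.0), 1))  # airside passenger: nearest gate exit
print(allocate_exit((40.0, 8.0), 0))  # landside passenger: main exit only
```

A full agent-based model would add congestion at exits and group behaviour; this sketch only captures the allocation step.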
Abstract:
Several websites utilise a rule-based recommendation system, which generates choices based on a series of questionnaires, for recommending products to users. This approach carries a high risk of customer attrition, and the bottleneck is the questionnaire set. If the questioning process is too long, complex, or tedious, users are likely to quit the questionnaire before a product is recommended to them. If the questioning process is too short, the users' intentions cannot be gathered. Commonly used feature selection methods do not provide a satisfactory solution. We propose a novel process combining clustering, decision trees, and association rule mining for a group-oriented question reduction process. The question set is reduced according to common properties that are shared by a specific group of users. When applied to a real-world website, the proposed combined method outperforms methods where the reduction of questions is done only by association rule mining or only by observing the distribution within the group.
Abstract:
The measures by which major developments are officially approved for construction are, by common agreement, complex, time-consuming, and of questionable merit in terms of maintaining ecological viability.
Abstract:
This thesis presents novel techniques for addressing the problems of continuous change and inconsistencies in large process model collections. The developed techniques treat process models as a collection of fragments and facilitate version control, standardization and automated process model discovery using fragment-based concepts. Experimental results show that the presented techniques are beneficial in consolidating large process model collections, specifically when there is a high degree of redundancy.
Abstract:
In recent years, the imperative to communicate organisational impacts to a variety of stakeholders has gained increasing importance within all sectors. Despite growing external demands for evaluation and social impact measurement, there has been limited critically informed analysis about the presumed importance of these activities to organisational success and the practical challenges faced by organisations in undertaking such assessment. In this paper, we present the findings from an action research study of five Australian small to medium social enterprises’ practices and use of evaluation and social impact analysis. Our findings have implications for social enterprise operators, policy makers and social investors regarding when, why and at what level these activities contribute to organisational performance and the fulfilment of mission.
Abstract:
This paper critically evaluates the series of inquiries that the Australian Labor government undertook during 2011–2013 into reform of Australian media, communications and copyright laws. One important driver of policy reform was the government’s commitment to building a National Broadband Network (NBN), and the implications this had for existing broadcasting and telecommunications policy, as it would constitute a major driver of convergence of media and communications access devices and content platforms. These inquiries included: the Convergence Review of media and communications legislation; the Australian Law Reform Commission (ALRC) review of the National Classification Scheme; and the Independent Media Inquiry (Finkelstein Review) into Media and Media Regulation. One unusual feature of this review process was the degree to which academics were involved, not simply as providers of expert opinion, but as review chairs seconded from their universities. This paper considers the role played by activist groups in all of these inquiries and their relationship to the various participants, as well as the implications of academics being engaged in such inquiries, not simply as activist-scholars, but as those primarily responsible for delivering policy review outcomes. The paper draws upon the concept of "policy windows" in order to better understand the context in which the inquiries took place, and their relative lack of legislative impact.
Abstract:
Temporary Traffic Control Plans (TCPs), which provide construction phasing to maintain traffic during construction operations, are an integral component of highway construction project design. Using the initial design, designers develop estimated quantities for the required TCP devices, which become the basis for bids submitted by highway contractors. However, actual as-built quantities are often significantly different from the engineer’s original estimate. The total cost of TCP phasing on highway construction projects amounts to 6–10% of the total construction cost. Variations between engineer-estimated quantities and final quantities contribute to reduced cost control, increased chances of cost-related litigation, and distorted bid rankings and selection. Statistical analyses of over 2000 highway construction projects were performed to determine the sources of variation, which were then used as the basis for developing an automated hybrid prediction model that uses multiple regression and heuristic rules to provide accurate TCP quantities and costs. The predictive accuracy of the model was demonstrated through several case studies.
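The hybrid idea of combining a regression fit with heuristic rules can be sketched minimally. This is not the paper's actual model: the single predictor (project duration), the device type, the sample data, and the minimum-quantity floor are all hypothetical.

```python
def fit(xs, ys):
    """Ordinary least-squares fit for one predictor; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx


def predict_tcp_quantity(duration_days, slope, intercept, min_qty=10):
    """Regression estimate post-processed by a heuristic rule:
    the predicted quantity never drops below a practical floor."""
    raw = slope * duration_days + intercept
    return max(round(raw), min_qty)


# Hypothetical historical data: project duration (days) vs. drum quantity.
durations = [30, 60, 90, 120]
drums = [40, 75, 118, 160]
s, b = fit(durations, drums)
print(predict_tcp_quantity(100, s, b))
```

The paper's model uses multiple regression over many project attributes and a richer rule set; the sketch only shows the fit-then-adjust structure.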
Abstract:
The present article gives an overview of the reversible addition fragmentation chain transfer (RAFT) process. RAFT is one of the most versatile living radical polymerization systems and yields polymers of predictable chain length and narrow molecular weight distribution. RAFT relies on the rapid exchange of thiocarbonylthio groups between growing polymeric chains. The key strengths of the RAFT process for polymer design are its high tolerance of monomer functionality and reaction conditions, the wide range of well-controlled polymeric architectures achievable, and its (in principle) non-rate-retarding nature. This article introduces the mechanism of polymerization, the range of polymer molecular weights achievable, the range of monomers whose polymerization is controlled by RAFT, the various polymeric architectures that can be obtained, the types of end-group functionality available to RAFT-made polymers, and the process of RAFT polymerization.
Abstract:
IT resources are indispensable in the management of Public Sector Organizations (PSOs) around the world. We investigate the factors that could leverage IT resources in PSOs in developing economies. While research on ways to leverage IT resources in private sector organizations of developed countries is substantial, our understanding of ways to leverage IT resources in the public sector in developing countries is limited. The current study aspires to address this gap in the literature by seeking to determine the key factors required to create process value from public sector IT investments in developing countries. We draw on resource-centric theories to infer the nature of the factors that could leverage IT resources in the public sector. Employing an interpretive design, we identified three factors necessary for IT process value generation in the public sector. We discuss these factors and state their implications for theory and practice.
Abstract:
The previous chapters gave an insightful introduction to the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets be addressed for its successful and sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook.
Abstract:
Nowadays, process management systems (PMSs) are widely used in many business scenarios, e.g. by government agencies, by insurance companies, and by banks. Despite this widespread usage, the typical application of such systems is predominantly in the context of static scenarios, instead of pervasive and highly dynamic scenarios. Nevertheless, pervasive and highly dynamic scenarios could also benefit from the use of PMSs.
Abstract:
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs exhibiting high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
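The one-model-per-cluster idea can be illustrated with a toy log. This sketch clusters traces by their activity set and discovers a directly-follows graph per cluster; it is a crude stand-in for the paper's technique, whose clustering criterion, discovery algorithm, and fitness control are far more sophisticated, and the log below is hypothetical.

```python
from collections import defaultdict

def discover_per_cluster(log):
    """Cluster traces by activity set, then build one directly-follows
    graph (set of ordered activity pairs) per cluster."""
    clusters = defaultdict(list)
    for trace in log:
        clusters[frozenset(trace)].append(trace)
    models = {}
    for key, traces in clusters.items():
        # Directly-follows edges observed anywhere in this cluster.
        models[key] = {(a, b) for t in traces for a, b in zip(t, t[1:])}
    return models

# Hypothetical event log: each trace is a sequence of activity labels.
log = [
    ["register", "check", "approve"],
    ["register", "check", "approve"],
    ["register", "check", "reject"],
]
models = discover_per_cluster(log)
print(len(models))  # two variants, hence two small models
```

Each resulting model covers one variant, keeping it small; the paper additionally extracts shared subprocesses hierarchically, which this sketch omits.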
Abstract:
Process modeling is a widely used approach for understanding, documenting, and redesigning the operations of organizations. The validation and usage of process models is, however, hampered by the fact that only business analysts fully understand them in detail. This is a particular problem because business analysts are typically not domain experts. In this paper, we investigate to what extent the concept of verbalization can be adapted from object-role modeling to process models. To this end, we define an approach which automatically transforms BPMN process models into natural language texts and combines different techniques from linguistics and graph decomposition in a flexible and accurate manner. The evaluation of the technique is based on a prototypical implementation and involves a test set of 53 BPMN process models, showing that natural language texts can be generated in a reliable fashion.
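For a purely linear model, the verbalization idea reduces to emitting one sentence per task. The sketch below shows only that degenerate case under hypothetical roles and tasks; the paper's approach handles full BPMN structure (gateways, parallelism) via linguistic techniques and graph decomposition, none of which is reproduced here.

```python
def verbalize(process_name, steps):
    """Turn a linear list of (role, task) pairs into a naive textual
    description -- a toy stand-in for BPMN-to-text generation."""
    sentences = [f"The '{process_name}' process starts."]
    for role, task in steps:
        sentences.append(f"Then, the {role} {task}.")
    sentences.append("The process ends.")
    return " ".join(sentences)

# Hypothetical two-task claim-handling process.
steps = [("clerk", "registers the claim"),
         ("manager", "approves the claim")]
print(verbalize("claim handling", steps))
```

Real verbalization must also decompose rigid (unstructured) fragments and vary sentence structure to stay readable, which is where the linguistic machinery of the paper comes in.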
Abstract:
This article studies the problem of transforming a process model with an arbitrary topology into an equivalent well-structured process model. While this problem has received significant attention, there is still no full characterization of the class of unstructured process models that can be transformed into well-structured ones, nor an automated method for structuring any process model that belongs to this class. This article fills this gap in the context of acyclic process models. The article defines a necessary and sufficient condition for an unstructured acyclic process model to have an equivalent well-structured process model under fully concurrent bisimulation, as well as a complete structuring method. The method has been implemented as a tool that takes process models captured in the BPMN and EPC notations as input. The article also reports on an empirical evaluation of the structuring method using a repository of process models from commercial practice.
Abstract:
Determining similarity between business process models has recently gained interest in the business process management community. So far, similarity has been addressed separately at either the semantic or the structural level of process models. Moreover, most contributions that measure similarity of process models assume an ideal case in which process models are enriched with semantics – a description of the meaning of process model elements. In real life, however, this entails a labour-intensive pre-processing phase that is often not feasible. In this paper we propose an automated approach for querying a business process model repository for structurally and semantically relevant models. As with search on the Internet, a user formulates a BPMN-Q query and receives a list of process models ordered by relevance to the query. We provide a business process model search engine implementation for the evaluation of the proposed approach.
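Relevance-ordered retrieval of process models can be illustrated with a deliberately simple similarity measure. The sketch ranks models by Jaccard overlap between activity label sets; this is only an assumed stand-in for BPMN-Q's combined structural and semantic matching, and the repository contents are hypothetical.

```python
def jaccard(a, b):
    """Set overlap in [0, 1]; 1 means identical label sets."""
    return len(a & b) / len(a | b)

def search(query_labels, repository):
    """Return model names ordered by decreasing label overlap with the query."""
    scored = [(jaccard(query_labels, labels), name)
              for name, labels in repository.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical repository: model name -> set of activity labels.
repo = {
    "order2cash":  {"receive order", "ship goods", "send invoice"},
    "procurement": {"create PO", "receive goods", "pay invoice"},
    "hiring":      {"post job", "interview", "hire"},
}
print(search({"receive order", "send invoice"}, repo))
```

A real engine would also match control-flow structure (e.g. whether one queried activity can follow another), which pure label overlap cannot capture.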