900 results for IT Process Value
Abstract:
Purpose – Context-awareness has emerged as an important principle in the design of flexible business processes. The goal of the research is to develop an approach to extend context-aware business process modeling toward location-awareness. The purpose of this paper is to identify and conceptualize location-dependencies in process modeling. Design/methodology/approach – This paper uses a pattern-based approach to identify location-dependency in process models. The authors design specifications for these patterns, present illustrative examples, and evaluate the identified patterns through a literature review of published process cases. Findings – This paper introduces location-awareness as a new perspective to extend context-awareness in BPM research, by introducing relevant location concepts such as location-awareness and location-dependencies. The authors identify five basic location-dependent control-flow patterns that can be captured in process models, and they identify location-dependencies in several existing case studies of business processes. Research limitations/implications – The authors focus exclusively on the control-flow perspective of process models. Further work needs to extend the research to address location-dependencies in process data or resources. Further empirical work is needed to explore determinants and consequences of the modeling of location-dependencies. Originality/value – As existing literature mostly focuses on the broad context of business processes, location in process modeling is still treated as a “second-class citizen” in theory and in practice. This paper discusses the vital role of location-dependencies within business processes. The proposed five basic location-dependent control-flow patterns are novel and useful for explaining location-dependency in business process models. They provide a conceptual basis for further exploration of location-awareness in the management of business processes.
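The abstract does not enumerate the five patterns, but one plausible instance of a location-dependent control-flow decision can be sketched as a routing choice driven by the case's current location. A minimal Python sketch, with invented locations and activities that are not taken from the paper:

```python
# Hypothetical sketch of a location-dependent control-flow pattern: an
# XOR-split whose branch is chosen by the location of the case.
# Locations and activities are illustrative assumptions.
def next_activity(case_location: str) -> str:
    """Route the case based on where it currently is."""
    routes = {
        "warehouse": "Pick goods",
        "customs": "Clear customs",
    }
    return routes.get(case_location, "Escalate to coordinator")

print(next_activity("customs"))  # -> "Clear customs"
```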
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises for example when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
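A minimal sketch of the two clustering algorithms named above, applied to a precomputed pairwise distance matrix. The toy matrix is a placeholder: the article computes distances between process model fragments and adds a standardizability measure, neither of which is reproduced here.

```python
# Clustering approximate clones from a precomputed distance matrix.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric distance matrix over four process fragments.
dist = np.array([[0.0, 0.1, 0.9, 0.8],
                 [0.1, 0.0, 0.85, 0.9],
                 [0.9, 0.85, 0.0, 0.15],
                 [0.8, 0.9, 0.15, 0.0]])

# DBSCAN over precomputed distances: dense groups become clone clusters.
db_labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(dist)

# HAC: cut the average-linkage dendrogram at the same distance threshold.
hac_labels = fcluster(linkage(squareform(dist), method="average"),
                      t=0.3, criterion="distance")
print(db_labels, hac_labels)  # both pair fragments {0,1} and {2,3}
```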
Abstract:
Few would disagree that the upstream oil & gas industry has become more technology-intensive over the years. But how does innovation happen in the industry? Specifically, what ideas and inputs flow from which parts of the sector’s value network, and where do these inputs go? And how do firms and organizations from different countries contribute differently to this process? This paper puts forward the results of a survey designed to shed light on these questions. Carried out in collaboration with the Society of Petroleum Engineers (SPE), the survey was sent to 469 executives and senior managers who played a significant role with regard to R&D and/or technology deployment in their respective business units. A total of 199 responses were received from a broad range of organizations and countries around the world. Several interesting themes and trends emerge from the results, including: (1) service companies tend to file considerably more patents per innovation than other types of organization; (2) over 63% of the deployed innovations reported in the survey originated in service companies; (3) neither universities nor government-led research organizations were considered to be valuable sources of new information and knowledge in the industry’s R&D initiatives, and; (4) despite the increasing degree of globalization in the marketplace, the USA still plays an extremely dominant role in the industry’s overall R&D and technology deployment activities. By providing a detailed and objective snapshot of how innovation happens in the upstream oil & gas sector, this paper provides a valuable foundation for future investigations and discussions aimed at improving how R&D and technology deployment are managed within the industry. The methodology did result in a coverage bias within the survey, however, and the limitations arising from this are explored.
Abstract:
There is a need for a more critical perspective on, and reporting about, the value of taking a model of inclusion developed in western countries and based upon the human rights ethos, and applying it in developing countries. This chapter will report, firstly, on how the Index for Inclusion (hereinafter referred to as the Index) was used in Australia as a tool for review and development; and, secondly, on how the process of using the Index is adjusted for use in the Pacific Islands and other developing nations in collaborative and culturally sensitive ways to support and evaluate progress towards inclusive education. Examples are provided from both contexts to demonstrate the impact of the Index as an effective tool to support a more inclusive response to diversity in schools.
Abstract:
Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept, instead of removing parts of phrases as stop words. Abbreviations appearing in many forms of entry are manually identified, and only one form of each abbreviation is used. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse, i.e., few features are irrelevant, but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space. Classifiers were then built on this reduced feature space. In experiments, a set of tests was conducted to determine which classification method is best for medical text classification. The Non-Negative Matrix Factorization with Support Vector Machine method achieved 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting in short document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as their removal affects the classification performance.
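A sketch of the best-performing configuration reported above: binary term weighting, NNMF dimensionality reduction, then an SVM. The two toy records and injury codes are invented, and the paper's extensive preprocessing (spelling correction, phrase handling, abbreviation normalization, term clustering) is omitted.

```python
# Binary bag-of-words -> NMF -> linear SVM, per the reported configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["fell from ladder fracture left arm",
        "burnt right hand on hot oil while cooking"]
codes = ["fall", "burn"]  # hypothetical target injury codes

clf = make_pipeline(
    CountVectorizer(binary=True),        # binary weighting beat TF/IDF here
    NMF(n_components=2, init="nndsvd"),  # low-rank non-negative factors
    LinearSVC(),
)
clf.fit(docs, codes)
print(clf.predict(["slipped and fell fracture arm"]))  # -> ['fall']
```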
Abstract:
The functions of the volunteer functions inventory were combined with the constructs of the theory of planned behaviour (i.e., attitudes, subjective norms, and perceived behavioural control) to establish whether a stronger, single explanatory model prevailed. Undertaken in the context of episodic, skilled volunteering by individuals who were retired or approaching retirement (N = 186), the research advances prior studies which either examined the predictive capacity of each model independently or compared their explanatory value. Using hierarchical regression analysis, the functions of the volunteer functions inventory (when controlling for demographic variables) explained an additional 7.0% of the variability in individuals’ willingness to volunteer over and above that accounted for by the theory of planned behaviour. Significant predictors in the final model included attitudes, subjective norms and perceived behavioural control from the theory of planned behaviour, and the understanding function from the volunteer functions inventory. It is proposed that the items comprising the understanding function may represent a deeper psychological construct (e.g., self-actualisation) not accounted for by the theory of planned behaviour. The findings highlight the potential benefit of combining these two prominent models in terms of improving understanding of volunteerism and providing a single parsimonious model for raising rates of this important behaviour.
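A sketch of the hierarchical regression described above. The column names and the "volunteering.csv" data file are hypothetical stand-ins for the study's measures; each step adds a block of predictors, and the R² increment of the final step corresponds to the reported 7.0% added by the volunteer functions inventory.

```python
# Hierarchical regression: demographics, then TPB constructs, then VFI.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("volunteering.csv")  # hypothetical data set

steps = [
    "willingness ~ age + gender",                                          # demographics
    "willingness ~ age + gender + attitudes + norms + pbc",                # + TPB
    "willingness ~ age + gender + attitudes + norms + pbc + understanding" # + VFI
]
r2 = [smf.ols(f, data=df).fit().rsquared for f in steps]
print("R^2 added by the VFI block:", r2[2] - r2[1])
```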
Abstract:
Today’s information systems log vast amounts of data. These collections of data (implicitly) describe events (e.g. placing an order or taking a blood test) and, hence, provide information on the actual execution of business processes. The analysis of such data provides an excellent starting point for business process improvement. This is the realm of process mining, an area which has provided a repertoire of many analysis techniques. Despite the impressive capabilities of existing process mining algorithms, dealing with the abundance of data recorded by contemporary systems and devices remains a challenge. Of particular importance is the capability to guide the meaningful interpretation of “oceans of data” by process analysts. To this end, insights from the field of visual analytics can be leveraged. This article proposes an approach where process states are reconstructed from event logs and visualised in succession, leading to an animated history of a process. This approach is customisable in how a process state, partially defined through a collection of activity instances, is visualised: one can select a map and specify a projection of events on this map based on the properties of the events. This paper describes a comprehensive implementation of the proposal. It was realised using the open-source process mining framework ProM. Moreover, this paper also reports on an evaluation of the approach conducted with Suncorp, one of Australia’s largest insurance companies.
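A sketch of the core idea: a process state is the set of activity instances active at a point in time, and visualising states in succession yields an animation. The four-column log format is an assumption; the ProM implementation reads XES event logs and projects events onto a configurable map.

```python
# Reconstruct process states from an event log and replay them in order.
import pandas as pd

log = pd.DataFrame({
    "case":     [1, 1, 2],
    "activity": ["Lodge claim", "Assess claim", "Lodge claim"],
    "start":    pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-01"]),
    "end":      pd.to_datetime(["2023-01-02", "2023-01-04", "2023-01-03"]),
})

def state_at(t: pd.Timestamp) -> pd.DataFrame:
    """Activity instances running at time t -- one frame of the animation."""
    return log[(log["start"] <= t) & (log["end"] > t)]

for t in pd.date_range(log["start"].min(), log["end"].max(), freq="D"):
    print(t.date(), state_at(t)["activity"].tolist())
```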
Abstract:
In 2007 some of us were fortunate enough to be in Dundee for the Royal College of Nursing’s Annual International Nursing Research Conference. A highlight of that conference was an enactment of the process and context debate. The chair asked for volunteers and various members of the audience came forward, giving the impression that they were nurses and that it was a chance selection. The audience accepted these individuals as their representatives and, once they had gathered on stage, we all expected the debate to begin. But the large number of researchers in the audience gave little thought to the selection and recruitment process they had just witnessed. Then the selected representatives stood up and sang a cappella. Suddenly the context was different and we questioned the process. The point was made: process or context, or both?
Abstract:
Purpose – The purpose of this paper is to examine empirically an industry development paradox, using embryonic literature in the area of strategic supply chain management together with innovation management literature. This study seeks to understand how forming strategic supply chain relationships and developing strategic supply chain capability influence the beneficial supply chain outcomes expected from utilizing industry-led innovation, in the form of electronic business solutions using the internet, in the Australian beef industry. Findings should add valuable insights for both academics and practitioners in the fields of supply chain innovation management and strategic supply chain management, and expand on current literature. Design/methodology/approach – This is a quantitative study comparing innovative and non-innovative supply chain operatives in the Australian beef industry, through factor analysis and structural equation modeling using PASW Statistics V18 and AMOS V18 to analyze survey data from 412 respondents from the Australian beef supply chain. Findings – Key findings are that both innovative and non-innovative supply chain operators regard supply chain synchronization as only a minor indicator of strategic supply chain capability, contrary to the literature, and indicate that strategic supply chain capability has only a minor influence in achieving beneficial outcomes from utilizing industry-led innovation. These results suggest a lack of coordination between supply chain operatives in the industry. They also suggest a lack of understanding of the benefits of developing a strategic supply chain management competence, particularly in relation to innovation agendas, and provide valuable insights as to why an industry paradox exists in terms of the level of investment in industry-led innovation vs the level of corresponding benefit achieved. Research limitations/implications – Results are not generalizable due to the single agribusiness industry studied and the single research method employed. However, this provides opportunity for further agribusiness studies in this area, as well as studies using alternate methods, such as qualitative, in-depth analysis of these factors and their relationships, which may confirm results or produce different results. Further, this study empirically extends existing theoretical contributions and insights into the roles of strategic supply chain management and innovation management in improving supply chain and ultimately industry performance, while providing practical insights to supply chain practitioners in this and other similar agribusiness industries. Practical implications – These findings confirm results from 2007 research (Ketchen et al., 2007) which suggests supply chain practice and teachings need to take a strategic direction in the twenty-first century. To date, competence in supply chain management has built up from functional and process orientations rather than from a strategic perspective. This study confirms that there is a need for more generalists who can integrate with various disciplines, particularly those who can understand and implement strategic supply chain management. Social implications – Possible social implications accrue through the development of responsible government policy in terms of industry supply chains. Strategic supply chain management and supply chain innovation management have impacts on the social fabric of nations through the sustainability of their industries, especially agribusiness industries which deal with food safety and security. If supply chains are now the competitive weapon of nations, then funding innovation and managing supply chain competitiveness in global markets requires a strategic approach from everyone, not just the industry participants. Originality/value – This is original empirical research, seeking to add value to the embryonic and important developing literature concerned with adopting a strategic approach to supply chain management. It also seeks to add to existing literature in the area of innovation management, particularly through greater understanding of the implications of nations developing industry-wide, industry-led innovation agendas, and their ramifications for industry supply chains.
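A sketch of the measurement step that typically precedes structural equation modeling: factor analysis over survey items. The item names and "beef_survey.csv" data file are hypothetical; the study itself used PASW Statistics V18 and AMOS V18 rather than Python.

```python
# Exploratory factor analysis over Likert-scale survey items.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("beef_survey.csv")                   # hypothetical 412-row survey data
items = [c for c in df.columns if c.startswith("q")]  # hypothetical item columns

fa = FactorAnalysis(n_components=3, random_state=0).fit(df[items])
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["relationships", "capability", "outcomes"])
print(loadings.round(2))  # inspect which items load on which construct
```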
Abstract:
In-memory databases have become a mainstay of enterprise computing, offering significant performance and scalability boosts for online analytical and (to a lesser extent) transactional processing, as well as improved prospects for integration across different applications through an efficient shared database layer. Significant research and development has been undertaken over several years concerning data management considerations of in-memory databases. However, limited insights are available on the impacts on applications and their supportive middleware platforms, and how they need to evolve to fully function through, and leverage, in-memory database capabilities. This paper provides a first comprehensive exposition of how in-memory databases impact Business Process Management, as a mission-critical and exemplary model-driven integration and orchestration middleware. Through it, we argue that in-memory databases will render some prevalent uses of legacy BPM middleware obsolete, but will also open up exciting possibilities for tighter application integration, better process automation performance and some entirely new BPM capabilities such as process-based application customization. To validate the feasibility of in-memory BPM, we develop a surprisingly simple BPM runtime embedded into SAP HANA that provides BPMN-based process automation capabilities.
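A minimal, purely illustrative sketch of the idea of a table-driven process runtime, where the entire execution state lives in database-style tables and each engine cycle is a set-oriented update. The toy model and instance rows are assumptions; the paper's runtime is embedded into SAP HANA, not written in Python.

```python
# Toy table-driven process runtime: state as rows, execution as updates.
flows = {"Receive order": "Check stock",   # sequence flows of a toy process model
         "Check stock":   "Ship goods"}

# "Instance table": one row per running process instance and its token.
instances = [{"id": 1, "at": "Receive order"},
             {"id": 2, "at": "Check stock"}]

def engine_cycle() -> None:
    """Advance every token along its outgoing sequence flow in one pass."""
    for row in instances:
        row["at"] = flows.get(row["at"], "DONE")

engine_cycle()
print(instances)  # tokens moved to: Check stock / Ship goods
```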
Abstract:
In this study, the biodiesel properties and the effects of blends of oil methyl ester and petroleum diesel on a CI direct injection diesel engine are investigated. Blends were obtained from the marine dinoflagellate Crypthecodinium cohnii and waste cooking oil. The experiment was conducted using a four-cylinder, turbo-charged common rail direct injection diesel engine at four loads (25%, 50%, 75% and 100%). Three blends (10%, 20% and 50%) of microalgae oil methyl ester and a 20% blend of waste cooking oil methyl ester were compared to petroleum diesel. To establish suitability of the fuels for a CI engine, the effects of the three microalgae fuel blends at different engine loads were assessed by measuring engine performance, i.e. indicated mean effective pressure (IMEP), brake mean effective pressure (BMEP), in-cylinder pressure, maximum pressure rise rate, brake-specific fuel consumption (BSFC), brake thermal efficiency (BTE), heat release rate and gaseous emissions (NO, NOx, and unburned hydrocarbons (UHC)). Results were then compared to engine performance characteristics for operation with a 20% waste cooking oil/petroleum diesel blend and petroleum diesel. In addition, physical and chemical properties of the fuels were measured. Use of microalgae methyl ester reduced the instantaneous cylinder pressure and engine output torque, when compared to that of petroleum diesel, by a maximum of 4.5% at the 50% blend at full throttle. The lower calorific value of the microalgae oil methyl ester blends increased the BSFC, which ultimately reduced the BTE by up to 4% at higher loads. Minor reductions of IMEP and BMEP were recorded for both the microalgae and the waste cooking oil methyl ester blends at low loads, with a maximum of 7% reduction at 75% load compared to petroleum diesel. Furthermore, compared to petroleum diesel, gaseous emissions of NO and NOx increased for operation with biodiesel blends. At full load, NO and NOx emissions increased by 22% when 50% microalgae blends were used. Petroleum diesel and a 20% blend of waste cooking oil methyl ester had similar emissions of UHC, but those of the microalgae oil methyl ester/petroleum diesel blends were reduced by at least 50% for all blends and engine conditions. The tested microalgae methyl esters contain some long-chain, polyunsaturated fatty acid methyl esters (FAMEs) (C22:5 and C22:6) not commonly found in terrestrial-crop-derived biodiesels, yet all fuel properties met or were very close to the ASTM D6751-12 and EN 14214 standards. Therefore, Crypthecodinium cohnii-derived microalgae biodiesel/petroleum blends of up to 50% are projected to meet all fuel property standards, and the engine performance and emission results from this study clearly show their suitability for regular use in diesel engines.
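A worked illustration of the BSFC/BTE relationship invoked above: a lower calorific value raises brake-specific fuel consumption, which in turn can lower brake thermal efficiency. The numbers are hypothetical, not the measured values from this study.

```python
# BSFC and BTE from first principles; 1 kWh of work = 3.6 MJ.
def bsfc(fuel_flow_kg_h: float, brake_power_kw: float) -> float:
    """Brake-specific fuel consumption in g/kWh."""
    return fuel_flow_kg_h * 1000.0 / brake_power_kw

def bte(bsfc_g_kwh: float, lhv_mj_kg: float) -> float:
    """Brake thermal efficiency = work out / fuel energy in."""
    return 3600.0 / (bsfc_g_kwh * lhv_mj_kg)

print(bte(bsfc(16.0, 80.0), 42.5))  # diesel-like: 200 g/kWh, ~0.42
print(bte(bsfc(17.6, 80.0), 39.0))  # blend-like: 220 g/kWh, slightly lower BTE
```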
Abstract:
Accurate process model elicitation continues to be a time-consuming task, requiring skill on the part of the interviewer to extract explicit and tacit process information from the interviewee. Many errors occur in this elicitation stage that would be avoided by better activity recall, more consistent specification methods and greater engagement in the elicitation process by interviewees. Metasonic GmbH has developed a process elicitation tool for their process suite. As part of a research engagement with Metasonic, staff from QUT, Australia have developed a 3D virtual world approach to the same problem, viz. eliciting process models from stakeholders in an intuitive manner. This book chapter tells the story of how QUT staff developed a 3D Virtual World tool for process elicitation, took the outcomes of their research project to Metasonic for evaluation, and, finally, how Metasonic responded to the initial proof of concept.
Abstract:
Background: A major challenge for assessing students’ conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students’ incomplete understanding of a topic or concept based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students’ reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher-level thinking, but are often costly in terms of the time and attention needed to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students’ written explanations and reasoning for multiple-choice selections. Design/Method: We incorporated an open response component into an online signals and systems multiple-choice test to capture written explanations of students’ selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas, some of which were not anticipated or included in the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in the identification of concepts and their relationships for large textual data sets. We also identified several challenges in using automated analysis, as well as in manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students’ reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective learning.
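A sketch of the automated lexical-analysis step: extracting candidate concepts and their co-occurrence from open-ended responses, which is the core operation of tools like Leximancer. The two responses are invented examples, not data from the study.

```python
# Concept candidates via term frequency and co-occurrence counts.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

responses = [
    "the signal is periodic so a fourier series representation applies",
    "convolution in the time domain is multiplication in the frequency domain",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(responses)

cooc = (X.T @ X).toarray()              # term-by-term co-occurrence counts
terms = vec.get_feature_names_out()
freq = np.diag(cooc)
print(sorted(zip(freq, terms), reverse=True)[:5])  # top concept candidates
```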
Abstract:
This chapter sets out to identify patterns at play in boardroom discussions around the design and adoption of an accountability system in a nonprofit organisation. To this end, it contributes to the scarce literature showing the backstage of management accounting systems (Berry, 2005), investment policy determination (Kreander, Beattie & McPhail, 2009; Kreander, McPhail & Molyneaux, 2004), and financial planning strategizing (Parker, 2004) or budgeting (Irvine, 2005). The paucity of publications is due to confidentiality issues preventing attendance at those meetings (Irvine, 2003; Irvine & Gaffikin, 2006). However, the implementation of a new control technology often occurs over a long period of time that might exceed the duration of a research project (Quattrone & Hopper, 2001, 2005). Recent trends of having research funded by grants from private institutions or charities have tended to reduce the length of such undertakings to a few months, or rarely more than a couple of years (Parker, 2013).
Abstract:
As Business Process Management (BPM) is evolving and organisations are becoming more process oriented, the need for Expertise in BPM amongst practitioners has increased. Proactively managing Expertise in BPM is essential to unlock the potential of BPM as a management paradigm and competitive advantage. Whilst great attention is being paid by the BPM community to the technological aspects of BPM, relatively little research or work has been done concerning the expertise aspect of BPM. There is a substantial body of knowledge on expertise itself; however, at the time of writing there is no common framework describing the fundamental attributes characterising Expertise in the illustrative context of BPM. There are direct implications of the understanding and characterisation of Expertise in the context of BPM as a key strategic component and success factor of BPM itself, as well as for those involved in BPM. Expertise in the context of BPM needs to be characterised in order to understand it and to be able to proactively manage it. Given the relative infancy of research into Expertise in the context of BPM, an exploration of the relevance and importance of Expertise in the context of BPM was considered essential, to ensure the study itself was of value to the BPM field. The aims of this research are, firstly, to address the two research questions 'why is expertise important and relevant in the context of BPM?' and 'how can Expertise in the context of BPM be characterised?', and, secondly, to develop a comprehensive and validated a priori model characterising Expertise in the illustrative context of BPM. The study is theory-guided. It has been undertaken via an extensive literature review across relevant literature domains, and a revelatory case study utilising several methods: informal discussions, an open-ended survey, and participant observation. An a priori model was then developed, comprising several constructs and sub-constructs, and several overall aspects of Expertise in BPM. This was followed by the conduct of interviews in the validation phase of the revelatory case study. The primary contributions of this study are to the fields of expertise, BPM and research. Contributions to the field of expertise include a comprehensive review of expertise literature in general and a synthesised critique of expertise research, a characterisation of expertise in an illustrative context as a system, and a comprehensive narrative of the dynamics and interrelationships of the core attributes characterising expertise. Contributions to the field of BPM include, firstly, the establishment of the importance of understanding Expertise in the context of BPM, including a comprehensive overview of the role, relevance and importance of Expertise in the context of BPM, through explanation of the effect of Expertise in BPM; and, secondly, a model characterising Expertise in the context of BPM, which can be used by BPM practitioners to clearly articulate and illuminate the state of Expertise in BPM in organisations. Contributions to the field of research include an extended view of Systems Theory, reflecting the importance of the system context in systems thinking, and a narrative on ontological innovation through the positioning of ontology as a meta-model of Expertise in the context of BPM.