952 results for Development Methodologies
Abstract:
Service-oriented architecture (SOA) is an architectural style for building software systems based on services. Especially in scenarios where services implement business processes, complex conversations occur between the services. Service choreographies are a means to capture all interaction obligations and constraints from a global perspective. This article introduces choreographies as an important artifact for SOA, compares them to service orchestrations, and surveys existing languages for modeling them.
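To make the orchestration/choreography contrast the article draws concrete, here is a minimal Python sketch; the service names and events are invented, and a real system would use messaging middleware rather than in-process callbacks.

```python
# A minimal, hypothetical sketch (service names and events are invented):
# in orchestration one coordinator owns the control flow, whereas in a
# choreography each service reacts to messages and the global interaction
# contract exists only in the choreography model, not in any one participant.

def billing(order):  return {**order, "invoiced": True}
def shipping(order): return {**order, "shipped": True}
def notify(order):   print("notified:", order)

# Orchestration: a central coordinator invokes each service in turn.
def orchestrate(order):
    notify(shipping(billing(order)))

# Choreography: no central controller; services subscribe to events.
handlers = {}
def on(event, handler):
    handlers.setdefault(event, []).append(handler)
def publish(event, payload):
    for handler in handlers.get(event, []):
        handler(payload)

on("OrderPlaced",   lambda o: publish("InvoiceIssued", billing(o)))
on("InvoiceIssued", lambda o: publish("Shipped", shipping(o)))
on("Shipped",       notify)

orchestrate({"id": 1})             # same outcome, centralized control flow
publish("OrderPlaced", {"id": 2})  # same outcome, peer-to-peer interaction
```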
Abstract:
Accurate reliability prediction for large-scale, long-lived engineering assets is a crucial foundation for effective asset risk management and optimal maintenance decision making. However, a lack of failure data for assets that fail infrequently, and changing operational conditions over long periods of time, make accurate reliability prediction for such assets very challenging. To address this issue, we present a Bayesian-Markov based approach to reliability prediction using prior knowledge and condition monitoring data. In this approach, Bayesian theory is used to incorporate prior information about failure probabilities and current information about asset health to make statistical inferences, while Markov chains are used to update and predict the health of assets based on condition monitoring data. The prior information can be supplied by domain experts, extracted from previous comparable cases, or derived from basic engineering principles. Our approach differs from existing hybrid Bayesian models, which are normally used to update the parameter estimation of a given distribution, such as the Weibull-Bayesian distribution, or the transition probabilities of a Markov chain. Instead, our new approach can be used to update predictions of failure probabilities when failure data are sparse or nonexistent, as is often the case for large-scale, long-lived engineering assets.
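The following is a minimal sketch of the general prediction-and-update cycle described here, not the authors' exact model; the health states, transition matrix, and observation likelihoods are all illustrative assumptions.

```python
# A minimal sketch of the prediction-and-update cycle, not the authors' exact
# model: asset health evolves as a Markov chain, and Bayes' rule folds in
# condition monitoring data. All numbers are illustrative assumptions.
import numpy as np

states = ["good", "degraded", "failed"]
prior = np.array([0.90, 0.09, 0.01])   # e.g. elicited from domain experts

# Markov transition matrix: row i gives P(next state | current state i).
T = np.array([[0.95, 0.04, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

# Assumed likelihood of observing a high-vibration reading in each state.
likelihood = np.array([0.05, 0.60, 0.90])

belief = prior
for high_vibration in [True, True]:    # two monitoring cycles
    belief = belief @ T                # Markov prediction step
    if high_vibration:                 # Bayesian update on the observation
        belief = belief * likelihood
        belief /= belief.sum()

print(dict(zip(states, belief.round(3))))  # updated health probabilities
```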
Abstract:
Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing coordinated environments is called stigmergy, a term originally used to describe coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when considering that user freedom of choice is restricted by the provided Web site functionality. This will aid us in building better collaborative Web sites, improving the collaborative processes they support.
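For readers unfamiliar with stigmergy, a speculative Python sketch of one such mechanism is given below; it is not taken from the paper, and the evaporation rate and action stream are invented.

```python
# A speculative sketch of a stigmergic mechanism, not taken from the paper:
# user actions deposit a "pheromone" trace on content, traces evaporate over
# time, and a popularity ordering emerges without central coordination.
EVAPORATION = 0.9          # fraction of trace retained per time step (assumed)
trace = {}                 # content id -> accumulated trace

def deposit(content_id, amount=1.0):
    """A view, tag, or share leaves a trace in the shared environment."""
    trace[content_id] = trace.get(content_id, 0.0) + amount

def tick():
    """Old traces fade, so the emergent ranking tracks current behaviour."""
    for content_id in trace:
        trace[content_id] *= EVAPORATION

for action in ["a", "a", "b", "a", "c", "b"]:   # a toy stream of user actions
    deposit(action)
tick()
print(sorted(trace.items(), key=lambda kv: -kv[1]))  # emergent "popular" order
```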
Abstract:
Software development and Web site development techniques have evolved significantly over the past 20 years. The relatively young Web application development area has borrowed heavily from traditional software development methodologies, primarily due to the similarities in the areas of data persistence and User Interface (UI) design. Recent developments in this area propose a new Web Modeling Language (WebML) to address the nuances specific to Web development. WebML is one of a number of implementations designed to enable modeling of Web site interaction flows while being extensible to accommodate new features in Web site development into the future. Our research aims to extend WebML with a focus on stigmergy, a biological term originally used to describe coordination between insects. We see design features in existing Web sites that mimic stigmergic mechanisms as part of the UI. We believe that we can synthesize and embed stigmergy in Web 2.0 sites. This paper focuses on the sub-topics of site UI design and the stigmergic mechanism designs required to achieve this.
Abstract:
In this column, I have decided to give you an insight into an area of BPM research that is both outward- and forward-looking; in fact, we want to discuss one of the streams of research and development that have yet to find their way into practice. To that end, I have teamed up with colleagues and students here at Queensland University of Technology with whom I have shared some thinking and research around BPM in Virtual Environments (hence, BPMVE).
Abstract:
Product Lifecycle Management (PLM) systems are widely used in the manufacturing industry. A core feature of such systems is to provide support for versioning of product data. As workflow functionality is increasingly used in PLM systems, the possibility emerges that the versioning transitions for product objects as encapsulated in process models do not comply with the valid version control policies mandated in the objects' actual lifecycles. In this paper we propose a solution to tackle the (non-)compliance issues between processes and object version control policies. We formally define the notion of compliance between these two artifacts in product lifecycle management and then develop a compliance checking method which employs a well-established workflow analysis technique. This forms the basis of a tool which offers automated support for the proposed approach. By applying the approach to a collection of real-life specifications in a major PLM system, we demonstrate the practical applicability of our solution to the field.
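The intuition behind this compliance notion can be sketched as follows; the paper defines it formally and checks it with a workflow analysis technique, whereas this toy check, with invented lifecycle states, only compares extracted transitions against a policy.

```python
# A toy illustration of the compliance notion only; lifecycle states and
# transitions below are hypothetical.

# Version control transitions allowed by the object's lifecycle policy.
policy = {("draft", "released"),
          ("released", "obsolete"),
          ("draft", "withdrawn")}

# Version transitions encapsulated in the tasks of a process model.
process_transitions = [("draft", "released"), ("released", "draft")]

violations = [t for t in process_transitions if t not in policy]
if violations:
    print("non-compliant transitions:", violations)  # e.g. released -> draft
else:
    print("process complies with the version control policy")
```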
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
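The first paper's combination of content analysis and user profiling might look roughly like the sketch below; the keyword lists, trusted accounts, weights, and responder capacity are all invented for illustration.

```python
# An illustrative sketch combining the two suggested filters, content analysis
# and user profiling; keyword lists, trusted accounts, weights, and capacity
# are all invented for the example.
CRISIS_KEYWORDS = {"flood", "evacuate", "trapped"}
URGENCY_TERMS   = {"help", "urgent", "now"}
AUTHORITATIVE   = {"qld_ses", "bom_au"}   # hypothetical authoritative accounts
CAPACITY        = 2                       # tweets responders can handle now

def score(tweet):
    words = set(tweet["text"].lower().split())
    relevance = len(words & CRISIS_KEYWORDS)   # relationship to the topic
    urgency   = len(words & URGENCY_TERMS)     # urgency of the information
    authority = 2 if tweet["user"] in AUTHORITATIVE else 0  # user profiling
    return relevance + urgency + authority

stream = [
    {"user": "alice",   "text": "help we are trapped by the flood"},
    {"user": "bob",     "text": "nice weather today"},
    {"user": "qld_ses", "text": "evacuate low-lying areas now"},
]
for tweet in sorted(stream, key=score, reverse=True)[:CAPACITY]:
    print(score(tweet), tweet["text"])    # filtered to responder capacity
```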
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises, for example, when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, that is, the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
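A rough sketch of the DBSCAN-based clustering step is shown below, assuming pairwise fragment distances (e.g. normalized graph-edit distances) have already been computed; the distance matrix is invented, and the standardizability measure is not reproduced.

```python
# A rough sketch of the DBSCAN variant over precomputed fragment distances;
# the matrix is invented, and the standardizability measure is omitted.
import numpy as np
from sklearn.cluster import DBSCAN

fragments = ["F1", "F2", "F3", "F4"]
# Pairwise distances between process model fragments (0.0 = exact clones).
D = np.array([[0.00, 0.10, 0.80, 0.85],
              [0.10, 0.00, 0.75, 0.90],
              [0.80, 0.75, 0.00, 0.05],
              [0.85, 0.90, 0.05, 0.00]])

labels = DBSCAN(eps=0.2, min_samples=2, metric="precomputed").fit_predict(D)
for fragment, label in zip(fragments, labels):
    print(fragment, "-> approximate clone cluster", label)  # -1 means noise
```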
Abstract:
Organizational and technological systems analysis and design practices such as process modeling have received much attention in recent years. However, while knowledge about related artifacts such as models, tools, or grammars has substantially matured, little is known about the actual tasks and interaction activities that are conducted as part of analysis and design acts. In particular, the key role of the facilitator has not been researched extensively to date. In this paper, we propose a new conceptual framework that can be used to examine facilitation behaviors in process modeling projects. The framework distinguishes four behavioral styles in facilitation (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that a facilitator can adopt. To distinguish between the four styles, we provide a set of ten behavioral anchors that underpin facilitation behaviors. We also report on a preliminary empirical exploration of our framework through interviews with experienced analysts in six modeling cases. Our research provides a conceptual foundation for an emerging theory describing and explaining the different behaviors associated with process modeling facilitation, offers first preliminary empirical results about facilitation in modeling projects, and provides a fertile basis for examining facilitation in other conceptual modeling activities.
Abstract:
Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust to changes in workload, season, guidelines, or regulations, for example. Early detection of business process changes based on their event logs, also known as business process drift detection, enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method strikes a trade-off between classification accuracy and drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales well enough to be applicable for online drift detection.
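A bare-bones sketch of the core statistical test is given below; the method additionally sizes the windows adaptively, which is omitted here, and the log is fabricated. Drift is flagged when a chi-square test over run frequencies in two consecutive windows yields a small p-value.

```python
# A bare-bones sketch of the core test; adaptive window sizing is omitted,
# and the event log below is fabricated for illustration.
from collections import Counter
from scipy.stats import chi2_contingency

def detect_drift(window1, window2, alpha=0.05):
    """Compare the run distributions of two consecutive time windows."""
    runs = sorted(set(window1) | set(window2))
    table = [[Counter(window1)[r] for r in runs],
             [Counter(window2)[r] for r in runs]]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha, p_value

before = ["ABCD"] * 40 + ["ABDC"] * 10   # runs observed before the change
after  = ["ABCD"] * 10 + ["ABDC"] * 40   # the distribution shifts afterwards
drift, p = detect_drift(before, after)
print("drift detected:", drift, "p-value:", round(p, 6))
```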