718 results for Process repository
Abstract:
Reducing complexity in Information Systems is a main concern in both research and industry. One strategy for reducing complexity is separation of concerns. This strategy advocates separating various concerns, like security and privacy, from the main concern. It results in less complex, more easily maintainable, and more reusable Information Systems. Separation of concerns is addressed through the Aspect Oriented paradigm. This paradigm has been well researched and implemented in programming, where languages such as AspectJ have been developed. However, the research on aspect orientation for Business Process Management is still in its early stages. While some efforts have been made to propose Aspect Oriented Business Process Modelling, it has not yet been investigated how to enact such process models in a Workflow Management System. In this paper, we define a set of requirements that specifies the execution of aspect oriented business process models. We create a Coloured Petri Net specification for the semantics of a so-called Aspect Service that fulfils these requirements. Such a service extends the capability of a Workflow Management System with support for the execution of aspect oriented business process models. The design specification of the Aspect Service is also inspected through state space analysis.
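The separation-of-concerns idea behind aspect orientation can be illustrated in miniature with a Python decorator acting as "before/after" advice woven around a core task. This is only an illustrative sketch of the paradigm, not the paper's Aspect Service; all names (audit_advice, book_flight) are invented.

```python
# Minimal sketch of aspect weaving: a cross-cutting concern (auditing)
# is kept separate from the core process task and woven in at call time.

def audit_advice(task):
    """'Before' and 'after' advice wrapped around the core task."""
    def woven(*args, **kwargs):
        print(f"[audit] entering {task.__name__}")   # cross-cutting concern
        result = task(*args, **kwargs)               # core concern
        print(f"[audit] leaving {task.__name__}")
        return result
    return woven

@audit_advice
def book_flight(passenger):
    # core business task, free of any auditing logic
    return f"booked flight for {passenger}"

print(book_flight("Alice"))
```

The core task stays oblivious to the auditing concern, which is the maintainability and reuse benefit the abstract describes.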
Abstract:
Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository (QUT ePrints) uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the ‘Request a copy’ button, a feature of the EPrints software. At QUT, the repository (http://eprints.qut.edu.au) is managed by the Library. The repository is embedded into a number of other systems at QUT, including the staff profile system and the University’s research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required, and getting the information from the ‘back end’ of the repository was very time-consuming for Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics ‘dashboard’. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, support available, skills required) and then gave each a weighting.
After considering all the known options, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements, meeting 21 of the 31 high-priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints’ new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available includes cumulative total deposits, time-series deposits, deposits by item type, % full-texts, % open access, cumulative downloads, time-series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data is displayed in chart, map and table formats. The new statistics dashboards have been a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they find the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice Chancellor-Research) to compare the full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased ‘encouragement’ from Heads of School and Deans in relation to the provision of full-text versions.
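The weighted-criteria evaluation described above can be sketched as a simple scoring exercise: each candidate package is scored against weighted business criteria and the candidates are ranked by total. The weights, criteria and scores below are invented for illustration and do not reproduce the actual 69-criterion evaluation.

```python
# Illustrative weighted-criteria scoring, not the real QUT evaluation data.
weights = {"extensible": 3, "support_available": 2, "skills_required": 1}

# 1 if the package meets the criterion, else 0 (invented values)
candidates = {
    "IRStats": {"extensible": 1, "support_available": 1, "skills_required": 1},
    "AWStats": {"extensible": 0, "support_available": 1, "skills_required": 1},
}

def weighted_score(scores):
    """Sum of weights for the criteria a package meets."""
    return sum(weights[c] * met for c, met in scores.items())

ranked = sorted(candidates, key=lambda p: weighted_score(candidates[p]),
                reverse=True)
print(ranked[0])  # best-fit package under these invented weights
```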
Abstract:
As one of the first institutional repositories in Australia and the first in the world to have an institution-wide deposit mandate, QUT ePrints has great ‘brand recognition’ within the University (Queensland University of Technology) and beyond. The repository is managed by the library but, over the years, the Library’s repository team has worked closely with other departments (especially the Office of Research and IT Services) to ensure that QUT ePrints was embedded into the business processes and systems our academics use regularly. For example, the repository is the source of the publication information which displays on each academic’s Staff Profile page. The repository pulls in citation data from Scopus and Web of Science and displays the data in the publications records. Researchers can monitor their citations at a glance via the repository ‘View’ which displays all their publications. A trend in recent years has been to populate institutional repositories with publication details imported from the University’s research information system (RIS). The main advantage of the RIS to Repository workflow is that it requires little input from the academics as the publication details are often imported into the RIS from publisher databases. Sadly, this is also its main disadvantage. Generally, only the metadata is imported from the RIS and the lack of engagement by the academics results in very low proportions of records with open access full-texts. Consequently, while we could see the value of integrating the two systems, we were determined to make the repository the entry point for publication data. In 2011, the University funded a project to convert a number of paper-based processes into web-based workflows. This included a workflow to replace the paper forms academics used to complete to report new publications (which were later used by the data entry staff to input the details into the RIS). 
Publication details and full-text files are uploaded to the repository (by the academics or their nominees). Each night, the repository (QUT ePrints) pushes the metadata for new publications into a holding table. The data is checked by Office of Research staff the next day and then ‘imported’ into the RIS. Publication details (including the repository URLs) are pushed from the RIS to the Staff Profiles system. Previously, academics were required to supply the Office of Research with photocopies of their publications (for verification/auditing purposes). The repository is now the source of verification information. Library staff verify the accuracy of the publication details and, where applicable, the peer-review status of the work. The verification metadata is included in the information passed to the Office of Research. The RIS at QUT comprises two separate systems built on an Oracle database: a proprietary product (ResearchMaster) plus a locally produced system known as RAD (Research Activity Database). The repository platform is EPrints, which is built on a MySQL database. This partly explains why the data is passed from one system to the other via a holding table. The new workflow went live in early April 2012. Tests of the technical integration have all been successful. At the end of the first 12 months, the impact of the new workflow on the proportion of full-texts deposited will be evaluated.
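The nightly "repository to holding table" step can be sketched as a small staging query: new records are copied into a holding table for review before import into the RIS. The table and column names below are assumptions for illustration, and SQLite stands in for the real MySQL (EPrints) and Oracle (RIS) databases.

```python
# Sketch of staging new publication records into a holding table.
import sqlite3  # stand-in for the production databases

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eprints (id INTEGER, title TEXT, exported INTEGER)")
db.execute("CREATE TABLE holding (eprint_id INTEGER, title TEXT, checked INTEGER)")
db.execute("INSERT INTO eprints VALUES (1, 'Paper A', 0), (2, 'Paper B', 1)")

# push only records not yet exported, then mark them as exported
db.execute("""INSERT INTO holding (eprint_id, title, checked)
              SELECT id, title, 0 FROM eprints WHERE exported = 0""")
db.execute("UPDATE eprints SET exported = 1 WHERE exported = 0")

# records now awaiting Office of Research review
pending = db.execute(
    "SELECT eprint_id, title FROM holding WHERE checked = 0").fetchall()
print(pending)
```

Staging into a neutral table like this is a common way to bridge systems on different database platforms, which matches the Oracle/MySQL split the abstract describes.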
Abstract:
- Preface by Richard T. Watson
- Discusses the emerging challenges of designing “green” business processes
- Presents tools and methods that organizations can use in order to design and implement environmentally sustainable processes
- Provides insights from cases where organizations successfully engaged in more sustainable business practices
“Green Business Process Management – Towards the Sustainable Enterprise” consolidates the global state-of-the-art knowledge about how business processes can be managed and improved in light of sustainability objectives. Business organizations, a dominant part of our society, have always been a major contributor to the degradation of our natural environment, through the resource consumption, greenhouse emissions, and waste production associated with their business processes. In order to lessen their impact on the natural environment, organizations must design and implement environmentally sustainable business processes. Finding solutions to this organizational design problem is the key challenge of Green Business Process Management. This book discusses the emerging challenges of designing “green” business processes, presents tools and methods that organizations can use in order to design and implement environmentally sustainable processes, and provides insights from cases where organizations successfully engaged in more sustainable business practices. The book is of relevance to both practitioners and academics who are interested in understanding, designing, and implementing “green” business processes. It also constitutes a valuable resource for students and lecturers in the fields of information systems, management, and sustainable development.
Abstract:
In managing their operations, organizations have traditionally focused on economic imperatives in terms of time, cost, efficiency, and quality. In doing so, they have been a major contributor to environmental degradation caused by resource consumption, greenhouse emissions, and wastage. As a consequence, organizations are increasingly encouraged to improve their operations also from an ecological perspective, and thus to consider environmental sustainability as an additional management imperative. In order to lessen their impact on the natural environment, organizations must design and implement environmentally sustainable processes, which we call the challenge of Green Business Process Management (Green BPM). This chapter elaborates on the challenge and perspective of Green BPM, and explores the contributions that business process management can provide to creating environmentally sustainable organizations. Our key premise is that business as well as information technology managers need to engage in a process-focused discussion to enable a common, comprehensive understanding of organizational processes, and the process-centered opportunities for making these processes, and ultimately the organization as a process-centric entity, “green.” Through our review of the key BPM capability areas and how they can be framed in terms of environmental sustainability considerations, we provide an overview and introduction to the subsequent chapters in this book.
Abstract:
Non-profit organisations by their very nature are staffed by a variety of different people with a range of backgrounds, experiences and reasons for participation. These differences can lead to “distancing” of certain groups and, with little time or money for boundary spanning, the organisation can find itself in a fractured state that hampers not just its goal realisation, but its goal determination. Strategic planning is often seen as an expensive, time-consuming process that many smaller non-profit organisations can ill afford to indulge in. In addition, the ruling elite, whether historical or professional, may view the process as unnecessary or threatening. However, strategic planning can offer processes and potential outcomes that non-profit organisations cannot afford to ignore. This paper provides an analysis through one case study involving a non-profit, health-related organisation that moved through a process of strategic planning that ultimately encouraged development and group cohesion through goal identification and determination as well as strategy formulation. The results indicate the importance of valuing the strategic planning process itself rather than the form it takes. Challenging the rulership of the historical or professional elite can be difficult in a non-profit organisation, but diversity of involvement rather than uniformity proved to be a successful strategy. Organisational cohesion through consensus building was the ultimate outcome.
Abstract:
The complex design process of an airport terminal needs to support a wide range of changes in operational facilities for both usual and unusual/emergency events. A process model describes how activities within a process are connected and also states the logical information flow between the various activities. The traditional design process overlooks the necessity of information flow from the process model to the actual building design, which needs to be considered as an integral part of building design. The current research introduces a generic method to obtain design-related information from the process model and incorporate it into the design process. Appropriate integration of the process model prior to the design process uncovers relationships that exist between spaces and their relevant functions, which could be missed in the traditional design approach. The current paper examines the available Business Process Model (BPM) and generates a modified Business Process Model (mBPM) of the check-in facilities of Brisbane International Airport. The information adopted from the mBPM is then transformed into a possible physical layout using graph theory.
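The process-model-to-layout idea can be sketched as building an adjacency graph: activities that hand passengers or information to one another imply that their spaces should be adjacent in the physical layout. The check-in flow below is a simplified invented example, not the actual Brisbane International Airport mBPM.

```python
# Sketch: derive a space-adjacency graph from process-model flows.
# Flows are (from-activity, to-activity) pairs; invented for illustration.
flows = [
    ("entrance", "queue"),
    ("queue", "check-in desk"),
    ("check-in desk", "bag drop"),
    ("bag drop", "security"),
]

# undirected adjacency: spaces connected by a flow should be adjacent
adjacency = {}
for a, b in flows:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# degree suggests which spaces are layout "hubs" needing central placement
degree = {space: len(neighbours) for space, neighbours in adjacency.items()}
print(degree)
```

Graph-theoretic properties such as degree and connectivity are a common starting point for turning such an adjacency graph into candidate floor layouts.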
Abstract:
A biomass pretreatment process was developed using acidified ionic liquid (IL) solutions containing 10–30% water. Pretreatment of sugarcane bagasse at 130 °C for 30 min with an aqueous 1-butyl-3-methylimidazolium chloride (BMIMCl) solution containing 1.2% HCl resulted in a glucan digestibility of 94–100% after 72 h of enzymatic hydrolysis. HCl was found to be a more effective catalyst than H₂SO₄ or FeCl₃. Increasing the acid concentration (from 0.4% to 1.2%) and the reaction temperature (from 90 to 130 °C) increased glucan digestibility. The glucan digestibility of solid residue obtained with acidified BMIMCl solution that had been reused three times was >97%. The addition of water to ILs for pretreatment could significantly reduce IL solvent costs and allow for increased biomass loadings, making pretreatment by ILs a more economic proposition.
Abstract:
The conversion of biomass waste, in the form of date seed waste, into activated carbon and biofuel by a fixed-bed pyrolysis reactor is the focus of this study, with the aim of obtaining gaseous, liquid, and solid products. The date seed, in particle form, is pyrolysed in an externally heated fixed-bed reactor with nitrogen as the carrier gas. The reactor is heated from 400 °C to 600 °C. A maximum liquid yield of 50 wt.% and a char yield of 30 wt.% are obtained at a reactor bed temperature of 500 °C with a running time of 120 minutes. The oil is found to possess a favourable flash point and reasonable density and viscosity. The higher calorific value is found to be 28.636 MJ/kg, which is significantly higher than that of other biomass-derived oils. Decolourisation of 85–97% is recorded for the textile effluent and 75–90% for the tannery effluent, in all cases decreasing with temperature increase. Good adsorption capacity of the prepared activated carbon was found in the case of diluted textile and tannery effluents.
Abstract:
In this paper, the goal of identifying disease subgroups based on differences in observed symptom profile is considered. Commonly referred to as phenotype identification, solutions to this task often involve the application of unsupervised clustering techniques. In this paper, we investigate the application of a Dirichlet Process mixture (DPM) model for this task. This model is defined by the placement of the Dirichlet Process (DP) on the unknown components of a mixture model, allowing for the expression of uncertainty about the partitioning of observed data into homogeneous subgroups. To exemplify this approach, an application to phenotype identification in Parkinson’s disease (PD) is considered, with symptom profiles collected using the Unified Parkinson’s Disease Rating Scale (UPDRS).
Keywords: Clustering, Dirichlet Process mixture, Parkinson’s disease, UPDRS.
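The DP's partitioning behaviour can be illustrated with the Chinese Restaurant Process, one standard constructive view of the Dirichlet Process: each new observation joins an existing subgroup with probability proportional to its size, or opens a new subgroup with probability proportional to a concentration parameter alpha. This is a generic sketch of the prior over partitions, not the paper's fitted DPM model.

```python
# Chinese Restaurant Process sketch of DP-induced partitioning.
import random

def crp_partition(n_obs, alpha, seed=0):
    """Assign n_obs observations to subgroups via the CRP with concentration alpha."""
    rng = random.Random(seed)
    sizes = []        # sizes[k] = number of observations in subgroup k
    assignment = []   # subgroup index for each observation
    for _ in range(n_obs):
        # weights: existing subgroup sizes, plus alpha for a brand-new subgroup
        weights = sizes + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(0)  # open a new subgroup
        sizes[k] += 1
        assignment.append(k)
    return assignment, sizes

assignment, sizes = crp_partition(100, alpha=1.0)
print(len(sizes))  # number of subgroups the process induced
```

Because the number of subgroups is not fixed in advance, the DPM expresses exactly the uncertainty about the number of phenotypes that the abstract highlights.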
Abstract:
Wayfinding is the process of finding your way to a destination in a familiar or unfamiliar setting using any cues given by the environment. Due to its ubiquity in everyday life, wayfinding appears on the surface to be a simply characterised and understood process; however, this very ubiquity, and the resulting need to refine and optimise wayfinding, has led to a great number of studies that have revealed that it is in fact a deeply complex exercise. In this paper we examine the motivations for investigating wayfinding, with particular attention paid to the unique challenges faced in transportation hubs, and discuss the associated principles and factors involved as they have been perceived from different research perspectives. We also review the approaches used to date in the modelling of wayfinding in various contexts. We attempt to draw together the different perspectives applied to wayfinding and postulate the importance of wayfinding and the need to understand this seemingly simple, but concurrently complex, process.
Abstract:
A model has been developed to track the flow of cane constituents through the milling process. While previous models have tracked the flow of fibre, brix and water through the process, this model tracks the soluble and insoluble solid cane components using modelling theory and experimental data, assisting in further understanding the flow of constituents into mixed juice and final bagasse. The work provided an opportunity to understand the factors which affect the distribution of the cane constituents in juice and bagasse. Application of the model should lead to improvements in the overall performance of the milling train.
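The constituent-tracking idea can be sketched as a simple mass balance: each mill splits every incoming constituent between an expressed-juice stream and a bagasse stream, and mass is conserved across the split. The split fractions and feed composition below are invented for illustration, not the paper's model parameters.

```python
# Single-mill mass-balance sketch; all numbers are invented.
# Fraction of each constituent leaving in the juice stream at one mill.
juice_split = {"brix": 0.7, "water": 0.6, "fibre": 0.02}

def mill(feed):
    """Split a feed stream (kg/h per constituent) into (juice, bagasse)."""
    juice = {c: m * juice_split[c] for c, m in feed.items()}
    bagasse = {c: m - juice[c] for c, m in feed.items()}
    return juice, bagasse

cane = {"brix": 15.0, "water": 70.0, "fibre": 15.0}  # invented feed, kg/h
juice, bagasse = mill(cane)

# mass is conserved: juice + bagasse must equal the feed
print(round(sum(juice.values()) + sum(bagasse.values()), 6))  # → 100.0
```

Chaining several such mill stages, each with its own split fractions, gives the whole-train constituent tracking the abstract describes.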
Abstract:
This month, Jan Recker turns his attention to the technological side of BPM research and education. He engaged in a collaboration with two colleagues at Queensland University, Dr Marcello La Rosa and Eike Bernhard, on an initiative on the development of an advanced BPM technology - an Advanced Process Model Repository called Apromore. In this Column, they use the example of Apromore to showcase how BPM technologies are conceived, designed, developed and applied.
Abstract:
The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in computer-human interaction, virtual reality, games and interactive entertainment has much potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high-quality international forum to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The "error budget" is set by the performance requirements of each application. The "expenditure" of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a "bottom up" component testing approach combined with "top down" system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
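The error-budget bookkeeping described above can be sketched as simple addition: each clock in the synchronisation chain "spends" part of the budget, and the designer checks the total against the application's requirement (1 µs for sampled values). The per-component figures below are invented placeholders, not measured values from the paper.

```python
# Error-budget sketch for a PTP synchronisation chain; figures are invented.
budget_ns = 1000  # 1 µs sampling-accuracy requirement, in nanoseconds

# worst-case time error contributed by each component in the chain
spend_ns = {
    "grandmaster": 200,
    "transparent clock 1": 50,
    "transparent clock 2": 50,
    "slave clock": 300,
}

total = sum(spend_ns.values())
print(total, "within budget" if total <= budget_ns else "over budget")
```

This is the "bottom up" side of the approach: component figures come from individual device tests, and the summed budget is then confirmed by "top down" system verification.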