14 results for Workflow
in Aston University Research Archive
Abstract:
Suggests that simulation of the workflow component of a computer-supported co-operative work (CSCW) system has the potential to reduce the costs of system implementation while at the same time improving the quality of the delivered system. Demonstrates the value of being able to assess the frequency and volume of workflow transactions through a case study of CSCW software developed for estate agency co-workers, in which a discrete-event simulation model was implemented on a spreadsheet platform.
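As a rough illustration of the discrete-event approach described above, the following sketch (plain Python rather than a spreadsheet, with invented arrival and service rates) estimates the frequency and volume of workflow transactions handled by a fixed pool of co-workers:

```python
import heapq
import random

def simulate(hours=8.0, workers=3, mean_interarrival=0.1, mean_service=0.25, seed=1):
    """Discrete-event simulation of workflow transactions in a CSCW system.
    Times are in hours; the rates are illustrative assumptions, not values from the study."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1 / mean_interarrival), "arrival")]  # (time, kind) event heap
    free_workers, queued, completed = workers, 0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > hours:
            break
        if kind == "arrival":
            queued += 1
            heapq.heappush(events, (t + rng.expovariate(1 / mean_interarrival), "arrival"))
        else:  # a co-worker finished processing a transaction
            free_workers += 1
            completed += 1
        while free_workers and queued:  # start service for waiting transactions
            free_workers -= 1
            queued -= 1
            heapq.heappush(events, (t + rng.expovariate(1 / mean_service), "departure"))
    return completed, queued

done, backlog = simulate()
print(f"transactions completed in the period: {done}, still queued: {backlog}")
```

Running the same model with different staffing levels or arrival rates gives the kind of transaction-volume estimates the study uses to inform implementation decisions.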
Abstract:
This paper discusses demand and supply chain management and examines how artificial intelligence techniques and RFID technology can enhance the responsiveness of the logistics workflow. The proposed system is expected to have a significant impact on the performance of logistics networks by virtue of its capability to adapt to unexpected supply and demand changes in a volatile marketplace, with responsiveness provided by the advanced technology of Radio Frequency Identification (RFID). Recent studies have found that RFID and artificial intelligence techniques drive the development of total solutions in the logistics industry. Apart from tracking the movement of goods, RFID can play an important role in reflecting the inventory levels of various distribution areas. In today's globalized industrial environment, physical logistics operations and the associated flow of information are the essential elements for companies to realize an efficient logistics workflow. Fundamentally, a flexible logistics workflow, characterized by fast responsiveness in dealing with customer requirements through the integration of various value chain activities, is key to leveraging the business performance of enterprises. The significance of this research is its demonstration of the synergy of combining advanced technologies into an integrated system that helps achieve a lean and agile logistics workflow.
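A minimal sketch of the RFID side of such a system: folding a stream of reader events into per-area inventory levels and flagging areas for replenishment. Tag IDs, areas and the reorder point are invented, and the AI techniques of the actual system are not shown:

```python
from collections import defaultdict

# (tag_id, area, direction) tuples standing in for RFID reader events; all values are illustrative
events = [
    ("TAG001", "warehouse_A", "in"), ("TAG002", "warehouse_A", "in"),
    ("TAG001", "warehouse_A", "out"), ("TAG003", "retail_hub", "in"),
]
REORDER_POINT = 2  # hypothetical minimum stock per distribution area

inventory = defaultdict(set)
for tag, area, direction in events:
    if direction == "in":
        inventory[area].add(tag)      # item entered this distribution area
    else:
        inventory[area].discard(tag)  # item left this distribution area

for area, tags in inventory.items():
    action = "replenish" if len(tags) < REORDER_POINT else "ok"
    print(f"{area}: {len(tags)} tagged items -> {action}")
```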
Abstract:
In order to survive in an increasingly customer-oriented marketplace, continuous quality improvement is a hallmark of the fastest-growing quality organizations. In recent years, attention has been focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within a specified process rather than to focus on cooperation within the production workflow. This paper proposes an intelligent system with a newly designed algorithm and a universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature to provide employees at all levels with the ability to understand the relationships between processes, especially when any aspect of a process is about to degrade or fail. An example of generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
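The following is a much-simplified sketch of a fuzzy association rule of the kind mentioned above: a numeric process parameter is fuzzified into a linguistic term and the rule linking it to quality problems is scored by fuzzy support and confidence. The membership function and records are invented for illustration:

```python
def high_temp(x):
    """Membership of 'oven temperature is HIGH' (illustrative ramp between 180 and 200)."""
    return max(0.0, min(1.0, (x - 180) / 20))

# each record: (oven temperature, 1 if the unit failed quality inspection else 0) -- invented data
records = [(170, 0), (185, 0), (195, 1), (200, 1), (205, 1), (175, 0)]

n = len(records)
support_ante = sum(high_temp(t) for t, _ in records) / n                      # antecedent support
support_rule = sum(min(high_temp(t), float(defect)) for t, defect in records) / n
confidence = support_rule / support_ante if support_ante else 0.0

print(f"rule 'temperature HIGH => quality problem': "
      f"support={support_rule:.2f}, confidence={confidence:.2f}")
```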
Abstract:
In this paper, a co-operative distributed process mining system (CDPMS) is developed to streamline the workflow along the supply chain in order to offer shorter delivery times, greater flexibility and higher customer satisfaction, with learning ability. The proposed system is equipped with a 'distributed process mining' feature, which is used to discover the hidden relationships among working decisions in a distributed manner. This method incorporates the concepts of data mining and knowledge refinement into the decision-making process to ensure 'doing the right things' within the workflow. An example of implementation is given, based on the case of a slider manufacturer.
Abstract:
The goal of evidence-based medicine is to uniformly apply evidence gained from scientific research to aspects of clinical practice. In order to achieve this goal, new applications that integrate increasingly disparate health care information resources are required. Access to and provision of evidence must be seamlessly integrated with existing clinical workflow and evidence should be made available where it is most often required - at the point of care. In this paper we address these requirements and outline a concept-based framework that captures the context of a current patient-physician encounter by combining disease and patient-specific information into a logical query mechanism for retrieving relevant evidence from the Cochrane Library. Returned documents are organized by automatically extracting concepts from the evidence-based query to create meaningful clusters of documents which are presented in a manner appropriate for point of care support. The framework is currently being implemented as a prototype software agent that operates within the larger context of a multi-agent application for supporting workflow management of emergency pediatric asthma exacerbations. © 2008 Springer-Verlag Berlin Heidelberg.
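A minimal sketch of the query-construction and clustering steps the abstract outlines: disease- and patient-specific concepts are combined into a boolean query, and returned documents are grouped by the query concepts they mention. The concept names and document titles are invented; the real framework queries the Cochrane Library:

```python
from collections import defaultdict

disease_concepts = ["asthma exacerbation", "bronchodilator"]   # from the presenting condition
patient_concepts = ["pediatric"]                               # from the current encounter
query = " AND ".join(f'"{c}"' for c in disease_concepts + patient_concepts)
print("evidence query:", query)

# stand-ins for titles of documents returned by the evidence source
documents = [
    "Bronchodilator delivery in pediatric asthma exacerbation",
    "Corticosteroids for acute asthma exacerbation",
    "Pediatric emergency department triage",
]

# cluster returned documents by the query concepts they contain
clusters = defaultdict(list)
for doc in documents:
    for concept in disease_concepts + patient_concepts:
        if concept.lower() in doc.lower():
            clusters[concept].append(doc)

for concept, docs in clusters.items():
    print(concept, "->", docs)
```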
Abstract:
Urinary proteomics is emerging as a powerful non-invasive tool for the diagnosis and monitoring of a variety of human diseases. We tested whether signatures of urinary polypeptides can contribute to the existing biomarkers for coronary artery disease (CAD). We examined a total of 359 urine samples from 88 patients with severe CAD and 282 controls. Spot urine was analyzed using capillary electrophoresis on-line coupled to ESI-TOF-MS, enabling characterization of more than 1000 polypeptides per sample. In a first step, a "training set" for biomarker definition was created. Multiple biomarker patterns clearly distinguished healthy controls from CAD patients, and we extracted 15 peptides that define a characteristic CAD signature panel. In a second step, the ability of the CAD-specific panel to predict the presence of CAD was evaluated in a blinded study using a "test set." The signature panel showed a sensitivity of 98% (95% confidence interval, 88.7-99.6) and a specificity of 83% (95% confidence interval, 51.6-97.4). Furthermore, the peptide pattern changed significantly toward the healthy signature, correlating with the level of physical activity after therapeutic intervention. Our results show that urinary proteomics can identify CAD patients with high confidence and might also play a role in monitoring the effects of therapeutic interventions. The workflow is amenable to routine clinical testing, suggesting that non-invasive proteomics analysis can become a valuable addition to other biomarkers used in cardiovascular risk assessment.
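The reported sensitivity and specificity are accompanied by 95% confidence intervals; the sketch below shows how such intervals can be computed with the Wilson score method (which may differ from the method used in the study) from purely hypothetical test-set counts:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# hypothetical blinded test set: 40 CAD patients (39 detected), 18 controls (15 correctly negative)
sens, sens_lo, sens_hi = 39 / 40, *wilson_ci(39, 40)
spec, spec_lo, spec_hi = 15 / 18, *wilson_ci(15, 18)
print(f"sensitivity {sens:.0%}, 95% CI {sens_lo:.1%}-{sens_hi:.1%}")
print(f"specificity {spec:.0%}, 95% CI {spec_lo:.1%}-{spec_hi:.1%}")
```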
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either unavailable or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the Web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
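One elementary step in such an elicitation workflow is turning an expert's judged quartiles into a fitted distribution. The sketch below fits a normal distribution from invented quartile judgements; the SHELF method supports considerably richer fitting than this:

```python
from statistics import NormalDist

# hypothetical elicited judgements for an uncertain model input (e.g. soil porosity)
lower_quartile, median, upper_quartile = 0.32, 0.40, 0.46

# fit a normal distribution: mean from the median, sd from the inter-quartile range
mu = median
sigma = (upper_quartile - lower_quartile) / (2 * NormalDist().inv_cdf(0.75))
fitted = NormalDist(mu, sigma)

print(f"fitted N({mu:.2f}, {sigma:.3f}^2)")
print("95% credible interval:",
      (round(fitted.inv_cdf(0.025), 3), round(fitted.inv_cdf(0.975), 3)))
```

The fitted distribution (or its parameters) is what would then be serialised, for example as UncertML, for downstream use in an uncertainty-enabled workflow.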
Abstract:
A quasi-biotic model of knowledge evolution has been applied to manufacturing technology capability development, which includes product design and development and manufacturing process/workflow improvement. The concepts of "knowledge genes" and "knowledge body" are introduced to explain the evolution of technological capability. It is shown that knowledge development within the enterprise happens as a result of interactions between an enterprise's internal knowledge and that acquired from external sources, catalysed by: (a) internal mechanisms, resources and incentives, and (b) the actions and policies of external agencies. A matrix specifying the factors contributing to knowledge development and the types of manufacturing capabilities (product design, equipment development or use, and workflow) is developed to explain technological knowledge development. The case studies of Tianjin Pipe Corporation (TPCO) and Tianjin Tianduan Press Co. are presented to illustrate the application of the matrix.
Abstract:
Intelligent environments aim at supporting the user in executing her everyday tasks, e.g. by guiding her through a maintenance or cooking procedure. This requires a machine-processable representation of the tasks, for which workflows have proven an efficient means. The increasing number of available sensors in intelligent environments can facilitate the execution of workflows. The sensors can help to recognize when a user has finished a step in the workflow and thus to automatically proceed to the next step. This can greatly reduce the amount of required user interaction. However, manually specifying the conditions for triggering the next step in a workflow is very cumbersome and almost impossible for environments which are not known at design time. In this paper, we present a novel approach for learning and adapting these conditions from observation. We show that the learned conditions can even outperform conditions manually specified by workflow experts in terms of quality. Thus, the presented approach is very well suited for automatically adapting workflows in intelligent environments and can in that way increase the efficiency of workflow execution. © 2011 IEEE.
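A deliberately crude stand-in for the learning idea described above: record the sensor state each time users complete a given workflow step, and keep as the trigger condition only the attribute values common to every completion. Sensor names and values are invented:

```python
# sensor snapshots captured at the moments users finished the step "water boiled" (invented data)
observations = [
    {"kettle_power": "off", "kettle_temp": "hot", "cupboard": "open"},
    {"kettle_power": "off", "kettle_temp": "hot", "cupboard": "closed"},
    {"kettle_power": "off", "kettle_temp": "hot", "cupboard": "open"},
]

# learn the condition as the attribute/value pairs shared by all observed completions
learned_condition = dict(set.intersection(*(set(o.items()) for o in observations)))
print("learned trigger condition:", learned_condition)

def step_completed(current_state, condition=learned_condition):
    """Fire the transition to the next workflow step when the learned condition holds."""
    return all(current_state.get(k) == v for k, v in condition.items())

print(step_completed({"kettle_power": "off", "kettle_temp": "hot", "cupboard": "closed"}))
```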
Abstract:
Ubiquitous computing requires lightweight approaches to coordinating tasks distributed across smart devices. We are currently developing a semantic workflow modelling approach that blends the proven robustness of XPDL with semantics to support proactive behaviour. We illustrate the potential of the model through an example based on mixing a dry martini.
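The abstract does not give the model itself, but a rough sketch of the idea, in plain Python rather than XPDL and with invented task names and concept URIs, is a sequence of workflow activities carrying semantic annotations that a proactive assistant could reason over:

```python
# each activity pairs an XPDL-style task with semantic annotations (concept URIs are invented)
dry_martini_workflow = [
    {"task": "ChillGlass",   "requires": "urn:concept:glass",    "produces": "urn:concept:chilled-glass"},
    {"task": "AddGin",       "requires": "urn:concept:gin",      "produces": "urn:concept:spirit-in-glass"},
    {"task": "AddVermouth",  "requires": "urn:concept:vermouth", "produces": "urn:concept:mixed-drink"},
    {"task": "GarnishOlive", "requires": "urn:concept:olive",    "produces": "urn:concept:dry-martini"},
]

def next_task(available_concepts):
    """Proactively suggest the first remaining task whose required concept is available."""
    for activity in dry_martini_workflow:
        if activity["requires"] in available_concepts:
            return activity["task"]
    return None

print(next_task({"urn:concept:gin", "urn:concept:olive"}))  # -> AddGin
```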
Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, this thesis firstly provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for Simple Object Access Protocol (SOAP), Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. This thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is proved with the implementation of a Web-based workflow for predicting future crop yields in the UK, also demonstrating the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
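A minimal sketch of the emulator idea at the heart of the thesis: replace an expensive model with a cheap statistical surrogate fitted to a handful of model runs. A polynomial stands in here for the more sophisticated emulators the thesis targets, and the 'model' and design points are invented:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a slow environmental model exposed as a web service (invented)."""
    return np.sin(3 * x) + 0.5 * x

# small experimental design: run the real model only a few times
design = np.linspace(0.0, 2.0, 8)
runs = expensive_model(design)

# fit a cheap polynomial surrogate (a real emulator would typically be a Gaussian process)
coeffs = np.polyfit(design, runs, deg=4)

def emulator(x):
    """Cheap surrogate evaluated in place of the expensive model."""
    return np.polyval(coeffs, x)

x_new = np.linspace(0.0, 2.0, 5)
for x, approx, truth in zip(x_new, emulator(x_new), expensive_model(x_new)):
    print(f"x={x:.2f}  emulator={approx:+.3f}  model={truth:+.3f}")
```

Once fitted, the surrogate can be called many thousands of times inside an uncertainty-propagation workflow at negligible cost.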
Abstract:
We argue that, for certain constrained domains, elaborate model transformation technologies (implemented from scratch in general-purpose programming languages) are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document, representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
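A minimal sketch of the central mechanism: a metamodel expressed as an XML Schema, and a model created by filling in a form, i.e. a document that must validate against that schema. The schema and element names are invented and far simpler than the CancerGrid metamodel; the example uses the third-party lxml package for validation:

```python
from lxml import etree  # third-party library providing XML Schema validation

# an (invented) fragment of a trial-design metamodel, expressed as XML Schema
metamodel = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="trial">
    <xs:complexType><xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="targetRecruitment" type="xs:positiveInteger"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""

# the scientist's model: a protocol instantiating the metamodel (what completing the form produces)
protocol = b"""<trial>
  <title>Adjuvant therapy pilot</title>
  <targetRecruitment>120</targetRecruitment>
</trial>"""

schema = etree.XMLSchema(etree.fromstring(metamodel))
print("protocol conforms to metamodel:", schema.validate(etree.fromstring(protocol)))
```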
Abstract:
The objective of this thesis is to discover "How are informal decisions reached by screeners when filtering out undesirable job applications?" Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source, by screeners, in three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it locates the Cycle of Employment, which enhances Robertson and Smith's (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly affect recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. In order to meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study has been used to demonstrate the use of these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.
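A minimal sketch of the rough-cut capacity check mentioned at the end of the abstract: the load implied by each alternative early process plan is compared against available work-centre capacity. All routings, times and capacities are invented:

```python
# alternative early process plans: (work centre, hours per unit) pairs -- invented figures
process_plans = {
    "plan_A": [("milling", 0.6), ("assembly", 0.4)],
    "plan_B": [("casting", 0.3), ("assembly", 0.5)],
}
available_hours = {"milling": 400, "casting": 250, "assembly": 500}  # per planning period
demand = 800  # units required in the period

for name, routing in process_plans.items():
    loads = {wc: hours * demand for wc, hours in routing}          # rough-cut load per work centre
    feasible = all(loads[wc] <= available_hours[wc] for wc in loads)
    print(name, loads, "feasible" if feasible else "over capacity")
```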