994 results for WORKFLOW SYSTEMS


Relevance: 60.00%

Abstract:

Researchers want to run scientific experiments focusing on their disciplines. They do not want to know how and where the experiments are executed. Science gateways hide these details by coordinating the execution of experiments across different infrastructures and workflow systems. The ER-flow/SHIWA and SCI-BUS projects developed repositories to share artefacts such as applications, portlets and workflows within and among research communities. Sharing artefacts in repositories enables gateway developers to reuse them when building a new gateway and/or creating a new application.

Relevance: 60.00%

Abstract:

Researchers want to analyse Health Care data, which may require large pools of compute and data resources. To obtain these resources they need access to Distributed Computing Infrastructures (DCIs), and using them requires expertise that researchers may not have. Workflows can hide the infrastructures, but there are many workflow systems and they are not interoperable. Learning a workflow system and creating workflows in it may require significant effort. Given this effort, it is not reasonable to expect researchers to learn new workflow systems just to run workflows written for other workflow systems. As a result, the lack of interoperability prevents workflow sharing and a vast amount of research effort is wasted. The FP7 Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs (SHIWA) project developed the Coarse-Grained Interoperability (CGI) approach to enable workflow sharing. The project created the SHIWA Simulation Platform (SSP) to support CGI as a production-level service. The paper describes how the CGI approach can be used for analysis and simulation in Health Care.

Relevance: 60.00%

Abstract:

The service-oriented approach to performing distributed scientific research is potentially very powerful but is not yet widely used in many scientific fields. This is partly due to the technical difficulties involved in creating services and workflows and the inefficiency of many workflow systems with regard to handling large datasets. We present the Styx Grid Service, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs. Styx Grid Services are very easy to create and use and can be composed into powerful workflows with simple shell scripts or more sophisticated graphical tools. An important feature of the system is that data can be streamed directly from service to service, significantly increasing the efficiency of workflows that use large data volumes. The status and progress of Styx Grid Services can be monitored asynchronously using a mechanism that places very few demands on firewalls. We show how Styx Grid Services can interoperate with Web Services and WS-Resources using suitable adapters.
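
A minimal sketch of the general idea of streaming data directly between wrapped command-line programs (an illustration only, not the Styx Grid Service implementation; the program names extract_region and compute_stats are hypothetical):

```python
# Minimal sketch (not the Styx Grid Service implementation): two wrapped
# command-line programs composed so that data streams directly from the
# first to the second, with no intermediate file staged between "services".
# "extract_region" and "compute_stats" are hypothetical program names.
import subprocess

def run_streaming_pipeline(input_path: str, output_path: str) -> None:
    with open(input_path, "rb") as src, open(output_path, "wb") as dst:
        # First wrapped program reads the dataset and writes to stdout.
        producer = subprocess.Popen(
            ["extract_region", "--bbox", "0,0,10,10"],
            stdin=src, stdout=subprocess.PIPE,
        )
        # Second wrapped program consumes that stream directly from the pipe.
        consumer = subprocess.Popen(
            ["compute_stats"], stdin=producer.stdout, stdout=dst,
        )
        producer.stdout.close()  # the consumer sees EOF when the producer exits
        consumer.wait()
        producer.wait()

if __name__ == "__main__":
    run_streaming_pipeline("input.dat", "stats.txt")
```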

Relevance: 60.00%

Abstract:

Current scientific applications are often structured as workflows and rely on workflow systems to compile abstract experiment designs into enactable workflows that utilise the best available resources. The automation of this step, and of the workflow enactment, hides the details of how results have been produced. Knowing how compilation and enactment occurred allows results to be reconnected with the experiment design. We investigate how provenance helps scientists to connect their results with the actual execution that took place, their original experiment, and its inputs and parameters.
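
A minimal sketch of the kind of provenance link this describes, using a simple record structure of our own (not the authors' provenance model); all identifiers are hypothetical:

```python
# Minimal sketch of a provenance record linking a result back to the concrete
# execution that produced it and to the abstract experiment design it was
# compiled from (our own record structure; all identifiers are hypothetical).
provenance = {
    "result": "catalogue_v3.fits",
    "produced_by": {
        "execution_id": "run-2041",
        "resources": ["cluster-a/node17"],
        "enacted_workflow": "concrete_wf_v12",
    },
    "compiled_from": {
        "abstract_design": "source_extraction_experiment",
        "inputs": ["survey_tile_042.fits"],
        "parameters": {"detection_threshold": 5.0},
    },
}

def explain(record: dict) -> str:
    """Walk the trace from a result back to the experiment that defined it."""
    run = record["produced_by"]
    design = record["compiled_from"]
    return (f'{record["result"]} was produced by {run["execution_id"]} '
            f'enacting {run["enacted_workflow"]}, compiled from '
            f'{design["abstract_design"]} with parameters {design["parameters"]}')

print(explain(provenance))
```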

Relevance: 60.00%

Abstract:

The provision of Human Resource (HR) services, especially payroll, is a core function in every organization. Previously, providers of HR/payroll offered their services to their clients via conventional modes of communication such as telephone, facsimile and courier services. In recent years, with the advent of the Internet and the emergence of web-based electronic commerce, there has been a rise in the adoption of web-based technology and information systems by service providers, enabling them to interact with their clients through this medium. This development necessitates the use of web-based user interfaces as workspaces between the HR/payroll providers and their clients, and thus raises certain concerns that determine the effectiveness of web-based workflow systems. These concerns, related to the use of web interfaces, form the basis of the patterns discussed in this paper.

Relevance: 60.00%

Abstract:

Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client’s site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
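
A minimal sketch of the streaming idea, in which chunks are processed as they arrive instead of after the whole dataset has been transferred (this is not the elastream package; the per-chunk analysis is a hypothetical placeholder):

```python
# Minimal sketch of the streaming idea (not the elastream package): chunks of
# an NGS file are processed as soon as they "arrive", so computation overlaps
# with the data transfer instead of waiting for the full upload to finish.
# count_reads() is a hypothetical placeholder for any per-chunk analysis that
# is independent across chunks.
import queue
import threading

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks

def count_reads(chunk: bytes) -> int:
    # FASTQ stores one read per four lines; counting '@' headers gives a
    # rough per-chunk estimate for this illustration.
    return chunk.count(b"\n@")

def transfer(path: str, chunks: queue.Queue) -> None:
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            chunks.put(chunk)   # simulates a chunk arriving at the cloud side
    chunks.put(None)            # end-of-stream marker

def process_while_transferring(path: str) -> int:
    chunks: queue.Queue = queue.Queue(maxsize=8)
    sender = threading.Thread(target=transfer, args=(path, chunks))
    sender.start()
    total = 0
    while (chunk := chunks.get()) is not None:
        total += count_reads(chunk)   # processing overlaps with the transfer
    sender.join()
    return total
```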

Relevance: 60.00%

Abstract:

In recent years, a variety of systems have been developed that export the workflows used to analyze data and make them part of published articles. We argue that the workflows published in current approaches are dependent on the specific codes used for execution, the specific workflow system used, and the specific workflow catalogs where they are published. In this paper, we describe a new approach that addresses these shortcomings and makes workflows more reusable through: 1) the use of abstract workflows to complement executable workflows, making them reusable when the execution environment is different; 2) the publication of both abstract and executable workflows using standards such as the Open Provenance Model, so that they can be imported by other workflow systems; 3) the publication of workflows as Linked Data, which results in open, web-accessible workflow repositories. We illustrate this approach using a complex workflow that we re-created from an influential publication that describes the generation of 'drugomes'.
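
A minimal sketch of the abstract-versus-executable distinction (our own illustration, not the representation used in the paper; the step name and bindings are hypothetical):

```python
# Minimal sketch of abstract workflows complementing executable ones (our own
# illustration, not the paper's representation): the abstract step records
# *what* is done, while each binding records *how* a particular workflow
# system executes it. The step name and bindings are hypothetical.
from dataclasses import dataclass

@dataclass
class AbstractStep:
    name: str             # discipline-level description of the step
    inputs: list
    outputs: list

@dataclass
class ExecutableBinding:
    step: AbstractStep
    workflow_system: str  # the system that enacts this concrete version
    command: str          # concrete component or command used for execution

blast = AbstractStep("blast_search", ["proteins.fasta"], ["hits.xml"])
bindings = [
    ExecutableBinding(blast, "SystemA", "invoke remote BLAST service"),
    ExecutableBinding(blast, "SystemB", "blastp -query proteins.fasta -out hits.xml"),
]
# Publishing both levels lets another group re-bind the abstract step when the
# original execution environment is no longer available.
for b in bindings:
    print(f"{b.step.name} via {b.workflow_system}: {b.command}")
```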

Relevance: 60.00%

Abstract:

Workflow systems have traditionally focused on so-called production processes, which are characterized by pre-definition, high volume and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach provides the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we discuss both the static and dynamic verification aspects. We also briefly present Chameleon, a prototype workflow engine that implements these concepts.
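
A minimal sketch of the pocket-of-flexibility idea under simple assumptions of our own (not the Chameleon engine); the constraint kinds and activity names are hypothetical:

```python
# Minimal sketch of a "pocket of flexibility" (not the Chameleon engine): the
# process is only partially specified, and the activities chosen at runtime to
# fill the pocket are verified against a small constraint set before
# enactment. Constraint kinds and activity names are hypothetical.
def violations(constraints, pocket_fill):
    """Return the constraints broken by the runtime-built fragment."""
    broken = []
    for kind, a, b in constraints:
        if kind == "include" and a not in pocket_fill:
            broken.append((kind, a, b))
        elif kind == "exclude" and a in pocket_fill:
            broken.append((kind, a, b))
        elif kind == "order" and a in pocket_fill and b in pocket_fill \
                and pocket_fill.index(a) > pocket_fill.index(b):
            broken.append((kind, a, b))
    return broken

# Fixed part of the process: receive_claim -> [pocket] -> close_claim.
constraints = [
    ("include", "assess_damage", None),            # must appear in the pocket
    ("order", "assess_damage", "approve_payout"),  # assessment before payout
    ("exclude", "close_claim", None),              # already in the fixed part
]
runtime_fill = ["assess_damage", "request_documents", "approve_payout"]
print(violations(constraints, runtime_fill))       # [] -> the fragment conforms
```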

Relevance: 60.00%

Abstract:

Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
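
A tiny in-memory sketch of the MapReduce paradigm mentioned above (a generic illustration, not the WS-PGRADE/gUSE or Hadoop integration itself):

```python
# Tiny in-memory sketch of the MapReduce paradigm (a generic illustration,
# not the WS-PGRADE/gUSE or Hadoop integration itself): the map phase emits
# key/value pairs from independent input splits, and the reduce phase
# aggregates the values per key.
from collections import defaultdict

def map_phase(split: str):
    for word in split.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

splits = ["big data in the cloud", "workflows and big data"]
pairs = [pair for split in splits for pair in map_phase(split)]
print(reduce_phase(pairs))  # e.g. {'big': 2, 'data': 2, 'in': 1, ...}
```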

Relevance: 60.00%

Abstract:

Scientific workflows orchestrate the execution of complex experiments, frequently using distributed computing platforms. Meta-workflows represent an emerging type of such workflows which aim to reuse existing workflows, potentially from different workflow systems, to achieve more complex experiments while minimizing workflow design and testing efforts. Workflow interoperability plays a profound role in achieving this objective. This paper is focused on fostering interoperability across meta-workflows that combine workflows of different workflow systems from diverse scientific domains. This is achieved by formalizing definitions of meta-workflows and their different types in order to standardize the data structures used to describe workflows to be published and shared via public repositories. The paper also includes a thorough formalization of two workflow interoperability approaches based on this formal description: the coarse-grained and the fine-grained workflow interoperability approach. The paper presents a case study from Astrophysics which successfully demonstrates the use of the concepts of meta-workflows and workflow interoperability within a scientific simulation platform.
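
A minimal sketch of the coarse-grained interoperability idea, in which each meta-workflow node wraps a complete workflow of another system and is enacted as a black box (our own illustration, not the paper's formal model; system names and submit commands are hypothetical):

```python
# Minimal sketch of coarse-grained workflow interoperability (our own
# illustration, not the paper's formal model): each meta-workflow node wraps a
# complete workflow of some workflow system and is enacted as a black box via
# that system's own submission mechanism. Names and commands are hypothetical.
from dataclasses import dataclass

@dataclass
class WrappedWorkflow:
    name: str
    system: str           # the native workflow system of this sub-workflow
    submit_cmd: list      # how that system would enact the whole workflow

    def run(self) -> None:
        # A real gateway would hand the workflow to the native engine here,
        # e.g. by spawning self.submit_cmd; this sketch only reports the step.
        print(f"submitting {self.name} as a black box via {self.system}")

# A meta-workflow is an ordered composition of such black-box nodes, possibly
# mixing workflows from different systems and scientific domains.
meta_workflow = [
    WrappedWorkflow("calibrate_images", "SystemA", ["systema-run", "calibrate.wf"]),
    WrappedWorkflow("detect_sources", "SystemB", ["systemb-run", "detect.wf"]),
]
for node in meta_workflow:
    node.run()
```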

Relevance: 60.00%

Abstract:

Cloud computing has established itself as the latest computing paradigm in recent years. As doing science in the cloud becomes a reality, scientists are now able to access public cloud centers and employ high-performance computing resources to run scientific applications. However, due to the dynamic nature of the cloud environment, the usability of scientific cloud workflow systems can deteriorate significantly without effective service quality assurance strategies. Specifically, workflow temporal verification, as the major approach to workflow temporal QoS (Quality of Service) assurance, plays a critical role in the on-time completion of large-scale scientific workflows. Great effort has been dedicated to workflow temporal verification in recent years, and it is high time to define the key research issues for scientific cloud workflows in order to keep the research on the right track. In this paper, we systematically investigate this problem and present four key research issues based on the introduction of a generic temporal verification framework. State-of-the-art solutions for each research issue and open challenges are also presented. Finally, SwinDeW-V, an ongoing research project on temporal verification as part of our SwinDeW-C cloud workflow system, is also demonstrated.
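
A minimal sketch of a temporal verification check at a runtime checkpoint (a generic illustration, not the SwinDeW-V framework; all durations are hypothetical):

```python
# Minimal sketch of a temporal verification check at a runtime checkpoint
# (a generic illustration, not the SwinDeW-V framework): compare the time
# already consumed plus the estimated remaining duration against the deadline
# of a temporal constraint. All durations below are hypothetical.
def temporally_consistent(elapsed: float, remaining_estimates: list,
                          deadline: float) -> bool:
    """True if the workflow is still expected to finish within its deadline."""
    return elapsed + sum(remaining_estimates) <= deadline

# Checkpoint after the first activities of a scientific cloud workflow.
elapsed = 42.0                 # minutes consumed so far
remaining = [10.0, 25.0, 8.0]  # mean duration estimates for remaining activities
if temporally_consistent(elapsed, remaining, deadline=80.0):
    print("on track for on-time completion")
else:
    print("temporal violation detected: trigger exception handling")
```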

Relevance: 60.00%

Abstract:

The massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, with large application data sets stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud. However, they are either insufficiently cost-effective for the storage or impractical to use at runtime. In this paper, working toward the minimum cost benchmark, we propose a novel, highly cost-effective and practical storage strategy that can automatically decide at runtime whether a generated data set should be stored or not in the cloud. The main focus of this strategy is local optimization of the tradeoff between computation and storage, while secondarily also taking users' (optional) storage preferences into consideration. Both theoretical analysis and simulations conducted on general (random) data sets as well as specific real-world applications with Amazon's cost model show that the cost-effectiveness of our strategy is close to or even the same as the minimum cost benchmark, and that its efficiency is high enough for practical runtime use in the cloud.
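
A minimal sketch of the runtime store-versus-recompute decision such a strategy has to make (our own simplification, not the paper's algorithm or benchmark; prices and rates are hypothetical):

```python
# Minimal sketch of the runtime store-versus-recompute decision (our own
# simplification, not the paper's strategy or benchmark): keep a generated
# data set only if storing it over its retention period is no more expensive
# than regenerating it on demand. Prices and rates are hypothetical, loosely
# styled on a pay-as-you-go cost model.
def should_store(size_gb: float, storage_price_gb_month: float,
                 retention_months: float, regen_cpu_hours: float,
                 cpu_price_hour: float, expected_regenerations: int) -> bool:
    storage_cost = size_gb * storage_price_gb_month * retention_months
    recompute_cost = regen_cpu_hours * cpu_price_hour * expected_regenerations
    return storage_cost <= recompute_cost

# A 200 GB intermediate data set, kept for 2 months, that would otherwise be
# regenerated 3 times at 30 CPU-hours per regeneration.
print(should_store(size_gb=200, storage_price_gb_month=0.023,
                   retention_months=2, regen_cpu_hours=30,
                   cpu_price_hour=0.10, expected_regenerations=3))
```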

Relevance: 40.00%

Abstract:

In recent years several scientific Workflow Management Systems (WfMSs) have been developed with the aim of automating large-scale scientific experiments. Many offerings have been developed, but none of them has been promoted to an accepted standard. In this paper we propose a pattern-based evaluation of three of the most widely used scientific WfMSs: Kepler, Taverna and Triana. The aim is to compare them with traditional business WfMSs, emphasizing the strengths and deficiencies of both kinds of system. Moreover, a set of new patterns is defined from the analysis of the three considered systems.

Relevance: 40.00%

Abstract:

Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. In this paper the feasibility of using workflow technology as a knowledge management system is discussed, and a practical use case is presented. This use case is an information system that has been deployed within a banking environment. It combines common workflow technology with a new conception of the interaction among participants through the extension of existing definition languages.

Relevance: 40.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.