10 results for Workflow

in WestminsterResearch - UK


Relevance:

20.00%

Publisher:

Abstract:

E-scientists want to run their scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running experiments on these infrastructures requires specific expertise that e-scientists may not have. Workflows can hide resources and services behind a virtualization layer, providing a user interface that e-scientists can use. Many workflow systems are used by research communities, but they are not interoperable. Learning a workflow system and creating workflows in it may require significant effort from e-scientists. Given this effort, it is not reasonable to expect research communities to learn new workflow systems whenever they want to run workflows developed in other systems. The solution is to create workflow interoperability solutions that allow workflow sharing. The FP7 Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs (SHIWA) project developed two interoperability solutions to support workflow sharing: Coarse-Grained Interoperability (CGI) and Fine-Grained Interoperability (FGI). The project created the SHIWA Simulation Platform (SSP) to implement the Coarse-Grained Interoperability approach as a production-level service for research communities. The paper describes the CGI approach and how it enables sharing and combining existing workflows into complex applications and running them on Distributed Computing Infrastructures. The paper also outlines the architecture, components and usage scenarios of the simulation platform.
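As a rough illustration of the coarse-grained idea, the sketch below treats a non-native workflow as a black-box job with declared inputs, so that workflows from different systems can be chained into one application. All command names and file formats here are illustrative assumptions, not the SHIWA API.

```python
# Hypothetical sketch of coarse-grained interoperability (CGI): a foreign
# workflow is wrapped as an opaque executable with declared inputs, so any
# host engine can schedule it without understanding its internals.
# Engine commands and file names are illustrative, not the SHIWA API.
import subprocess
from dataclasses import dataclass, field

@dataclass
class WorkflowJob:
    """A workflow from some system, wrapped as a black-box job."""
    engine_cmd: list[str]            # e.g. the engine's command-line launcher
    workflow_file: str               # the workflow description, any format
    inputs: dict[str, str] = field(default_factory=dict)

    def submit(self) -> int:
        """Run the wrapped workflow; the caller only sees the exit status."""
        args = self.engine_cmd + [self.workflow_file]
        for name, value in self.inputs.items():
            args += [f"--{name}", value]
        return subprocess.run(args).returncode

# Combining workflows of different systems into a complex application
# then reduces to sequencing black-box jobs (names are placeholders):
pipeline = [
    WorkflowJob(["taverna-cli"], "preprocess.t2flow", {"in": "raw.dat"}),
    WorkflowJob(["moteur", "run"], "simulate.gwendia", {"in": "clean.dat"}),
]
for job in pipeline:
    assert job.submit() == 0, "stage failed"
```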

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the impact of cloud computing and the use of GPUs on the performance of Autodock and Gromacs respectively. Cloud computing helped reduce the "tail" seen when running Autodock on desktop grids, and the GPU version of Gromacs showed significant improvement over the CPU version. A large library of small molecules (200,000 compounds), seven sialic acid analogues of the putative substrate and 8,000 sugar molecules were converted into pdbqt format and used to interrogate the Trichomonas vaginalis neuraminidase using Autodock Vina. Good binding energies were noted for some of the small molecules (~ -9 kcal/mol), but the sugars bound with affinities of less than -7.6 kcal/mol. Screening the sugar library produced a top hit with α-2,3-sialyllacto-N-fucopentaose III, a derivative of the sialyl Lewis x structure and a known substrate of the enzyme; indeed, 8 of the top 100 hits were related to this structure. A comparison of Autodock Vina and Autodock 4.2 was made for the high-affinity small molecules; in some cases the results were superimposable, whereas in others the match was poorer. Validating this work will require extensive "wet lab" experiments to determine the utility of the workflow in predicting potential enzyme inhibitors.
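To make the screening workflow concrete, here is a minimal sketch of the docking loop the abstract describes: each ligand, already converted to pdbqt, is docked against the receptor with AutoDock Vina, and compounds are ranked by their best predicted binding energy. The receptor file name, library path and search-box parameters are placeholders, not values from the paper.

```python
# Minimal virtual-screening loop in the spirit of the abstract. Receptor
# path, library directory and docking-box values are placeholders.
import glob
import re
import subprocess

RECEPTOR = "tv_neuraminidase.pdbqt"          # placeholder receptor file
BOX = ["--center_x", "0", "--center_y", "0", "--center_z", "0",
       "--size_x", "20", "--size_y", "20", "--size_z", "20"]

def dock(ligand: str) -> float:
    """Run Vina and return the best (most negative) affinity in kcal/mol."""
    out = ligand.replace(".pdbqt", "_out.pdbqt")
    subprocess.run(["vina", "--receptor", RECEPTOR, "--ligand", ligand,
                    "--out", out, *BOX], check=True, capture_output=True)
    # Vina records each pose's score as "REMARK VINA RESULT: <energy> ..."
    with open(out) as f:
        energies = [float(m.group(1)) for line in f
                    if (m := re.match(r"REMARK VINA RESULT:\s+(-?\d+\.\d+)",
                                      line))]
    return min(energies)

# Rank the whole library by best binding energy and list the top hits.
scores = sorted((dock(lig), lig) for lig in glob.glob("library/*.pdbqt"))
for energy, ligand in scores[:100]:
    print(f"{energy:8.2f} kcal/mol  {ligand}")
```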

Relevance:

10.00%

Publisher:

Abstract:

The vast majority of scientific grid applications are either parameter sweep applications or contain a significant subpart that belongs to the class of parameter sweep activities. The paper describes a new graphical workflow language in which any node of the DAG-based workflow can be a parameter sweep node, and these nodes are transparently executed either in service grids or in desktop grids depending on the computational complexity of the workflow node. The new concept is supported by the CancerGrid portal, which has been established for a community of chemists.
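The sketch below illustrates the core idea under assumed names (it is not the CancerGrid implementation): a DAG node declares a parameter sweep, the node is expanded into one job per parameter combination, and a simple complexity heuristic routes each job to a service grid or a desktop grid.

```python
# Illustrative sketch of a parameter-sweep workflow node. The routing rule
# and all names are assumptions standing in for the transparent brokering
# the abstract describes.
from dataclasses import dataclass
from itertools import product

@dataclass
class SweepNode:
    name: str
    command: str                   # template with {param} placeholders
    parameters: dict[str, list]    # parameter name -> values to sweep
    cost_estimate: float           # expected CPU-hours per instance

    def instances(self):
        """Expand the node into one concrete command per combination."""
        names = list(self.parameters)
        for combo in product(*(self.parameters[n] for n in names)):
            yield self.command.format(**dict(zip(names, combo)))

def dispatch(node: SweepNode, threshold: float = 1.0) -> None:
    # Long-running instances go to the service grid, short ones to the
    # volunteer desktop grid.
    target = "service-grid" if node.cost_estimate > threshold else "desktop-grid"
    for cmd in node.instances():
        print(f"submit to {target}: {cmd}")

dispatch(SweepNode("dock", "autodock --ligand {lig} --ph {ph}",
                   {"lig": ["l1", "l2"], "ph": [5.0, 7.4]}, cost_estimate=0.2))
```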

Relevance:

10.00%

Publisher:

Abstract:

Researchers want to run scientific experiments focused on their disciplines. They do not want to know how and where the experiments are executed. Science gateways hide these details by coordinating the execution of experiments on different infrastructures and workflow systems. The ER-flow/SHIWA and SCI-BUS projects developed repositories to share artefacts such as applications, portlets and workflows within and among research communities. Sharing artefacts in repositories enables gateway developers to reuse them when building a new gateway and/or creating a new application.

Relevance:

10.00%

Publisher:

Abstract:

Researchers want to analyse health care data, which may require large pools of compute and data resources. To obtain these resources they need access to Distributed Computing Infrastructures (DCI), and using these infrastructures requires expertise that researchers may not have. Workflows can hide the infrastructures, but there are many workflow systems and they are not interoperable. Learning a workflow system and creating workflows in it may require significant effort. Given this effort, it is not reasonable to expect researchers to learn new workflow systems whenever they want to run workflows from other workflow systems. As a result, the lack of interoperability prevents workflow sharing, and a vast amount of research effort is wasted. The FP7 Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs (SHIWA) project developed Coarse-Grained Interoperability (CGI) to enable workflow sharing. The project created the SHIWA Simulation Platform (SSP) to support CGI as a production-level service. The paper describes how the CGI approach can be used for analysis and simulation in health care.

Relevance:

10.00%

Publisher:

Abstract:

Existing Workflow Management Systems (WFMSs) follow a pragmatic approach. They often use a proprietary modelling language with an intuitive graphical layout, but the underlying semantics lack a formal foundation. As a consequence, analysis tasks such as proving correctness (i.e. soundness and completeness) and reliable execution are not supported at design level. This project will use an applied ontology approach, formally defining key terms such as process, sub-process and action/task on the basis of formal temporal theory. Current business process modelling (BPM) standards such as Business Process Modelling Notation (BPMN) and the Unified Modelling Language (UML) Activity Diagram (AD) model their constructs with no logical basis. This investigation will contribute to research and industry by providing a framework that grounds BPM in the means to reason about and represent a correct business process (BP). Such grounding is missing in the current BPM domain; providing it may reduce design costs and avert the burden of the redundant terms used by the current standards. A graphical tool will be introduced that implements the formal ontology defined in the framework; this tool can be used for modelling and, at the same time, for validating the model. The research will also fill an existing gap by providing a unified graphical representation that expresses a BP in a logically consistent manner for the mainstream modelling standards in business and IT. A case study will analyse a catalogue of existing 'patient pathways' (i.e. processes) of King's College Hospital NHS Trust, including current performance statistics. Following application of the framework, a mapping will be conducted and new performance statistics collected. A cost/benefit analysis report will be produced comparing the results of the two approaches.
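As a toy illustration of why a formal basis helps (my own construction, not the project's framework), the sketch below models a process as a directed graph of tasks and mechanically checks a classic structural soundness condition: every task must lie on some path from start to end.

```python
# Toy soundness check on a process graph: a task is sound only if it is
# reachable from the start node and can still reach the end node. This is
# an illustration of design-level correctness checking, not the project's
# formal temporal ontology.
from collections import defaultdict

def reachable(edges, root):
    """All nodes reachable from root via directed edges."""
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(edges.get(n, ()))
    return seen

def is_sound(tasks, flows, start, end):
    fwd, rev = defaultdict(list), defaultdict(list)
    for a, b in flows:
        fwd[a].append(b)
        rev[b].append(a)
    from_start = reachable(fwd, start)
    to_end = reachable(rev, end)
    return all(t in from_start and t in to_end for t in tasks)

# A diagram with a disconnected task fails the check:
print(is_sound({"admit", "treat", "orphan"},
               [("start", "admit"), ("admit", "treat"), ("treat", "end")],
               "start", "end"))   # False: "orphan" is unreachable
```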

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing offers the massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make them available to end-users via simple graphical user interfaces. Integrating such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested, and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution, based on the MapReduce paradigm, in the cloud. The analysis demonstrates that the described methods for integrating Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
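To give a flavour of the MapReduce paradigm underlying the integration, the sketch below is a classic Hadoop Streaming word count in Python. Streaming lets any executable that reads stdin and writes tab-separated key/value pairs act as a mapper or reducer, which is one plausible way a workflow node could wrap a Hadoop step; the file and directory names are illustrative.

```python
#!/usr/bin/env python3
# wordcount_streaming.py -- a minimal Hadoop Streaming job, runnable as
# either phase. Local test:
#   ./wordcount_streaming.py map < input.txt | sort | \
#       ./wordcount_streaming.py reduce
# On a cluster (jar path and directories illustrative):
#   hadoop jar hadoop-streaming.jar -files wordcount_streaming.py \
#     -mapper "wordcount_streaming.py map" \
#     -reducer "wordcount_streaming.py reduce" \
#     -input books/ -output counts/
import sys

def mapper():
    # Emit one tab-separated (word, 1) pair per token.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop sorts mapper output by key, so equal words arrive together.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```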

Relevance:

10.00%

Publisher:

Abstract:

Scientific workflows orchestrate the execution of complex experiments, frequently using distributed computing platforms. Meta-workflows are an emerging type of workflow that reuse existing workflows, potentially from different workflow systems, to achieve more complex experimentation while minimizing workflow design and testing effort. Workflow interoperability plays a profound role in achieving this objective. This paper focuses on fostering interoperability across meta-workflows that combine workflows of different workflow systems from diverse scientific domains. This is achieved by formalizing definitions of the meta-workflow and its different types, standardizing the data structures used to describe workflows published and shared via public repositories. The paper also includes a thorough formalization of two workflow interoperability approaches based on this formal description: the coarse-grained and the fine-grained workflow interoperability approach. A case study from astrophysics successfully demonstrates the use of the concepts of meta-workflows and workflow interoperability within a scientific simulation platform.
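The sketch below gives one possible, simplified reading of these definitions (the names and structure are my assumptions, not the paper's formalism): a meta-workflow is a workflow whose nodes reference workflows from possibly different systems, and each node is handled natively, via translation (fine-grained), or as a black box (coarse-grained).

```python
# Illustrative data model for a meta-workflow; all names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workflow:
    system: str        # e.g. "WS-PGRADE", "Taverna", "MOTEUR"
    descriptor: str    # repository reference to the workflow description

def has_translator(src: str, dst: str) -> bool:
    # Placeholder: in practice this would consult a registry of workflow
    # language translators; here only one hypothetical pair is known.
    return (src, dst) in {("Taverna", "WS-PGRADE")}

@dataclass
class MetaWorkflow:
    host_system: str
    nodes: dict[str, Workflow]
    edges: list[tuple[str, str]]   # data dependencies between nodes

    def interop_kind(self, node: str) -> str:
        wf = self.nodes[node]
        # Native nodes need no bridging; foreign nodes are translated when
        # possible (fine-grained), otherwise run as black boxes (coarse-grained).
        if wf.system == self.host_system:
            return "native"
        return ("fine-grained"
                if has_translator(wf.system, self.host_system)
                else "coarse-grained")

mw = MetaWorkflow("WS-PGRADE",
                  {"calibrate": Workflow("Taverna", "repo://astro/calibrate"),
                   "simulate": Workflow("MOTEUR", "repo://astro/simulate")},
                  [("calibrate", "simulate")])
print({n: mw.interop_kind(n) for n in mw.nodes})
# {'calibrate': 'fine-grained', 'simulate': 'coarse-grained'}
```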