803 results for composite web services


Relevance: 80.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
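
Because G-Rex exposes model runs through a REST interface, a plain HTTP client is enough to launch and monitor a run. Below is a minimal sketch in Python; the server URL, endpoint paths and response formats are hypothetical illustrations, not the documented G-Rex API:

    import time
    import requests  # third-party HTTP client (pip install requests)

    BASE = "http://cluster.example.org/grex"   # hypothetical G-Rex server URL

    # Start a run by POSTing an input file to the service (hypothetical endpoint).
    resp = requests.post(f"{BASE}/services/nemo/instances",
                         files={"input": open("namelist.nml", "rb")})
    run_url = resp.headers["Location"]         # REST style: URL of the new run resource

    # Poll until the run finishes, downloading output as it appears so that
    # data does not accumulate on the remote system.
    while requests.get(f"{run_url}/status").text.strip() == "RUNNING":
        listing = requests.get(f"{run_url}/outputs").json()   # files ready so far
        for name in listing:
            data = requests.get(f"{run_url}/outputs/{name}").content
            with open(name, "wb") as f:
                f.write(data)
        time.sleep(30)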

Relevance: 80.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
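
To illustrate step (3), the only change to an existing workflow is the launch command itself. The sketch below shows a yearly-resubmission driver in Python; the script names, processor count and the exact GRexRun arguments are illustrative assumptions, with only the mpirun-to-GRexRun substitution coming from the text:

    import subprocess

    YEARS = range(1958, 2008)       # a 50-year simulation, resubmitted yearly

    for year in YEARS:
        # Illustrative pre-processing step before each yearly job.
        subprocess.run(["./preprocess.sh", str(year)], check=True)

        # Original local launch (for comparison):
        # subprocess.run(["mpirun", "-np", "40", "./nemo.exe"], check=True)

        # G-Rex launch: same call site, but the model runs on a remote cluster
        # while output streams back to the local working directory.
        subprocess.run(["GRexRun", "-np", "40", "./nemo.exe"], check=True)

        # Illustrative post-processing prior to resubmission.
        subprocess.run(["./postprocess.sh", str(year)], check=True)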

Relevance: 80.00%

Abstract:

In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources, such as data sets and analytical tools, are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables, in order to predict past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks in remote nodes, but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows, in addition to bringing together heterogeneous resources and analytical tools.
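
A workflow of the kind described, chaining distributed data sources and analytical tools for bioclimatic modelling, might look like the following sketch. All endpoints and function names are hypothetical placeholders, not the actual BDW interfaces:

    import requests  # third-party HTTP client

    # Hypothetical wrappers around three distributed BDW-style resources.
    def fetch_occurrences(species):
        return requests.get("http://data.example.org/occurrences",
                            params={"species": species}).json()

    def fetch_climate_layers(variables):
        return requests.get("http://climate.example.org/layers",
                            params={"vars": ",".join(variables)}).json()

    def run_bioclim_model(occurrences, layers):
        return requests.post("http://tools.example.org/bioclim",
                             json={"points": occurrences, "layers": layers}).json()

    # The workflow's job is moving data between otherwise incompatible units.
    occ = fetch_occurrences("Quercus robur")
    layers = fetch_climate_layers(["tmin", "tmax", "precip"])
    prediction = run_bioclim_model(occ, layers)   # predicted distribution map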

Relevance: 80.00%

Abstract:

Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors because it enables easy development, increased richness in functionality, pluggability, and flexibility of deployment. Whilst they do not currently support all WSRP functionality, open-source portal frameworks could in future use WSRP Consumers to access remote portlets found via a WSRP Producer registry service. This implies that we need a central registry for the remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, registry, and client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for expressing remote portlets is defined and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright (c) 2007 John Wiley & Sons, Ltd.
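
As an illustration of the data-structure idea, a registry entry for a remote portlet would need to carry the Producer endpoint plus portlet metadata such as supported modes and window states. The sketch below invents field names and a toy registry for illustration; the real UDDI API expresses this through businessService and tModel structures:

    from dataclasses import dataclass, field

    @dataclass
    class RemotePortletRecord:
        """Hypothetical registry entry for a WSRP remote portlet."""
        portlet_handle: str            # Producer-assigned portlet identifier
        producer_wsdl: str             # URL of the WSRP Producer's WSDL
        title: str
        supported_modes: list = field(default_factory=lambda: ["view"])
        supported_window_states: list = field(default_factory=lambda: ["normal"])

    registry = []                      # toy stand-in for a UDDI registry

    def publish(record):
        registry.append(record)

    def search(keyword):
        return [r for r in registry if keyword.lower() in r.title.lower()]

    publish(RemotePortletRecord("weather-01",
                                "http://producer.example.org/wsrp?wsdl",
                                "Weather Portlet",
                                ["view", "edit"], ["normal", "maximized"]))
    print(search("weather"))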

Relevance: 80.00%

Abstract:

In recent years, the life-event approach has been widely used by governments all over the world for designing and providing web services to citizens through their e-government portals. Despite the wide usage of this approach, it remains a challenge to use it to design e-government portals that automatically provide personalised services to citizens. We propose a conceptual framework for e-government service provision based on the life-event approach and the use of a citizen profile to capture citizen needs, since the process of finding Web services in a government-to-citizen (G2C) system involves understanding the citizens' needs and demands, selecting the relevant services, and delivering services that match the requirements. The proposed framework, which incorporates the citizen profile, is based on three components that complement each other: anticipatory life events, non-anticipatory life events and recurring services.
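
The three-component idea can be made concrete as a matching step between a citizen profile and a catalogue of life events. A minimal sketch, in which the profile fields, event conditions and service names are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class CitizenProfile:
        age: int
        employed: bool
        children: int

    # Hypothetical catalogue: each life event pairs a predicate over the
    # profile with the services it triggers.
    LIFE_EVENTS = [
        ("anticipatory: retirement",   lambda p: p.age >= 64,     ["pension application"]),
        ("non-anticipatory: job loss", lambda p: not p.employed,  ["unemployment benefit"]),
        ("recurring: school year",     lambda p: p.children > 0,  ["school enrolment"]),
    ]

    def personalised_services(profile):
        """Select services whose life-event condition matches the profile."""
        return [s for name, cond, services in LIFE_EVENTS
                if cond(profile) for s in services]

    print(personalised_services(CitizenProfile(age=66, employed=False, children=0)))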

Relevance: 80.00%

Abstract:

Service discovery is a critical task in service-oriented architectures such as the Grid and Web Services. In this paper, we study a semantics-enabled service registry, GRIMOIRES, from a performance perspective. GRIMOIRES is designed to be the registry for myGrid and the OMII software distribution. We study the scalability of GRIMOIRES against the amount of information that has been published into it. The methodology we use and the data we present are helpful for researchers to understand the performance characteristics of the registry and, more generally, of semantics-enabled service discovery. Based on this experimentation, we claim that GRIMOIRES is an efficient semantics-aware service discovery engine.
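
A scalability study of this kind typically measures discovery latency as a function of registry size. A hedged sketch of such a measurement loop, where the publish and discover functions are toy stand-ins rather than the GRIMOIRES API:

    import time

    def publish_service(registry, i):
        registry.append({"name": f"service-{i}"})          # stand-in publish call

    def discover(registry, name):
        return [s for s in registry if s["name"] == name]  # stand-in query

    registry = []
    for size in (1000, 2000, 4000, 8000):
        while len(registry) < size:
            publish_service(registry, len(registry))
        t0 = time.perf_counter()
        discover(registry, "service-42")
        print(size, time.perf_counter() - t0)   # query latency vs. registry size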

Relevance: 80.00%

Abstract:

We take a broad view that ultimately Grid- or Web-services must be located via personalised, semantic-rich discovery processes. We argue that such processes must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. Examples of such metadata are reliability metrics, quality of service data, or semantic service description markup. This paper presents UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. We also discuss the use of a rich, graph-based RDF query language for syntactic queries on this data. Finally, we analyse the performance of each of these contributions in our implementation.
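
The metadata-plus-query idea is easy to picture: third-party metadata (here a reliability metric) is attached to a service entry as RDF triples and then retrieved with a graph query. The sketch below uses the rdflib library and SPARQL as a modern stand-in; the vocabulary namespace is invented, and the paper's own query language is only described as graph-based:

    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/uddi-mt/")    # hypothetical vocabulary
    g = Graph()

    # Third-party metadata attached to a service entry: a reliability metric.
    svc = URIRef("http://example.org/services/weather")
    g.add((svc, EX.reliability, Literal(0.97)))
    g.add((svc, EX.ratedBy, Literal("user42")))

    # Graph query over the metadata store: find highly reliable services.
    results = g.query("""
        PREFIX ex: <http://example.org/uddi-mt/>
        SELECT ?s ?r WHERE { ?s ex:reliability ?r . FILTER(?r > 0.9) }
    """)
    for row in results:
        print(row.s, row.r)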

Relevance: 80.00%

Abstract:

Recent work has begun exploring the characterization and utilization of provenance in systems based on the Service Oriented Architecture (such as Web Services and Grid based environments). One of the salient issues related to the use of provenance within any given system is its security. In a broad sense, security requirements arise within any data archival and retrieval system; however, provenance presents unique requirements of its own. These requirements additionally depend on the architectural and environmental context in which a provenance system operates. We seek to analyze the security considerations pertaining to a Service Oriented Architecture based provenance system. Towards this end, we describe the components of such a system and illustrate the security considerations that arise within it. Concurrently, we outline possible approaches to address them.

Relevance: 80.00%

Abstract:

Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what those initiating the process intended to occur? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures, and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding the responsibility for and success of a process, and evaluate the approach with a simulated healthcare example.
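
The responsibility question reduces to a traversal over linked provenance documentation: starting from an effect, follow causal edges back through service invocations and delegations to the initiating agent. A minimal sketch over a toy provenance graph, whose encoding and record names are invented for illustration:

    # Toy provenance graph: each record says what caused an item and which
    # actor performed that step (encoding invented for illustration).
    PROVENANCE = {
        "result":   {"caused_by": "analysis", "actor": "analysis-service"},
        "analysis": {"caused_by": "request",  "actor": "workflow-engine"},
        "request":  {"caused_by": None,       "actor": "dr-jones"},  # initiator
    }

    def ultimately_responsible(item):
        """Walk causal edges back to the agent who initiated the process."""
        record = PROVENANCE[item]
        while record["caused_by"] is not None:
            item = record["caused_by"]
            record = PROVENANCE[item]
        return record["actor"]

    print(ultimately_responsible("result"))   # -> dr-jones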

Relevance: 80.00%

Abstract:

Dynamic composition of services provides the ability to build complex distributed applications at run time by combining existing services, thus coping with a large variety of complex requirements that cannot be met by individual services alone. However, with the increasing number of available services that differ in granularity (the amount of functionality provided) and qualities, selecting the best combination of services becomes very complex. In response, this paper addresses the challenges of service selection and makes a twofold contribution. First, a rich representation of compositional planning knowledge is provided, allowing the expression of multiple decompositions of tasks at arbitrary levels of granularity. Second, two distinct search-space reduction techniques are introduced; applying them prior to service selection significantly improves selection performance in terms of execution time, as demonstrated by experimental results.
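
The effect of search-space reduction is easy to see on a toy selection problem: prune candidate services that another candidate strictly dominates on every quality dimension before enumerating combinations. The sketch below uses a generic dominance filter as the pruning rule, which is an illustrative stand-in rather than the paper's specific techniques:

    from itertools import product

    # Candidate services per task: (name, cost, latency). Lower is better.
    CANDIDATES = {
        "task1": [("a", 3, 5), ("b", 2, 9), ("c", 4, 6)],   # c dominated by a
        "task2": [("d", 1, 4), ("e", 2, 8)],                # e dominated by d
    }

    def prune_dominated(services):
        """Drop any service that another service strictly dominates."""
        def dominates(o, s):
            return o[1] <= s[1] and o[2] <= s[2] and (o[1] < s[1] or o[2] < s[2])
        return [s for s in services
                if not any(dominates(o, s) for o in services)]

    pruned = {task: prune_dominated(c) for task, c in CANDIDATES.items()}

    # Enumerate the remaining combinations and pick the cheapest total cost.
    best = min(product(*pruned.values()),
               key=lambda combo: sum(s[1] for s in combo))
    print(best)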

Relevance: 80.00%

Abstract:

HydroShare is an online, collaborative system being developed for the open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop, or perform analyses in a distributed computing environment that may include grid, cloud or high-performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about, and collaboration around, hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system metadata from science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include the different data types used in the hydrology community, as well as models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model, and outline the roadmap for future development.
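
The separation of system metadata (common to every resource) from science metadata (specific to each resource type) might be modelled as below. The field names and resource type are illustrative assumptions, not the actual HydroShare schema:

    from dataclasses import dataclass, field

    @dataclass
    class SystemMetadata:
        """Elements common to every resource (illustrative)."""
        resource_id: str
        owner: str
        created: str
        sharing: str = "private"

    @dataclass
    class TimeSeriesScienceMetadata:
        """Type-specific science metadata for one resource type (illustrative)."""
        site_code: str
        variable: str
        units: str

    @dataclass
    class Resource:
        system: SystemMetadata
        science: object             # varies with the resource type
        files: list = field(default_factory=list)

    r = Resource(SystemMetadata("res-001", "alice", "2014-03-01"),
                 TimeSeriesScienceMetadata("USGS-0204", "discharge", "m3/s"),
                 files=["discharge.csv"])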

Relevance: 80.00%

Abstract:

Existing distributed hydrologic models are complex and computationally demanding for use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, rigid input/output data structures and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by applying it to two watersheds in Indiana for multiple rainfall events.
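
The calibration step pairs a rainfall-runoff simulation with an optimizer that minimizes error against observations. Below is a much-simplified sketch in which random search stands in for the Shuffled Complex Evolution algorithm and the model and data are toy placeholders:

    import random

    observed = [0.0, 1.2, 3.4, 2.1, 0.8]      # toy observed runoff series
    rainfall = [0.0, 2.0, 5.0, 3.0, 1.0]      # toy rainfall forcing

    def simulate(k):
        """Toy runoff model: runoff = k * rainfall."""
        return [k * r for r in rainfall]

    def rmse(sim, obs):
        return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

    # Random search stands in for SCE, which instead evolves complexes of
    # points through the parameter space rather than drawing independently.
    best_k, best_err = None, float("inf")
    for _ in range(1000):
        k = random.uniform(0.0, 1.0)
        err = rmse(simulate(k), observed)
        if err < best_err:
            best_k, best_err = k, err
    print(best_k, best_err)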

Relevance: 80.00%

Abstract:

This report consolidates the research work, carried out between April 2005 and April 2006, on the state of adoption of, and the opportunities for using, new information technologies in government processes. The expansion of boundaries beyond the traditional limits of organizations brings a new and stronger demand for flexibility, one that enables the integrated handling of bodies with different constitutions, architectures and operational processes, not to mention different information systems. This is even more important in public organizations. On the other hand, one of the main negative characteristics of public agencies is the slowness and bureaucracy of their administrative and citizen-service processes. The lack of a modern technological vision, that is, of an Information Technology Master Plan (PDTI) oriented towards new solutions such as BPM, combined with the lack of integration between systems and processes, means that many government agencies are moving against the current of technological development. This research project is therefore of great interest, as it focuses on the possibilities and impacts of adopting the new process-oriented and web-services technologies (BPM, Business Process Management, and BPMS, Business Process Management Systems) in the government sector, which is largely lacking in integrated service solutions for citizens and businesses. These new technologies bring paradigms completely different from those adopted so far in the implementation of information systems and process automation. Despite the difficulties inherent in addressing a complex and new topic, all the more so in government bodies, we believe we have produced a thorough piece of work that meets the objectives established in the original plan, with the necessary adjustments to the direction and focus of the work. We also believe that this work establishes a relevant reference point in the body of knowledge concerned with improving government processes on the basis of new technologies. The planned and completed by-products, included in the volume of annexes to this report, comprise material already developed for one or two books on the subject, several articles, and various events held at EAESP around the project's theme, which provided opportunities for excellent exchanges of experience. This report, presented in an objective and concise manner and covering only the main aspects addressed, is complemented by extensive supplementary material delivered in a volume of annexes.