25 results for Grid Systems
in CentAUR: Central Archive University of Reading - UK
Abstract:
The performance benefit of Grid systems comes from several strategies, among which partitioning applications into parallel tasks is the most important. In most cases, however, the gain from partitioning is offset by synchronization overhead, mainly due to the high variability of the completion times of the different tasks, which in turn stems from the large heterogeneity of Grid nodes. For this reason, it is important to have models that capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed on a Grid. The proposed model improves on classical modelling techniques and highlights the impact of resource heterogeneity and network latency on application performance.
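As an illustrative aside (a back-of-the-envelope sketch, not the paper's queueing-network model), the synchronization effect can be seen from the expected value of the slowest of n parallel task times: even for homogeneous nodes with i.i.d. exponential task times of mean 1/μ, the barrier time grows with the degree of partitioning, and node heterogeneity amplifies it further.

```latex
% Sketch under the stated i.i.d. exponential assumption (not the paper's model):
% the fork-join barrier waits for the slowest of n parallel tasks, so
\[
  \mathbb{E}\!\left[\max(T_1,\dots,T_n)\right]
  \;=\; \frac{1}{\mu}\sum_{k=1}^{n}\frac{1}{k}
  \;\approx\; \frac{\ln n + \gamma_E}{\mu},
\]
% which already exceeds the mean task time 1/\mu by a factor that grows with n;
% heterogeneous node speeds (a different \mu per node) make the slowest-task
% penalty, and hence the synchronization overhead, larger still.
```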
Abstract:
The performance benefit of grid systems comes from several strategies, among which partitioning applications into parallel tasks is the most important. In most cases, however, the gain from partitioning is offset by synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which in turn is accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical queueing network performance models able to accurately analyze grid architectures and applications. Based on the model results, we introduce a new allocation policy that combines task partitioning with task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
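The effect of such a replication policy can be illustrated with a toy simulation (a minimal sketch, not the allocation policy or models from the paper): each task is submitted to r heterogeneous nodes and only the earliest finisher counts, which shortens the fork-join makespan at the cost of extra resources.

```python
# Toy illustration of task replication on heterogeneous nodes (a sketch,
# not the paper's allocation policy): each task completes when its fastest
# replica completes, and the application completes when its slowest task does.
import random

def mean_makespan(n_tasks, replicas, heterogeneity=5.0, n_runs=5_000):
    total = 0.0
    for _ in range(n_runs):
        task_finish = []
        for _ in range(n_tasks):
            # Each replica runs on a node whose mean service time is drawn
            # uniformly from [1, heterogeneity]; task time is exponential.
            copies = [random.expovariate(1.0 / random.uniform(1.0, heterogeneity))
                      for _ in range(replicas)]
            task_finish.append(min(copies))   # earliest replica wins
        total += max(task_finish)             # barrier waits for the slowest task
    return total / n_runs

if __name__ == "__main__":
    for r in (1, 2, 3):
        print(f"{r} replica(s) per task: mean makespan = {mean_makespan(16, r):.2f}")
```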
Abstract:
Traditional resource management has had as its main objective the optimisation of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid Markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The SORMA project aims to allow resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilisation by maximising revenue for resource providers and minimising the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that desired Quality of Service levels meet the expectations of market participants. This paper explains the proposed use of an Economically Enhanced Resource Manager (EERM) for resource provisioning based on economic models. In particular, it describes techniques used by the EERM to support revenue maximisation across multiple Service Level Agreements.
Abstract:
Smart grid research has tended to be compartmentalised, with notable contributions from economics, electrical engineering and science and technology studies. However, there is an acknowledged and growing need for an integrated systems approach to the evaluation of smart grid initiatives. The capacity to simulate and explore smart grid possibilities on various scales is key to such an integrated approach but existing models – even if multidisciplinary – tend to have a limited focus. This paper describes an innovative and flexible framework that has been developed to facilitate the simulation of various smart grid scenarios and the interconnected social, technical and economic networks from a complex systems perspective. The architecture is described and related to realised examples of its use, both to model the electricity system as it is today and to model futures that have been envisioned in the literature. Potential future applications of the framework are explored, along with its utility as an analytic and decision support tool for smart grid stakeholders.
Abstract:
G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific workflow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al. 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
Abstract:
Twenty-five small soil-filled Perspex boxes arranged in a square, with dwarf sunflowers growing in them, were used to study micro-scale advection. Hydrological heterogeneity was introduced by applying two different amounts of irrigation water (low-irrigation, L, versus high-irrigation, H). The nine central boxes (4 H, 4 L and 1 bare box) were precision weighing lysimeters, yielding diurnal measurements of evaporation. After the onset of soil water stress, a large difference in latent heat flux (up to 4-fold) was observed between the lysimeters of the H and L treatments, mainly caused by large differences between H and L stomatal conductance values. This resulted in micro-advection, causing H soil-sunflower systems to evaporate well above the equilibrium latent heat flux. The occurrence of micro-advective enhancement was reflected in large values of the Priestley-Taylor constant (often larger than 2.0) and generally negative values of sensible heat flux for the H treatment. (c) 2005 Elsevier B.V. All rights reserved.
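For context, the Priestley-Taylor coefficient mentioned above is conventionally defined relative to the equilibrium latent heat flux (standard background, not a result of this study):

```latex
% Standard Priestley-Taylor relation (background definition, not from the study):
% \Delta = slope of the saturation vapour pressure curve, \gamma = psychrometric
% constant, R_n = net radiation, G = soil heat flux, \lambda E = latent heat flux.
\[
  \lambda E_{\mathrm{eq}} \;=\; \frac{\Delta}{\Delta+\gamma}\,(R_n - G),
  \qquad
  \alpha_{\mathrm{PT}} \;=\; \frac{\lambda E}{\lambda E_{\mathrm{eq}}},
\]
% values around 1.26 are typical for well-watered surfaces without advection,
% so \alpha_{PT} > 2 indicates evaporation well above the equilibrium rate,
% consistent with the micro-advective enhancement reported here.
```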
Abstract:
Distributed computing paradigms for sharing resources such as Clouds, Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. While there are some success stories such as PlanetLab, OneLab, BOINC, BitTorrent, and SETI@home, widespread use of these technologies for business applications has not yet been achieved. In a business environment, mechanisms are needed to provide incentives for potential users to participate in such networks. These mechanisms may range from simple non-monetary access rights and monetary payments to specific policies for sharing. Although a few models for such a framework have been discussed (in the general area of a "Grid Economy"), none of these models has yet been realised in practice. This book attempts to fill this gap by discussing the reasons for the limited take-up and exploring incentive mechanisms for resource sharing in distributed systems. The purpose of this book is to identify research challenges in successfully using and deploying resource sharing strategies in open-source and commercial distributed systems.
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, as their cost depends strongly on the size of the matrix. The computational complexity of stochastic Monte Carlo methods, by contrast, depends only on the number of chains and the length of those chains. The computing power needed by these inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, how the method can be implemented on the Grid, and how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
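A minimal sketch of the underlying technique may help (illustrative only; it is not the authors' load-balanced Grid implementation): for A = I - C with the spectral radius of C below one, rows of A^{-1} can be estimated by random walks that sample the Neumann series A^{-1} = Σ_k C^k.

```python
# Minimal Monte Carlo matrix inversion sketch (illustrative; not the paper's
# load-balanced Grid implementation). For A = I - C with spectral radius of C
# below 1, A^{-1} = sum_k C^k; each random walk samples one path per power C^k.
import numpy as np

def mc_inverse_row(C, i, n_chains=20_000, chain_len=30, seed=0):
    """Estimate row i of (I - C)^{-1} with uniformly sampled random walks."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    row = np.zeros(n)
    for _ in range(n_chains):
        state, weight = i, 1.0
        row[state] += weight                  # k = 0 (identity) term
        for _ in range(chain_len):
            nxt = rng.integers(n)             # uniform transition, p = 1/n
            weight *= C[state, nxt] * n       # importance weight C[state, nxt] / p
            state = nxt
            row[state] += weight              # contribution to the C^k term
    return row / n_chains

if __name__ == "__main__":
    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    D = np.diag(np.diag(A))
    C = np.eye(2) - np.linalg.solve(D, A)     # A = D(I - C), rho(C) < 1 here
    inv_IC = np.vstack([mc_inverse_row(C, i) for i in range(2)])
    print("Monte Carlo estimate:\n", inv_IC @ np.linalg.inv(D))   # A^{-1} = (I-C)^{-1} D^{-1}
    print("Exact inverse:\n", np.linalg.inv(A))
```

Because the chains are independent, the work partitions trivially across Grid nodes, which is what makes the approach attractive for distributed execution.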
Abstract:
The transport of stratospheric air into the troposphere within deep convection was investigated using the Met Office Unified Model version 6.1. Three cases were simulated in which convective systems formed over the UK in the summer of 2005. For each of these three cases, simulations were performed on a grid having 4 km horizontal grid spacing in which the convection was parameterized and on a grid having 1 km horizontal grid spacing, which permitted explicit representation of the largest energy-containing scales of deep convection. Cross-tropopause transport was diagnosed using passive tracers that were initialized above the dynamically defined tropopause (2 potential vorticity unit surface) with a mixing ratio of 1. Although the synoptic-scale environment and triggering mechanisms varied between the cases, the total simulated transport was similar in all three cases. The total stratosphere-to-troposphere transport over the lifetime of the convective systems ranged from 25 to 100 kg/m2 across the simulated convective systems and resolutions, which corresponds to ∼5–20% of the total mass located within a stratospheric column extending 2 km above the tropopause. In all simulations, the transport into the lower troposphere (defined as below 3.5 km elevation) accounted for ∼1% of the total transport across the tropopause. In the 4 km runs most of the transport was due to parameterized convection, whereas in the 1 km runs the transport was due to explicitly resolved convection. The largest difference between the simulations with different resolutions occurred in the one case of midlevel convection considered, in which the total transport in the 1 km grid spacing simulation with explicit convection was 4 times that in the 4 km grid spacing simulation with parameterized convection. Although the total cross-tropopause transport was similar, stratospheric tracer was deposited more deeply to near-surface elevations in the convection-parameterizing simulations than in convection-permitting simulations.
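As a rough consistency check (using an assumed near-tropopause air density, not a figure from the paper), the quoted percentages can be reproduced from the transported mass:

```latex
% Back-of-the-envelope check with an assumed density (not from the paper):
% near the extratropical tropopause, air density is roughly 0.3-0.4 kg m^{-3},
% so a 2 km stratospheric column holds approximately
\[
  M \;\approx\; \bar{\rho}\,\Delta z
  \;\approx\; (0.3\text{--}0.4\ \mathrm{kg\,m^{-3}}) \times 2000\ \mathrm{m}
  \;\approx\; 600\text{--}800\ \mathrm{kg\,m^{-2}},
\]
% so a cross-tropopause transport of 25-100 kg m^{-2} is of order 3-17% of the
% column mass, consistent with the ~5-20% range quoted above.
```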
Abstract:
The service-oriented approach to performing distributed scientific research is potentially very powerful but is not yet widely used in many scientific fields. This is partly due to the technical difficulties involved in creating services and workflows and the inefficiency of many workflow systems with regard to handling large datasets. We present the Styx Grid Service, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs. Styx Grid Services are very easy to create and use and can be composed into powerful workflows with simple shell scripts or more sophisticated graphical tools. An important feature of the system is that data can be streamed directly from service to service, significantly increasing the efficiency of workflows that use large data volumes. The status and progress of Styx Grid Services can be monitored asynchronously using a mechanism that places very few demands on firewalls. We show how Styx Grid Services can interoperate with Web Services and WS-Resources using suitable adapters.
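The service-to-service streaming idea can be illustrated generically (a sketch of the concept only; this is not the Styx Grid Service API): when one wrapped command-line program feeds another, the intermediate data can be piped directly rather than staged on disk.

```python
# Generic illustration of streaming between wrapped command-line programs
# (not the Styx Grid Service API): the producer's output is piped straight
# into the consumer, so intermediate data never accumulates on disk.
import subprocess

def run_pipeline(producer_cmd, consumer_cmd):
    """Run two command-line programs with the producer's stdout streamed
    directly into the consumer's stdin, as a workflow engine might do."""
    producer = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE)
    consumer = subprocess.Popen(consumer_cmd, stdin=producer.stdout,
                                stdout=subprocess.PIPE)
    producer.stdout.close()       # let the producer see a closed pipe if the consumer exits
    output, _ = consumer.communicate()
    producer.wait()
    return output

if __name__ == "__main__":
    # Hypothetical filter-style commands; any producer/consumer pair works.
    print(run_pipeline(["seq", "1", "5"], ["sort", "-r"]).decode())
```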
Abstract:
A multi-layered architecture of self-organizing neural networks is being developed as part of an intelligent alarm processor to analyse a stream of power grid fault messages and provide a suggested diagnosis of the fault location. Feedback concerning the accuracy of the diagnosis is provided by an object-oriented grid simulator which acts as an external supervisor to the learning system. The utilization of artificial neural networks within this environment should result in a powerful generic alarm processor which will not require extensive training by a human expert to produce accurate results.
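A minimal self-organizing map update loop gives the flavour of the learning mechanism named here (a generic sketch, not the paper's multi-layered architecture or its alarm-message encoding):

```python
# Minimal self-organizing map (SOM) sketch: a generic illustration of the
# "self-organizing neural network" idea, not the paper's multi-layered
# alarm-processing architecture or its fault-message encoding.
import numpy as np

def train_som(data, grid_size=5, epochs=50, lr=0.5, sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_size, grid_size, data.shape[1]))
    # Map-unit coordinates, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(grid_size), np.arange(grid_size),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)        # shrink learning rate and radius over time
        for x in rng.permutation(data):
            # Best-matching unit: the map node whose weights are closest to x.
            bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                                   (grid_size, grid_size))
            # Pull the BMU and its neighbours towards the input vector.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * h[..., None] * (x - weights)
    return weights

if __name__ == "__main__":
    toy_alarm_features = np.random.default_rng(1).random((200, 4))  # toy data only
    print("trained map shape:", train_som(toy_alarm_features).shape)
```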